
Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? One in which AI fuels an arms race between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory, and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already…


Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

A team of scientists from Ames National Laboratory has developed a new machine learning model for discovering critical-element-free permanent magnet materials. The model predicts the Curie temperature of new material combinations. It is an important first step in using artificial intelligence to predict new permanent magnet materials. This model adds to the team’s recently developed capability for discovering thermodynamically stable rare earth materials. The work is published in Chemistry of Materials.

High-performance magnets are essential for technologies such as electric vehicles and magnetic refrigeration. These magnets contain critical materials such as cobalt and rare earth elements like neodymium and dysprosium. These materials are in high demand but have limited availability. This situation is motivating researchers to find ways to design new magnetic materials with reduced critical-material content.

Machine learning (ML) is a form of artificial intelligence. It is driven by computer algorithms that use data and trial and error to continually improve their predictions. The team used experimental data on Curie temperatures and theoretical modeling to train the ML algorithm. The Curie temperature is the maximum temperature at which a material maintains its magnetism.
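To make the approach concrete, here is a minimal sketch of that kind of supervised-learning pipeline. It is an illustration of the technique only, not the Ames team’s model: the composition features, the synthetic Curie temperatures, and the choice of a random-forest regressor are all assumptions made for this example.

```python
# Minimal sketch of a Curie-temperature regression pipeline.
# NOT the Ames team's actual code: features, data values, and the
# random-forest model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a candidate compound described by
# simple composition-derived features (e.g., fractions of three elements);
# the target is its measured Curie temperature in kelvin (synthetic here).
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 300 + 700 * X[:, 0] + 200 * X[:, 1] + rng.normal(0, 25, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                 # learn Tc from composition
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))

# Screen new (hypothetical) candidate compositions, highest predicted Tc first.
candidates = rng.uniform(0.0, 1.0, size=(5, 3))
for comp, tc in zip(candidates, model.predict(candidates)):
    print(f"composition={np.round(comp, 2)}  predicted Tc (K): {tc:.0f}")
```

The appeal of the approach is that, once trained, such a model can cheaply score many hypothetical compositions before any are synthesized, flagging the few candidates worth making and measuring.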

Using a standardized assessment, researchers in the UK compared the performance of a commercially available artificial intelligence (AI) algorithm with human readers of screening mammograms. Their findings were published in Radiology.

Mammographic screening does not detect every cancer. False-positive interpretations can result in women without cancer undergoing unnecessary imaging and biopsy. To improve the sensitivity and specificity of screening mammography, one solution is to have two readers interpret every mammogram.

According to the researchers, double reading increases cancer detection rates by 6% to 15% while keeping recall rates low. However, this strategy is labor-intensive and difficult to achieve during reader shortages.
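A toy calculation shows why a second reader raises detection. Assume, purely for illustration, that the two readers err independently and that a case is recalled if either reader flags it; the sensitivity and specificity figures below are invented, not taken from the study.

```python
# Toy model of double reading under an independence assumption.
# Numbers are illustrative, not from the Radiology study.
sens_single, spec_single = 0.85, 0.95

# A cancer is missed only if BOTH readers miss it, so sensitivity rises:
sens_double = 1 - (1 - sens_single) ** 2      # 1 - 0.15^2 = 0.9775

# But a false positive from EITHER reader triggers a recall,
# so specificity falls unless disagreements are arbitrated:
spec_double = spec_single ** 2                # 0.95^2 = 0.9025

print(f"single reader : sensitivity={sens_single:.3f}, specificity={spec_single:.3f}")
print(f"double reading: sensitivity={sens_double:.3f}, specificity={spec_double:.3f}")
```

The same toy model suggests why double reading is usually paired with arbitration or consensus review of discordant reads: without that step, the second reader would tend to drive recall rates up rather than keep them low.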



Remarkably, the advent of artificial intelligence (AI) and machine learning-based computers in the next century may alter how we relate to ourselves.

The digital ecosystem’s networked computer components, which are made possible by machine learning and artificial intelligence, will have a significant impact on practically every sector of the economy. These integrated AI and computing capabilities could pave the way for new frontiers in fields as diverse as genetic engineering, augmented reality, robotics, renewable energy, big data, and more.

Artificial intelligence (AI) has been helping humans in IT security operations since the 2010s, analyzing massive amounts of data quickly to detect the signals of malicious behavior. With enterprise cloud environments producing terabytes of data to be analyzed, threat detection at the cloud scale depends on AI. But can that AI be trusted? Or will hidden bias lead to missed threats and data breaches?

Bias can create risks in AI systems used for cloud security. There are steps humans can take to mitigate this hidden threat, but first, it’s helpful to understand what types of bias exist and where they come from.
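One concrete example is sampling bias. The sketch below is entirely hypothetical: it trains a simple logistic-regression detector on synthetic “log” data in which one threat family is nearly absent from training, and shows how the detector can look accurate overall while missing exactly that family.

```python
# Hypothetical illustration of sampling bias in a threat detector.
# All data is synthetic; no real security tooling is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_events(n_benign, n_threat_a, n_threat_b):
    """Synthetic feature vectors for benign traffic and two threat families."""
    benign   = rng.normal(0.0, 1.0, size=(n_benign, 4))
    threat_a = rng.normal(2.0, 1.0, size=(n_threat_a, 4))               # well represented
    threat_b = rng.normal([-2, 2, -2, 2], 1.0, size=(n_threat_b, 4))    # underrepresented
    X = np.vstack([benign, threat_a, threat_b])
    y = np.array([0] * n_benign + [1] * (n_threat_a + n_threat_b))
    return X, y, n_benign, n_threat_a

# Biased training set: threat family B is nearly absent.
X_train, y_train, *_ = make_events(2000, 500, 10)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A balanced evaluation set reveals the blind spot.
X_test, y_test, nb, na = make_events(1000, 250, 250)
pred = clf.predict(X_test)
print(f"overall accuracy                   : {(pred == y_test).mean():.2f}")
print(f"recall on well-represented threats : {pred[nb:nb + na].mean():.2f}")
print(f"recall on underrepresented threats : {pred[nb + na:].mean():.2f}")
```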

Today marks nine months since ChatGPT was released, and six weeks since we announced our AI Start seed fund. Based on our conversations with scores of inception- and early-stage AI founders, and hundreds of leading CXOs, I can attest that we are definitely in exuberant times.

In the span of less than a year, AI investments have become de rigueur in any portfolio, new private company unicorns are being created every week, and the idea that AI will drive a stock market rebound is taking root. People outside of tech are becoming familiar with new vocabulary.

Large language models. ChatGPT. Deep-learning algorithms. Neural networks. Reasoning engines. Inference. Prompt engineering. CoPilots. Leading strategists and thinkers are sharing their views on how AI will transform business, how it will unlock potential, and how it will contribute to human flourishing.

Most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could one day become sentient no longer seems like science fiction.

How could we tell if machine brains one day gained sentience? The answer may lie in our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses (for example, during a chat), matching its internal workings to neuroscientific theories of human consciousness could provide a more objective ruler.

In what can only bode poorly for our species’ survival during the inevitable robot uprisings, an AI system has once again outperformed the people who trained it. This time, researchers at the University of Zurich, in partnership with Intel, pitted their “Swift” AI piloting system against a trio of world champion drone racers, none of whom could best its top time.

Swift is the culmination of years of AI and machine learning research at the University of Zurich. In 2021, the team pitted an earlier iteration of the flight control algorithm, which relied on a series of external cameras to track the drone’s position in space in real time, against amateur human pilots, all of whom were easily overmatched in every lap of every race. That result was a milestone in its own right: previously, self-guided drones had depended on simplified physics models to continually calculate their optimum trajectory, which severely limited their top speed.

This week’s result is another milestone, not just because the AI bested people whose job is to fly drones fast, but because it did so without the cumbersome external camera arrays of its predecessor. The Swift system “reacts in real time to the data collected by an onboard camera, like the one used by human racers,” a UZH release reads. It uses an integrated inertial measurement unit to track acceleration and speed, while an onboard neural network localizes its position in space using data from the front-facing camera. All of that data is fed into a central control unit, itself a deep neural network, which crunches through the numbers and devises the fastest path around the track.
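Stripped of the engineering specifics, the data flow described in that release can be sketched schematically. The sketch below is a toy stand-in, not the actual Swift architecture: the layer sizes, the random (untrained) weights, and the simple concatenation used to fuse vision and inertial data are all assumptions made for illustration.

```python
# Schematic sketch of the described pipeline: camera -> pose estimate,
# fused with IMU data -> control network -> motor commands.
# NOT the UZH Swift code; all shapes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def mlp(sizes):
    """Random-weight multilayer perceptron standing in for a trained network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(layers, x):
    for W, b in layers[:-1]:
        x = np.tanh(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

perception = mlp([64, 32, 6])   # camera features -> estimated pose (x, y, z, roll, pitch, yaw)
controller = mlp([12, 64, 4])   # fused state     -> 4 motor thrust commands

camera_features = rng.normal(size=64)   # stand-in for onboard-camera observations
imu = rng.normal(size=6)                # stand-in for linear accel + angular rates

pose_estimate = forward(perception, camera_features)   # localize position in space
state = np.concatenate([pose_estimate, imu])           # fuse vision with inertial data
thrusts = forward(controller, state)                   # central control network output
print("motor commands:", np.round(thrusts, 3))
```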

Artificial Intelligence has transformed how we live, work, and interact with technology. From voice assistants and chatbots to recommendation algorithms and self-driving cars, AI has become an integral part of our daily lives, a shift that accelerated sharply after the release of ChatGPT.

However, with the increasing prevalence of AI, a new phenomenon called “AI fatigue” has emerged. This fatigue stems from the overwhelming presence of AI in various aspects of our lives, raising concerns about privacy, autonomy, and even the displacement of human workers.

AI fatigue refers to the weariness, frustration, or anxiety experienced by individuals due to the overreliance on AI technologies. While AI offers numerous benefits, such as increased efficiency, improved decision-making, and enhanced user experiences, it also presents certain drawbacks. Excessive dependence on AI can lead to a loss of human agency, diminished trust in technology, and a feeling of disconnection from the decision-making process.