
Q&A: Will agentic AI replace human scientists?

An emerging type of artificial intelligence, known as “agentic” AI, seems to do everything that biomedical scientists do—and often, does it faster. This next-generation technology can interpret experimental data, report the results and make decisions on its own. But is agentic AI smart enough to replace actual scientists?

Jason Moore, Ph.D., chair of the Department of Computational Biomedicine at Cedars-Sinai, discusses the pluses and minuses of agentic AI. Moore is corresponding author of a new paper, published in Nature Biotechnology, that examines where agentic AI is today and where it is headed.

Quantum-informed AI improves long-term turbulence forecasts while using far less memory

An AI model informed by calculations from a quantum computer can better predict the behavior of a complex physical system over the long term than current best models that use only conventional computers, according to a new study led by UCL (University College London) researchers. The findings, published in the journal Science Advances, could improve models predicting how liquids and gases move and interact (fluid dynamics), used in areas ranging from climate science to transport, medicine and energy generation.

The researchers say the improved performance is linked to a quantum device’s ability to hold a large amount of information more efficiently. That is because instead of bits that are switched on or off, 1 or 0, as in a classical computer, the quantum computer’s qubits can be 1, 0, or any state in between, and each qubit can affect any of the other qubits—meaning a few qubits can generate a vast number of possible states.
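The exponential growth described above is easy to make concrete: describing an n-qubit state requires 2^n complex amplitudes, while a classical n-bit register occupies exactly one of its 2^n states at a time. A minimal sketch (illustrative only, not tied to the study's model):

```python
def quantum_amplitude_count(n_qubits: int) -> int:
    # An n-qubit register is described by 2**n complex amplitudes,
    # one per classical basis state, all present at once.
    return 2 ** n_qubits

# A classical n-bit register sits in exactly ONE of its 2**n states,
# so it needs only n bits of storage; the quantum description needs
# 2**n numbers, which grows astronomically fast.
for n in (1, 10, 50):
    print(f"{n} qubits -> {quantum_amplitude_count(n):,} amplitudes")
```

At 50 qubits the state description already exceeds a quadrillion amplitudes, which is why a few qubits can encode information that quickly overwhelms classical memory.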

Senior author Professor Peter Coveney, based in UCL Chemistry and the Advanced Research Computing Center at UCL, said, "To make predictions about complex systems, we can either run a full simulation, which might take weeks—often too long to be useful—or we can use an AI model, which is quicker but more unreliable over longer time scales."

Slime-like artificial muscle reshapes on command, heals after damage and turns one robot into many

Breaking away from conventional robots that perform only predefined functions once fabricated, researchers have developed a next-generation artificial muscle that can change its shape in real time, recover from damage, and even be reused. The study is published in Science Advances.

The researchers developed a new type of dielectric elastomer actuator (DEA) using a phase-transitional ferrofluid (PTF) that behaves as a solid at room temperature but becomes fluid-like and highly flexible when exposed to external stimuli such as heat or magnetic fields.

Dielectric elastomer actuators (DEAs) are soft transducers that convert electrical energy into mechanical motion and are often referred to as artificial muscles because of their ability to move rapidly and precisely like human muscles.

Reddit analysis uncovers unreported GLP-1 side effects

A large-scale analysis of Reddit data identifies symptoms—including fatigue and menstrual changes—frequently missed in clinical trials. These real-world patient insights highlight a broader spectrum of physiological responses to semaglutide and tirzepatide.


Reproductive symptoms, temperature-related complaints, and psychiatric symptoms were among the side effects of GLP-1 drugs reported in an analysis of Reddit posts.

“Clinical trials tell us a lot, but they’re conducted under very controlled conditions with carefully selected participants,” Neil Sehgal, a doctoral student at the University of Pennsylvania School of Engineering and Applied Science, Philadelphia, told Medscape Medical News. “At the same time, millions of patients are using Reddit every day and sharing very detailed accounts of their experiences with these medications.”

To investigate signals that the medical community “might be missing or underappreciating,” Sehgal and colleagues developed an AI-based system to automatically extract and categorize symptoms from Reddit posts at scale.
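The study's actual system is AI-based, but the extract-and-categorize pipeline it describes can be illustrated with a deliberately simplified keyword version. The symptom lexicon and category names below are hypothetical stand-ins, not the paper's taxonomy:

```python
import re
from collections import Counter

# Hypothetical symptom lexicon mapping keywords to categories; the real
# study used an AI model rather than a fixed keyword list.
SYMPTOM_LEXICON = {
    "fatigue": "general",
    "tired": "general",
    "nausea": "gastrointestinal",
    "period": "reproductive",
    "menstrual": "reproductive",
    "cold": "temperature-related",
    "anxiety": "psychiatric",
}

def categorize_post(text: str) -> set:
    """Return the set of symptom categories mentioned in one post."""
    words = re.findall(r"[a-z]+", text.lower())
    return {SYMPTOM_LEXICON[w] for w in words if w in SYMPTOM_LEXICON}

def tally(posts: list) -> Counter:
    """Aggregate category counts across many posts."""
    counts = Counter()
    for post in posts:
        counts.update(categorize_post(post))
    return counts

posts = [
    "Week 3 on semaglutide and the fatigue is real, also feeling cold all the time",
    "My period has been irregular since starting tirzepatide",
]
print(tally(posts))
```

Run over millions of posts, even a crude tally like this surfaces category-level signals (reproductive, temperature-related, psychiatric) of the kind the researchers report finding.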

Science Still Can’t Explain Consciousness…Here’s Why


REFERENCES
Quantum Mind: Is quantum physics responsib…
When AI Becomes Self-Aware. Is Machine Con…
Is consciousness God? And where is it loca…

CHAPTERS
0:00 Why does matter become aware?
0:47 What is consciousness (scientific perspective)?
1:52 Where is consciousness? (Scientific perspective)
4:40 Is quantum mechanics at the root of consciousness?
6:45 The reductionist approach
7:17

AI turns plain-language prompts into lab-ready recipes for novel materials

Advances in artificial intelligence promise to help chemical engineers discover complex new materials. These materials could be used for reactions such as turning carbon dioxide into fuel, but technical barriers have so far limited the adoption of AI in catalysis research. Researchers at the University of Rochester are now harnessing the benefits of large language models (LLMs) similar to ChatGPT, Claude, or Gemini to empower more researchers to use AI to discover new materials and accelerate experiment workflows.

In a study published in ACS Central Science, a team led by Marc Porosoff, an associate professor in the Department of Chemical and Sustainability Engineering, and Andrew White, visiting associate professor and the cofounder and chief technology officer of Edison Scientific, describes an AI-based method they developed that lets users describe the materials they want to create in natural-language prompts and then suggests optimal experimental procedures to produce them. As the users run the experiments, they feed the results back into the AI model and continue iterating until they reach their goal.
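The prompt-experiment-feedback loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `suggest_procedure` replaces the LLM call with a toy heuristic, and `run_experiment` replaces the real lab step with a made-up yield curve, so the loop is runnable end to end:

```python
def suggest_procedure(goal: str, history: list) -> dict:
    """Stand-in for the LLM step: propose the next experiment given the
    goal and all (procedure, result) pairs seen so far."""
    prior_best = max((h["result"] for h in history), default=0.0)
    # Toy heuristic: sweep the temperature upward each iteration.
    temp = 300 + 25 * len(history)
    return {"catalyst": "Fe/K (assumed)", "temperature_K": temp,
            "prior_best": prior_best}

def run_experiment(procedure: dict) -> float:
    """Stand-in for the real lab step: a fictional yield curve that
    peaks at 400 K."""
    t = procedure["temperature_K"]
    return max(0.0, 1.0 - abs(t - 400) / 200)

history = []
for _ in range(6):  # iterate until the experiment budget is spent
    proc = suggest_procedure("CO2-to-fuel catalyst", history)
    result = run_experiment(proc)
    history.append({"procedure": proc, "result": result})

best = max(history, key=lambda h: h["result"])
print(best["procedure"]["temperature_K"], round(best["result"], 2))  # 400 1.0
```

The point of the design is that the model's suggestions improve as the history grows, so the user converges on a good procedure without exhaustively searching the design space.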

“We’re able to leverage the pre-trained knowledge of large language models and well-established statistical methods for materials discovery to help us as researchers navigate large experimental design spaces more efficiently,” says Porosoff.

AI and the mysteries of reality

Does AI have the potential to uncover the mysteries of reality, or does it lack the capacity for genuine discovery?

With the 2024 Nobel Prizes for physics and chemistry both awarded for AI-related science, claims that AI will soon make novel scientific breakthroughs on its own are growing louder.

Start-ups are already attempting to create “The AI Scientist,” and researchers at Imperial College argue AI will “usher in a new age of discovery to rival the golden age of the scientific method.” But critics argue the scientific capability of AI remains unknown.

Join computer scientist Roman Yampolskiy, philosopher Steve Fuller, and co-curator of “AI: More than Human” Suzanne Livingston to debate what AI can and can’t do for science.



Automated AI system flags qubit drift and instability, speeding quantum calibration

NPL, the UK’s National Metrology Institute (NMI), plays a central role in providing accurate and trusted measurement across emerging technologies. Within its Institute for Quantum Standards and Technology (IQST), a team is developing methods to characterize and calibrate quantum devices, particularly quantum computers.

As part of a new collaboration, NPL is integrating NVIDIA’s Ising AI tools into its quantum measurement systems to automate key calibration tasks. This approach will help address one of the major challenges facing quantum computing: the need to manage large numbers of qubits, each affected by multiple sources of noise and instability.

Qubit performance is commonly assessed using metrics such as the qubit relaxation time, usually referred to as T1, which measures the timescale over which a qubit decays from its excited state to the ground state. These values can fluctuate or drift due to interactions with the environment, requiring frequent checks to ensure reliable operation. Traditionally, such checks are carried out manually by experts.
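A T1 check of the kind described above amounts to fitting an exponential decay, P(t) = exp(-t/T1), to excited-state survival measurements. Here is a minimal sketch of that fit using a log-linear regression; it is an illustration of the metric, not NPL's actual calibration pipeline:

```python
import numpy as np

def estimate_t1(times_us: np.ndarray, excited_prob: np.ndarray) -> float:
    """Estimate T1 (in the same units as `times_us`) by a log-linear fit
    to the decay model P(t) = exp(-t / T1)."""
    # ln P(t) = -t / T1, so the slope of ln P vs. t is -1 / T1.
    slope, _ = np.polyfit(times_us, np.log(excited_prob), 1)
    return -1.0 / slope

# Synthetic check: a noiseless decay with a known T1 of 80 microseconds.
t = np.linspace(1, 200, 40)
p = np.exp(-t / 80.0)
print(round(estimate_t1(t, p), 1))  # recovers 80.0
```

Automating exactly this kind of fit, and deciding when drift in the fitted T1 warrants recalibration, is the sort of expert task the NPL-NVIDIA collaboration aims to hand to AI tooling.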

Monkeys navigate a virtual forest with thought alone, pushing brain-computer interfaces beyond the lab

As part of a study testing a new type of implanted brain-computer interface (BCI), three rhesus monkeys controlled movements in a virtual reality (VR) world using only brain signals. The study, published in Science Advances, demonstrates a major step toward practical BCIs that can work outside of lab conditions.

BCIs allow direct communication between the brain and external devices, like a computer or robotic arm. This ability is thought to be extremely valuable for helping people suffering from paralysis to move objects, communicate or complete other tasks. However, there is a gap between lab-based BCI demonstrations and practical, flexible systems for real-world usage.

Previous research has explored intracortical BCIs—those implanted directly into the brain—in monkeys and humans, enabling them to control computer cursors, robotic or prosthetic arms and wheelchairs. Others have restored communication and the function of paralyzed limbs. However, real-world navigation requires adapting to unpredictable events and complex environments, which previous BCIs have struggled with, often requiring overt movement or only working in overly simple settings.
