Medical artificial intelligence (AI) is often described as a way to make patient care safer by helping clinicians manage information. A new study by the Icahn School of Medicine at Mount Sinai and collaborators confronts a critical vulnerability: when a medical lie enters the system, can AI pass it on as if it were true?
Analyzing more than a million prompts across nine leading language models, the researchers found that these systems can repeat false medical claims when they appear in realistic hospital notes or social-media health discussions.
The findings, published in The Lancet Digital Health, suggest that current safeguards do not reliably distinguish fact from fabrication once a claim is wrapped in familiar clinical or social-media language. The paper is titled “Mapping LLM Susceptibility to Medical Misinformation Across Clinical Notes and Social Media.”
The interview features an expert in robotics and artificial intelligence who offers a stark warning: “I’m known for predicting that later this century there will be a terrible war, killing billions of people over the issue of species dominance.”
From a philosophical perspective, the idea of AI ending humanity challenges our assumptions about evolution, survival, and the nature of progress. Throughout history, humans have viewed themselves as sitting at the top of the food chain, but advanced AI raises the possibility that we are merely a stepping stone.
When you look at text, you subconsciously track how much space remains on each line. If you’re writing “Happy Birthday” and “Birthday” won’t fit, your brain automatically moves it to the next line. You don’t calculate this—you *see* it. But AI models don’t have eyes. They receive only sequences of numbers (tokens) and must somehow develop a sense of visual space from scratch.
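The paragraph above describes a computation the model must learn implicitly from tokens alone. As a point of reference, here is what that same decision looks like when written out explicitly: a minimal greedy line-breaking sketch in Python (the function name and parameters are illustrative, not anything from the research):

```python
def wrap(text: str, width: int) -> list[str]:
    """Greedy fixed-width line breaking: keep a word on the current line
    only if it fits in the remaining space, otherwise start a new line."""
    lines: list[str] = []
    current = ""
    for word in text.split():
        # +1 accounts for the space inserted before a word on a non-empty line
        needed = len(word) if not current else len(current) + 1 + len(word)
        if needed <= width:
            current = word if not current else current + " " + word
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines
```

With a width of 10, `wrap("Happy Birthday", 10)` moves “Birthday” to a second line, exactly the judgment a human reader makes by sight and the model must make from token statistics.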
Inside your brain, “place cells” help you navigate physical space by firing when you’re in specific locations. Remarkably, Claude develops something strikingly similar. The researchers found that the model represents character counts using low-dimensional curved manifolds—mathematical shapes that are discretized by sparse feature families, much like how biological place cells divide space into discrete firing zones.
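By way of analogy only (this is a toy illustration, not the learned sparse features reported in the research), a place-cell-like code for a character count can be sketched as a bank of units, each tuned to a preferred count, where only units near the true count fire:

```python
import math

def place_cell_features(count: int, n_cells: int = 10,
                        max_count: int = 100, width: float = 8.0) -> list[float]:
    """Encode a character count as activations of 'place cell'-like units.
    Each unit has a preferred count (its 'firing field' center); activation
    falls off with distance, so the code is sparse: only nearby units fire."""
    centers = [i * max_count / (n_cells - 1) for i in range(n_cells)]
    return [math.exp(-((count - c) ** 2) / (2 * width ** 2)) for c in centers]
```

For a count of 22, the unit centered nearest 22 fires hardest while distant units stay near zero, mirroring how biological place cells carve continuous space into discrete firing zones.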
The researchers validated their findings through causal interventions—essentially “knocking out” specific neurons to see if the model’s counting ability broke in predictable ways. They even discovered visual illusions—carefully crafted character sequences that trick the model’s counting mechanism, much like optical illusions fool human vision.
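The logic of such an intervention can be shown with a toy counter, a schematic analogy rather than the researchers’ actual method: knock out the component that accumulates token lengths, and the downstream line-break decision should fail in a predictable direction.

```python
def count_chars(tokens: list[str], ablate_counter: bool = False) -> int:
    """Toy 'counting circuit': accumulate the length of each token.
    Setting ablate_counter=True simulates knocking the unit out, zeroing
    its contribution the way an activation ablation would."""
    total = 0
    for tok in tokens:
        contribution = 0 if ablate_counter else len(tok)
        total += contribution
    return total

def should_break(tokens: list[str], width: int, ablate_counter: bool = False) -> bool:
    """Downstream decision: break the line once the running count hits width."""
    return count_chars(tokens, ablate_counter) >= width
```

Intact, the circuit correctly breaks a 14-character sequence at width 10; with the counter ablated, the count collapses to zero and the break never fires, the kind of predictable failure a causal intervention is designed to elicit.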
Attention mechanisms are geometric engines: the “attention heads” that power modern AI don’t just connect related words; they perform sophisticated geometric transformations on internal representations.
What other “sensory” capabilities have models developed implicitly? Can AI develop senses we don’t have names for?
Language models can perceive visual properties of text despite receiving only sequences of tokens. We mechanistically investigate how Claude 3.5 Haiku accomplishes one such task: line breaking in fixed-width text. We find that character counts are represented on low-dimensional curved manifolds discretized by sparse feature families, analogous to biological place cells. Accurate predictions emerge from a sequence of geometric transformations: token lengths are accumulated into character-count manifolds, attention heads twist these manifolds to estimate distance to the line boundary, and the decision to break the line is enabled by arranging estimates orthogonally to create a linear decision boundary. We validate our findings through causal interventions and discover visual illusions: character sequences that hijack the counting mechanism.
Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.
But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.
Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.
When James Dyson built his 5,127th prototype of a bagless vacuum cleaner, he had no idea that the same relentless engineering philosophy would one day transform him into Britain’s largest farmer. Today, Dyson strawberry farming represents one of the most ambitious applications of high-tech innovation to agriculture ever attempted in the United Kingdom.
The numbers tell an extraordinary story. After spending five years and creating over five thousand prototypes to perfect a single vacuum cleaner design, Dyson has now invested £140 million into a farming operation spanning 36,000 acres across five English counties. At the heart of this agricultural empire sits a 26-acre glasshouse in Lincolnshire, home to 1.25 million strawberry plants and technology that has increased yields by 250% compared to traditional farming methods.
This isn’t farming as your grandparents would recognize it. Inside Dyson’s facility, massive 5.5-meter “Ferris wheel” structures rotate strawberry plants through optimal sunlight positions. Sixteen robotic arms delicately harvest ripe fruit using computer vision. UV-emitting robots patrol the aisles at night, destroying mould without chemicals. And all of it runs on renewable energy generated by an adjacent anaerobic digester.