
Kolmogorov-Arnold networks bridge AI and scientific discovery by increasing interpretability

AI has been applied successfully in many areas of science, advancing technologies like weather prediction and protein folding. However, its impact on more curiosity-driven scientific discovery has so far been limited. That may soon change, thanks to Kolmogorov-Arnold networks (KANs).

A recent study, published in the journal Physical Review X, details how this new kind of neural network architecture might help scientists discover and understand the physical world in a way that other AI can’t.

All-optical chip achieves 100-fold speed boost over top-tier NVIDIA chips

Scientists in China have unveiled a new AI chip called LightGen that is 100 times faster and 100 times more energy efficient than top-tier chips from NVIDIA, the world's leading supplier of AI chips. Instead of using electricity to move information, this new optical chip relies on light to perform complex generative tasks.

Traditional generative AI models, such as ChatGPT and Stable Diffusion, run on everyday silicon chips and require massive amounts of computing power and electricity, which can generate significant heat. For particularly complex tasks, these chips can struggle with the workload, resulting in slow processing times.

Feral AI gossip with the potential to spread damage and shame will become more frequent, researchers warn

“Feral” gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don’t just make things up—they generate and spread gossip, complete with negative evaluations and juicy rumors that can cause real-world harm, according to new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The research is published in the journal Ethics and Information Technology.

Helping AI agents search to get the best results out of large language models

Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence (AI) tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes.
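The translate-test-backtrack loop described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual system: `call_llm` is a hypothetical stand-in for a real LLM API call, and `run_tests` stands in for a real test harness.

```python
from typing import Optional

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; here we fake a
    # "translation" by extracting and uppercasing the source text.
    return prompt.rsplit("SOURCE:", 1)[-1].strip().upper()

def run_tests(translated: str) -> bool:
    # Hypothetical test harness: accept any non-empty translation.
    return bool(translated)

def translate_with_backtracking(source: str, max_attempts: int = 3) -> Optional[str]:
    """Translate one file, retrying with feedback when tests fail."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        prompt = f"Translate this file.{feedback}\nSOURCE:{source}"
        candidate = call_llm(prompt)
        if run_tests(candidate):
            return candidate
        # Backtrack: fold the failure into the next prompt so the
        # model can learn from the previous mistake.
        feedback = f" Attempt {attempt} failed its tests; try again."
    return None  # give up after max_attempts
```

An agent would run this per file over the whole codebase, only moving on once a file's translation passes its tests.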

New computer vision method links photos to floor plans with pixel-level accuracy

For people, matching what they see on the ground to a map is second nature. For computers, it has been a major challenge. A Cornell research team has introduced a new method that helps machines make these connections—an advance that could improve robotics, navigation systems, and 3D modeling.

The work, presented at the 2025 Conference on Neural Information Processing Systems and published on the arXiv preprint server, tackles a major weakness in today’s computer vision tools. Current systems perform well when comparing similar images, but they falter when the views differ dramatically, such as linking a street-level photo to a simple map or architectural drawing.

The new approach teaches machines to find pixel-level matches between a photo and a floor plan, even when the two look completely different. Kuan Wei Huang, a doctoral student in computer science, is the first author; the co-authors are Noah Snavely, a professor at Cornell Tech; Bharath Hariharan, an associate professor at the Cornell Ann S. Bowers College of Computing and Information Science; and Brandon Li, an undergraduate computer science student.

AI uncovers double-strangeness: A new double-Lambda hypernucleus

Researchers from the High Energy Nuclear Physics Laboratory at the RIKEN Pioneering Research Institute (PRI) in Japan and their international collaborators have made a discovery that bridges artificial intelligence and nuclear physics. By applying deep learning techniques to a vast amount of unexamined nuclear emulsion data from the J-PARC E07 experiment, the team identified, for the first time in 25 years, a new double-Lambda hypernucleus.

This marks the world’s first AI-assisted observation of such an exotic nucleus—an atomic nucleus containing two strange quarks. The finding, published in Nature Communications, represents a major advance in experimental nuclear physics and provides new insight into the composition of neutron star cores, one of the most extreme environments in the universe.

A new tool is revealing the invisible networks inside cancer

Spanish researchers have created a powerful new open-source tool that helps uncover the hidden genetic networks driving cancer. Called RNACOREX, the software can analyze thousands of molecular interactions at once, revealing how genes communicate inside tumors and how those signals relate to patient survival. Tested across 13 different cancer types using international data, the tool matches the predictive power of advanced AI systems—while offering something rare in modern analytics: clear, interpretable explanations that help scientists understand why tumors behave the way they do.

The World’s Strangest Computer Is Alive and It Blurs the Line Between Brains and Machines

At first glance, the idea sounds implausible: a computer made not of silicon, but of living brain cells. It’s the kind of concept that seems better suited to science fiction than to a laboratory bench. And yet, in a few research labs around the world, scientists are already experimenting with computers that incorporate living human neurons. Such computers are now being trained to perform complex tasks such as playing games and even driving robots.

These systems are built from brain organoids: tiny, lab-grown clusters of human neurons derived from stem cells. Though often nicknamed “mini-brains,” they are not thinking minds or conscious entities. Instead, they are simplified neural networks that can be interfaced with electronics, allowing researchers to study how living neurons process information when placed in a computational loop.

In fact, some researchers even claim that these tools are pushing the frontiers of medicine, along with those of computing. Dr. Ramon Velaquez, a neuroscientist from Arizona State University, is one such researcher.
