We humans tend to put our own intelligence on a pedestal. Our brains can do math, employ logic, explore abstractions, and think critically. But we can’t claim a monopoly on thought. Among a variety of nonhuman species known to display intelligent behavior, birds have been shown time and again to have advanced cognitive abilities. Ravens plan for the future, crows count and use tools, cockatoos open and pillage booby-trapped garbage cans, and chickadees keep track of tens of thousands of seeds cached across a landscape. Notably, birds achieve such feats with brains that look completely different from ours: They’re smaller and lack the highly organized structures that scientists associate with mammalian intelligence.
“A bird with a 10-gram brain is doing pretty much the same as a chimp with a 400-gram brain,” said Onur Güntürkün, who studies brain structures at Ruhr University Bochum in Germany. “How is it possible?”
Researchers have long debated the relationship between avian and mammalian intelligence. One possibility is that intelligence in vertebrates—animals with backbones, including mammals and birds—evolved once. In that case, both groups would have inherited the complex neural pathways that support cognition from a common ancestor: a lizardlike creature that lived 320 million years ago, when Earth’s continents were squished into one landmass. The other possibility is that the kinds of neural circuits that support vertebrate intelligence evolved independently in birds and mammals.
Computer simulations help materials scientists and biochemists study the motion of macromolecules, advancing the development of new drugs and sustainable materials. However, these simulations pose a challenge for even the most powerful supercomputers.
A University of Oregon graduate student has developed a new mathematical equation that significantly improves the accuracy of the simplified computer models used to study the motion and behavior of large molecules such as proteins and nucleic acids, as well as synthetic materials like plastics.
The breakthrough, published last month in Physical Review Letters, enhances researchers’ ability to investigate the motion of large molecules in complex biological processes, such as DNA replication. It could aid in understanding diseases linked to errors in such replication, potentially leading to new diagnostic and therapeutic strategies.
For years, quantum computing has been the tech world’s version of “almost there”. But now, engineers at MIT have pulled off something that might change the game. They’ve made a critical leap in quantum error correction, bringing us one step closer to reliable, real-world quantum computers.
In a traditional computer, everything runs on bits: zeroes and ones that flip on and off like tiny digital switches. Quantum computers, on the other hand, use qubits. These are bizarre little things that can be both 0 and 1 at the same time, thanks to a quantum property called superposition. They’re also capable of entanglement, meaning the state of one qubit is correlated with the state of another, even at a distance.
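The two properties just described can be sketched with plain linear algebra: a qubit is a unit vector of two complex amplitudes, and gates are matrices acting on it. A minimal illustration in NumPy (a toy state-vector calculation, not a simulation of any real quantum hardware):

```python
import numpy as np

# A qubit state |psi> = alpha|0> + beta|1> is a unit vector of two
# complex amplitudes. Measuring yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2 (the Born rule).
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- equal chance of measuring 0 or 1

# Entanglement: applying CNOT to (H|0>) x |0> gives a Bell state,
# which cannot be split into two independent single-qubit states.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(psi, ket0)
print(np.abs(bell) ** 2)  # [0.5 0. 0. 0.5] -- only 00 or 11 ever observed
```

The Bell state's outcome probabilities show the correlation in miniature: the two qubits are always measured as 00 or 11, never 01 or 10.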
All this weirdness gives quantum computers enormous potential power. They could solve problems in seconds that might take today’s fastest supercomputers years. Think of it like having thousands of parallel universes doing your math homework at once. But there’s a catch.
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.
Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
A team of researchers at Nagoya University has discovered something surprising. If you have two tiny vibrating elements, each one barely moving on its own, and you combine them in the right way, their combined vibration can be amplified dramatically—up to 100 million times.
The paper is published in the Chaos: An Interdisciplinary Journal of Nonlinear Science.
Their findings suggest that by relying on structural amplification rather than raw power, even small, simple devices can transmit clear signals over long distances, potentially transforming long-distance communications and remote medical devices.
Hardships in childhood could have lasting effects on the brain, new research shows, with adverse events such as family conflict and poverty potentially affecting cognitive function in kids for several years afterwards.
This study, led by a team from Brigham and Women’s Hospital in Massachusetts, looked specifically at white matter: the deeper tissue in the brain, made up of communication fibers ferrying information between neurons.
“We found that a range of adversities is associated with lower levels of fractional anisotropy (FA), a measure of white matter microstructure, throughout the whole brain, and that this is associated with lower performance on mathematics and language tasks later on,” write the researchers in their published paper.
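For readers unfamiliar with the measure, fractional anisotropy has a standard definition in diffusion-tensor imaging: it is computed from the three eigenvalues of the diffusion tensor at each voxel. Below is a minimal sketch of that textbook formula (not the study's own analysis pipeline):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues of the diffusion tensor.

    FA = sqrt(3/2) * sqrt(sum((li - lmean)^2)) / sqrt(sum(li^2))

    It ranges from 0 (water diffuses equally in all directions) to 1
    (diffusion confined to a single axis, as along tightly bundled
    white-matter fibers).
    """
    lam = np.array([l1, l2, l3], dtype=float)
    dev = lam - lam.mean()
    return float(np.sqrt(1.5 * np.sum(dev ** 2) / np.sum(lam ** 2)))

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0 (isotropic diffusion)
print(fractional_anisotropy(1.0, 0.0, 0.0))  # 1.0 (fully anisotropic)
```

Lower FA in white matter is commonly read as less coherent or less densely organized fiber microstructure, which is the sense in which the quoted finding uses it.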
The mathematics of graphs has helped reveal a principle that limits the strength of quantum correlations – and explains why physicists have never measured any stronger connections in some post-quantum realm
Solving one of the oldest algebra problems isn’t a bad claim to fame, and it’s a claim Norman Wildberger can now make: The mathematician has solved what are known as higher-degree polynomial equations, which have been puzzling experts for nearly 200 years.
Wildberger, from the University of New South Wales (UNSW) in Australia, worked with computer scientist Dean Rubine on a paper that details how these incredibly complex calculations could be worked out.
“This is a dramatic revision of a basic chapter in algebra,” says Wildberger. “Our solution reopens a previously closed book in mathematics history.”
A mathematician has solved a 200-year-old maths problem after figuring out a way to crack higher-degree polynomial equations without using radicals or irrational numbers.
The method developed by Norman Wildberger, PhD, an honorary professor at the School of Mathematics and Statistics at UNSW Sydney, solves one of algebra’s oldest challenges by finding a general solution to equations where the variable is raised to the fifth power or higher.
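For context on why degree five marks the boundary: the Abel–Ruffini theorem shows there is no general formula in radicals for quintics and above, so in practice such equations are usually handled numerically. The sketch below uses plain Newton's method (a standard numerical approach, not Wildberger and Rubine's series-based construction) on a classic quintic that has no solution in radicals:

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x -> x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# x^5 - x - 1 = 0: a quintic whose real root cannot be written in radicals.
f = lambda x: x**5 - x - 1
df = lambda x: 5 * x**4 - 1
root = newton_root(f, df, x0=1.0)
print(round(root, 6))  # 1.167304
```

Numerical iteration like this gives approximations to any desired precision; the novelty claimed in the paper is an exact general solution expressed through power series rather than radicals.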
Fermat’s Last Theorem inspired further work — mathematicians like Sophie Germain had previously contributed techniques (notably the “Sophie Germain trick” for special primes), and Dirichlet’s work continued the trend of applying novel number-theoretic tools.
Peter Gustav Lejeune Dirichlet (13 February 1805 – 5 May 1859) was a German mathematician. In number theory, he proved special cases of Fermat’s Last Theorem and created analytic number theory. In analysis, he advanced the theory of Fourier series and was one of the first to give the modern formal definition of a function. In mathematical physics, he studied potential theory, boundary-value problems, heat diffusion, and hydrodynamics.
Although his surname is Lejeune Dirichlet, he is commonly referred to by his mononym Dirichlet, in particular for results named after him.