
Seeing the Quantum Butterfly Effect

A combined experimental and theoretical study reveals the emergence of quantum chaos in a complex system, suggesting that it can be described with a universal theoretical framework.

Consider the following thought experiment: Take all the air molecules in a thunderstorm and evolve them backward in time for an hour, effectively rewinding a molecular movie. Then slightly perturb the velocity directions of a few molecules and evolve the system forward again to the current moment. Because such systems are chaotic, microscopic perturbations in the past will lead to dramatically different futures. This “butterfly effect” also occurs in quantum systems. To observe it, researchers measure a mathematical entity called the out-of-time-ordered correlator (OTOC). Loosely speaking, the OTOC measures how quickly a system “forgets” its initial state. Unfortunately, the OTOC is notoriously difficult to measure because it typically requires experimental protocols that implement an effective many-body time reversal.
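For reference, the standard definition of the OTOC involves two local operators W and V that commute at time zero; the following is the generic textbook form, not a formula specific to this experiment:

$$ C(t) = \big\langle\, [W(t), V]^\dagger\, [W(t), V]\, \big\rangle, \qquad W(t) = e^{iHt}\, W\, e^{-iHt}. $$

Growth of C(t) quantifies how an initially local perturbation V spreads until it no longer commutes with the later operator W(t), which is the quantum analogue of sensitivity to initial conditions.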

Leading AI models struggle to solve original math problems

Mathematics, like many other scientific endeavors, is increasingly using artificial intelligence. Of course, math is the backbone of AI, but mathematicians are also turning to these tools for tasks like literature searches and checking manuscripts for errors. But how well can AI perform when it comes to solving genuine, high-level research problems?

To date, there is no widely accepted, realistic methodology for assessing AI's ability to solve mathematics at this level. So a group of mathematicians decided to put the machines to the test, as they detail in a study available on the arXiv preprint server.

Previous attempts at testing AI have used math contest problems and questions already found in textbooks. What makes this study different is that the questions the programs faced were drawn from the mathematicians' own research. They had never been posted or published online, which means the AI could not have memorized answers from its training data.

Seeing the whole from a part: Revealing hidden turbulent structures from limited observations and equations

The irregular, swirling motion of fluids we call turbulence can be found everywhere, from stirring in a teacup to currents in the planetary atmosphere. This phenomenon is governed by the Navier-Stokes equations—a set of mathematical equations that describe how fluids move.
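For reference, the incompressible form of the Navier-Stokes equations for the velocity field u and pressure p is

$$ \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0, $$

where ρ is the fluid density and ν its kinematic viscosity.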

Despite being known for nearly two centuries, these equations still pose major challenges when it comes to making predictions. Turbulent flows are inherently chaotic, and tiny uncertainties can grow quickly over time.

In real-world situations, scientists can only observe part of a turbulent flow, usually its largest and slowest-moving features. Thus, a long-standing question in fluid physics has been whether these partial observations are enough to reconstruct the full motion of the fluid.
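One common way to formalize this question is "nudging" (continuous data assimilation): run a simulation of the governing equations and continuously relax it toward whatever coarse observations are available. The toy sketch below applies the idea to the chaotic Lorenz-63 system, observing only the x variable; it is a minimal illustration of that general approach, not the method used in the study.

```python
import numpy as np

# Lorenz-63: a standard chaotic toy system standing in for a turbulent flow.
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt, obs=None, gain=0.0):
    # Forward-Euler step with an optional nudging term -gain * (x - x_obs)
    # applied only to the observed component (x).
    d = lorenz_rhs(state)
    if obs is not None:
        d[0] -= gain * (state[0] - obs)
    return state + dt * d

dt, n_steps = 0.001, 200_000
truth = np.array([1.0, 1.0, 1.0])        # the "real" flow
assim = np.array([-5.0, 5.0, 25.0])      # our estimate, started far away

for _ in range(n_steps):
    truth = step(truth, dt)
    # Only the observed component of the truth (x) is ever used here.
    assim = step(assim, dt, obs=truth[0], gain=50.0)

# The unobserved components (y, z) are recovered anyway.
print("error in unobserved variables:", np.abs(truth[1:] - assim[1:]))
```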

Mathematics for Computer Science

This course covers elementary discrete mathematics for computer science and engineering. It emphasizes mathematical definitions and proofs as well as applicable methods. Topics include formal logic notation, proof methods; induction, well-ordering; sets, relations; elementary graph theory; integer congruences; asymptotic notation and growth of functions; permutations and combinations, counting principles; discrete probability. Further selected topics may also be covered, such as recursive definition and structural induction; state machines and invariants; recurrences; generating functions.

When Models Manipulate Manifolds: The Geometry of a Counting Task, by Wes Gurnee and 6 other authors

When you look at text, you subconsciously track how much space remains on each line. If you’re writing “Happy Birthday” and “Birthday” won’t fit, your brain automatically moves it to the next line. You don’t calculate this—you *see* it. But AI models don’t have eyes. They receive only sequences of numbers (tokens) and must somehow develop a sense of visual space from scratch.

Inside your brain, “place cells” help you navigate physical space by firing when you’re in specific locations. Remarkably, Claude develops something strikingly similar. The researchers found that the model represents character counts using low-dimensional curved manifolds—mathematical shapes that are discretized by sparse feature families, much like how biological place cells divide space into discrete firing zones.
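As a cartoon of what such a representation could look like (the numbers and shapes here are made up for illustration and are not the features actually reported for Claude), one can picture character counts laid out along a curved one-dimensional manifold that is tiled by sparsely active, bump-shaped features:

```python
import numpy as np

n_counts, dim = 100, 32
counts = np.arange(n_counts + 1)

# A curved 1-D manifold: each character count maps to a point on a helix
# embedded in a higher-dimensional space.
theta = 2 * np.pi * counts / 25.0
manifold = np.zeros((n_counts + 1, dim))
manifold[:, 0] = np.cos(theta)
manifold[:, 1] = np.sin(theta)
manifold[:, 2] = counts / n_counts

# A sparse feature family: bump-shaped tuning curves that tile the count
# axis, analogous to place cells tiling physical space.
centers = np.linspace(0, n_counts, 12)
width = 6.0
activations = np.exp(-((counts[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
activations[activations < 0.1] = 0.0   # each count activates only a few features

print("features active at count 37:", np.nonzero(activations[37])[0])
```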

The researchers validated their findings through causal interventions—essentially “knocking out” specific neurons to see if the model’s counting ability broke in predictable ways. They even discovered visual illusions—carefully crafted character sequences that trick the model’s counting mechanism, much like optical illusions fool human vision.
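The logic of such an intervention can be shown on a toy network: zero out one hidden unit's activation during the forward pass and measure how the output shifts. The snippet below uses a PyTorch forward hook on a made-up two-layer model; it only illustrates the knockout technique, not the actual probes run on Claude.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in model; the real interventions target specific features
# inside a large language model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 8)

UNIT = 3  # hypothetical unit to knock out

def ablate(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's output.
    output = output.clone()
    output[:, UNIT] = 0.0
    return output

baseline = model(x)
handle = model[1].register_forward_hook(ablate)   # hook on the ReLU layer
ablated = model(x)
handle.remove()

print("output shift caused by ablating unit", UNIT, ":",
      (ablated - baseline).item())
```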

Attention mechanisms are geometric engines: the “attention heads” that power modern AI don’t just connect related words—they perform sophisticated geometric transformations on internal representations.

An open question raised by the work: What other “sensory” capabilities have models developed implicitly? Can AI develop senses we don’t have names for?


Language models can perceive visual properties of text despite receiving only sequences of tokens. We mechanistically investigate how Claude 3.5 Haiku accomplishes one such task: linebreaking in fixed-width text. We find that character counts are represented on low-dimensional curved manifolds discretized by sparse feature families, analogous to biological place cells. Accurate predictions emerge from a sequence of geometric transformations: token lengths are accumulated into character count manifolds, attention heads twist these manifolds to estimate distance to the line boundary, and the decision to break the line is enabled by arranging estimates orthogonally to create a linear decision boundary. We validate our findings through causal interventions and discover visual illusions—character sequences that hijack the counting mechanism.
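For concreteness, the behavioral task being probed can be written down as a few lines of ordinary code. The greedy fixed-width line breaker below is just a reference statement of the task, not the mechanism the paper identifies inside the model:

```python
def break_lines(tokens, width):
    """Greedy fixed-width linebreaking: accumulate token lengths and
    start a new line when the next token would overflow the width."""
    lines, current, used = [], [], 0
    for tok in tokens:
        need = len(tok) if not current else len(tok) + 1  # +1 for a space
        if current and used + need > width:
            lines.append(" ".join(current))
            current, used = [tok], len(tok)
        else:
            current.append(tok)
            used += need
    if current:
        lines.append(" ".join(current))
    return lines

print(break_lines("Happy Birthday to you and many happy returns".split(), 15))
# ['Happy Birthday', 'to you and many', 'happy returns']
```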

How scientists are trying to use AI to unlock the human mind

Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting (and paying) human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.

But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.

Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.

Can Physics Use Inconsistent Mathematics?

Discussion with logician Graham Priest on the existence of true contradictions in reality.

What Ultimately Is There? Metaphysics and the Ruliad

Stephen Wolfram shares surprising new ideas and results from a scientific approach to metaphysics. He discusses time, spacetime, computational irreducibility, the significance of the observer, quantum mechanics and multiway systems, the ruliad, laws of nature, objective reality, existence, and mathematical reality.

Long-Sought Proof Tames Some of Math’s Unruliest Equations

The trajectory of a storm, the evolution of stock prices, the spread of disease — mathematicians can describe any phenomenon that changes in time or space using what are known as partial differential equations. But there’s a problem: These “PDEs” are often so complicated that it’s impossible to solve them directly.
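A simple example is the heat equation, which relates how quickly the temperature u(x, t) at a point changes in time to how unevenly it is distributed in space:

$$ \frac{\partial u}{\partial t} = \kappa\, \frac{\partial^{2} u}{\partial x^{2}}, $$

where κ is the material's thermal diffusivity. The equations at issue in the new proof are far less tame than this one, but they share the same basic structure: a statement tying rates of change in time and space together.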

Mathematicians instead rely on a clever workaround. They might not know how to compute the exact solution to a given equation, but they can try to show that this solution must be “regular,” or well-behaved in a certain sense — that its values won’t suddenly jump in a physically impossible way, for instance. If a solution is regular, mathematicians can use a variety of tools to approximate it, gaining a better understanding of the phenomenon they want to study.

But many of the PDEs that describe realistic situations have remained out of reach. Mathematicians haven’t been able to show that their solutions are regular. In particular, some of these out-of-reach equations belong to a special class of PDEs that researchers spent a century developing a theory of — a theory that no one could get to work for this one subclass. They’d hit a wall.
