What If The Universe Is Math?

In his essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, the physicist Eugene Wigner wrote that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious”. The remark was inspired by the observation that so many aspects of the physical world can be described and predicted by mathematical equations to incredible precision, especially quantum phenomena. Quantum phenomena have no subjective qualities and have questionable physicality. They seem to be completely describable by numbers alone, their behavior precisely defined by equations. In a sense, the quantum world is made of math. So does that mean the universe is made of math too? If you believe the Mathematical Universe Hypothesis, then yes. And so are you.

AI tackles one of math’s most brutal problems: Inverse PDEs

Penn Engineers have developed a new way to use AI to solve inverse partial differential equations (PDEs), a particularly challenging class of mathematical problems with broad implications for understanding the natural world.

The advance, which the researchers call “Mollifier Layers,” could benefit fields as varied as genetics and weather forecasting, because inverse PDEs help scientists work backward from observable patterns to infer the hidden dynamics that produced them.

“Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell,” says Vivek Shenoy, Eduardo D. Glandt President’s Distinguished Professor in Materials Science and Engineering (MSE) and senior author of a study published in Transactions on Machine Learning Research (TMLR), which will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026). “You can see the effects clearly, but the real challenge is inferring the hidden cause.”
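
To make the flavor of such problems concrete, here is a minimal sketch of a toy inverse PDE problem; it is illustrative only and is not the researchers' Mollifier Layers method. We observe the diffused “ripples” produced by a one-dimensional heat equation and scan backward over candidate diffusion coefficients to find the hidden one that best explains the observation. All names and parameter values are invented for illustration.

```python
# Toy inverse PDE: recover a hidden diffusion coefficient kappa in
# u_t = kappa * u_xx from noisy observations of the solution later in time.
import numpy as np

def heat_forward(kappa, u0, dx, dt, steps):
    """Explicit finite-difference solve of u_t = kappa * u_xx."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic "observed ripples": forward-solve with the true kappa, add noise.
x = np.linspace(0, 1, 101)
dx, dt, steps = x[1] - x[0], 1e-5, 2000
u0 = np.exp(-200 * (x - 0.5) ** 2)          # the "pebble": an initial spike
true_kappa = 0.7
observed = heat_forward(true_kappa, u0, dx, dt, steps)
observed += np.random.normal(0, 1e-3, observed.shape)

# Work backward: scan candidate kappas and keep the best fit to the data.
candidates = np.linspace(0.1, 1.5, 141)
misfits = [np.sum((heat_forward(k, u0, dx, dt, steps) - observed) ** 2)
           for k in candidates]
print("recovered kappa ~", candidates[int(np.argmin(misfits))])
```

In real inverse PDEs the observations are far more indirect and the unknown is often an entire spatially varying field rather than a single scalar, which is what makes the general problem so much harder than this one-parameter scan.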

A new way to understand the evolution of spacetime dynamics

The concept of spacetime, central to Einstein’s theory of general relativity, has been widely studied by physicists worldwide. Spacetime is described mathematically as a four-dimensional (4D) continuum in which physical events occur, merging three-dimensional (3D) space with one-dimensional (1D) time.

This 4D continuum evolves continuously, following complex and intricate patterns governed by Einstein’s field equations, the mathematical equations that describe how matter and energy shape spacetime. While various past theoretical studies have explored the evolution of spacetime, identifying patterns that persist throughout that evolution has so far proved challenging.
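
For reference, the field equations the article refers to take their standard compact form, with spacetime curvature on the left and the matter-energy content on the right:

```latex
% Einstein's field equations: the Einstein tensor G_{\mu\nu} (curvature)
% is sourced by the stress-energy tensor T_{\mu\nu} (matter and energy);
% \Lambda is the cosmological constant.
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu}
```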

Researchers at Adolfo Ibáñez University in Chile and Columbia University set out to explore the evolution of spacetime using ideas rooted in nonlinear electrodynamics, an area of physics that studies the behavior of electric and magnetic fields in complex materials.

Quantum-informed machine learning for predicting spatiotemporal chaos with practical quantum advantage

Ultimately, QIML shows that we don’t need a fully fault-tolerant quantum computer to see results. By using quantum processors to learn the complex “rules” of chaos, we can give classical computers the boost they need to make reliable, long-term predictions about the most turbulent environments in the natural world.


Modeling high-dimensional dynamical systems remains one of the most persistent challenges in computational science. Partial differential equations (PDEs) provide the mathematical backbone for describing a wide range of nonlinear, spatiotemporal processes across scientific and engineering domains (1–3). However, high-dimensional systems are notoriously sensitive to initial conditions and even to the floating-point arithmetic used to compute them (4–7), making it highly challenging to extract stable, predictive models from data. Modern machine learning (ML) techniques often struggle in this regime: While they may fit short-term trajectories, they fail to learn the invariant statistical properties that govern long-term system behavior. These challenges are compounded in high-dimensional settings, where data are highly nonlinear and contain complex multiscale spatiotemporal correlations.
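
That sensitivity is easy to demonstrate on a classic low-dimensional example. The sketch below is a textbook illustration, not code from this study: it integrates two copies of the Lorenz system whose initial conditions differ by one part in a billion, and after a few dozen time units the trajectories bear no resemblance to each other.

```python
# Sensitivity to initial conditions: two Lorenz trajectories starting
# 1e-9 apart diverge to order-one separation within tens of time units.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz equations (demo accuracy only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # tiny perturbation of one coordinate
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t={step * 0.01:5.1f}  separation={np.linalg.norm(a - b):.3e}")
```

In high-dimensional PDE systems the same exponential error growth occurs along many directions at once, which is why fitting short-term trajectories says little about long-term statistics.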

ML has seen transformative success in domains such as large language models (8, 9), computer vision (10, 11), and weather forecasting (12–15), and it is increasingly being adopted in scientific disciplines under the umbrella of scientific ML (16). In fluid mechanics, in particular, ML has been used to model complex flow phenomena, including wall modeling (17, 18), subgrid-scale turbulence (19, 20), and direct flow field generation (21, 22). Physics-informed neural networks (23, 24) attempt to inject domain knowledge into the learning process, yet even these models struggle with the long-term stability and generalization that high-dimensional dynamical systems demand. To address this, generative models such as generative adversarial networks (25) and operator-learning architectures such as DeepONet (26) and Fourier neural operators (FNO) (27) have been proposed. While neural operators offer discretization invariance and strong representational power for PDE-based systems, they still suffer from error accumulation and prediction divergence over long horizons, particularly in turbulent and other chaotic regimes (28, 29). Recent work, such as DySLIM (30), enhances stability by leveraging invariant statistical measures. However, these methods depend on estimating such measures from trajectory samples, which can be computationally intensive and inaccurate for chaotic systems, especially in high-dimensional cases.

These limitations have prompted exploration into alternative computational paradigms. Quantum machine learning (QML) has emerged as a candidate due to its ability to represent and manipulate high-dimensional probability distributions in Hilbert space (31). Quantum circuits can exploit entanglement and interference to express rich, nonlocal statistical dependencies using fewer parameters than their classical counterparts, which makes them well suited for capturing invariant measures in high-dimensional dynamical systems, where long-range correlations and multimodal distributions frequently arise (32). QML and quantum-inspired ML have already demonstrated potential in fields such as quantum chemistry (33, 34), combinatorial optimization (35, 36), and generative modeling (37, 38). However, the field is constrained on two fronts: Fully quantum approaches are limited by the noise and scalability of noisy intermediate-scale quantum (NISQ) hardware (39), while quantum-inspired algorithms, being classical simulations, cannot natively leverage crucial quantum effects such as entanglement to efficiently represent the complex, nonlocal correlations found in such systems. These challenges limit the standalone utility of QML in scientific applications today.

Instead, hybrid quantum-classical models provide a promising compromise, in which quantum submodules work together with classical learning pipelines to improve expressivity, data efficiency, and physical fidelity. In quantum chemistry, this hybrid paradigm has proven feasible, notably through quantum mechanical/molecular mechanical coupling (40, 41), where classical force fields are augmented with quantum corrections. Within such frameworks, techniques such as quantum-selected configuration interaction (42) have been used to enhance accuracy while keeping the quantum resource requirements tractable.

In the broader landscape of quantum computational fluid dynamics, progress has been made toward developing full quantum solvers for nonlinear PDEs. Recent works by Liu et al. (43) and Sanavio et al. (44, 45) have successfully applied Carleman linearization to the lattice Boltzmann equation, offering a promising pathway for simulating fluid flows at moderate Reynolds numbers. These approaches, typically using algorithms such as Harrow-Hassidim-Lloyd (HHL) (46), promise exponential speedups but generally necessitate deep circuits and fault-tolerant hardware.

Quantum-enhanced machine learning (QEML) combines the representational richness of quantum models with the scalability of classical learning. By leveraging uniquely quantum properties such as superposition and entanglement, QEML can explore richer feature spaces and capture complex correlations that are challenging for purely classical models. Recent successes in quantum-enhanced drug discovery (37), where hybrid quantum-classical generative models have produced experimentally validated candidates rivaling state-of-the-art classical methods, demonstrate the practical potential of QEML even before full quantum advantage is achieved. Despite these strengths, practical barriers remain. QEML pipelines require repeated quantum-classical communication during training and rely on costly quantum data-embedding and measurement steps, which slow computation and limit accessibility across research institutions.
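
As a rough picture of how such a hybrid pipeline is wired, here is a minimal sketch written against the open-source PennyLane library; the circuit layout, sizes, and linear readout are illustrative assumptions, not the QIML architecture described above. It also shows where the costly embedding and measurement steps sit in the loop.

```python
# Minimal hybrid quantum-classical sketch (illustrative; assumes PennyLane).
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)  # classical simulator backend

@qml.qnode(dev)
def quantum_features(x, weights):
    # Quantum data embedding: classical inputs become rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Trainable entangling layers contribute nonlocal correlations.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # Measurement: one Pauli-Z expectation per qubit becomes a feature.
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Classical side: an ordinary linear readout on the quantum features.
rng = np.random.default_rng(0)
weights = rng.uniform(0, 2 * np.pi, size=(2, n_qubits))  # 2 entangling layers
w_out, b_out = rng.normal(size=n_qubits), 0.0

x = np.array([0.1, 0.5, -0.3, 0.8])              # one classical input sample
features = np.array(quantum_features(x, weights))
print("quantum features:", features)
print("hybrid prediction:", features @ w_out + b_out)
```

Each call to quantum_features is one quantum-classical round trip; training repeats it for every sample and every gradient evaluation, which is exactly the communication and embedding overhead flagged above.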

New study bridges the worlds of classical and quantum physics

When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.

Or so we’ve thought.

MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.

Researchers use statistics and math to understand how the brain works

Nothing rivals the human brain’s complexity. Its 86 billion neurons and 85 billion other cells make an estimated 100 trillion connections. If the brain were a computer, it would perform an exaflop, a billion-billion (10^18) calculations per second, while using the equivalent of only 20 watts of power. As impressive as the brain is, neurologists can’t fully explain how neurons work together.

To help find answers, researchers at the Institute for Neuroscience, Neurotechnology, and Society (INNS) at Georgia Tech are using math, data, and AI to unlock the secrets of thought. Together they are helping turn the brain’s raw electrical “noise” into real insights about how people think, move, and perceive the world.

Fair warning: Prepare your neurons for the complexity of this brain research ahead.

Large brain mapping dataset expands with new set of cognitive tasks

The Individual Brain Charting (IBC) project has released its fifth and largest update of high-resolution fMRI data, adding a new set of cognitive tasks to one of the most detailed brain-mapping datasets available today. The dataset, which is openly accessible through EBRAINS, is described in a new publication in Nature Scientific Data.

The new release expands the dataset with 18 tasks collected from 11 participants under tightly controlled, standardised conditions – bringing many of them close to 40 hours of scanned data each.

The IBC project launched in 2014 and was funded by the Human Brain Project. It aims to map how individual brains respond across a wide range of cognitive functions. By repeatedly scanning the same participants with diverse tasks – from mathematics and spatial navigation to emotion recognition, reward processing, and working memory – the team is building an exceptionally rich resource for studying individual variability in brain organization.
