
Edward Frenkel is a renowned mathematician, professor at the University of California, Berkeley, member of the American Academy of Arts and Sciences, and winner of the Hermann Weyl Prize in Mathematical Physics. In this episode, Edward Frenkel discusses the recent monumental proof in the Langlands program, explaining its significance and how it advances understanding in modern mathematics.

SPONSOR (THE ECONOMIST): As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe.

Edward Frenkel’s previous lecture on TOE [Part 1]: “Revolutionary Math Proof No One Could…”

Check out Edward Frenkel’s New York Times Bestselling book “Love and Math” which covers a lot of material in this video: https://amzn.to/4evbBkS

A black hole analog could tell us a thing or two about the elusive radiation theoretically emitted by the real thing.

Using a single-file chain of atoms to simulate the event horizon of a black hole, a team of physicists in 2022 observed the equivalent of what we call Hawking radiation – particles born from disturbances in the quantum fluctuations caused by the black hole’s break in spacetime.

This, they say, could help resolve the tension between two currently irreconcilable frameworks for describing the Universe: the general theory of relativity, which describes the behavior of gravity as a continuous field known as spacetime; and quantum mechanics, which describes the behavior of discrete particles using the mathematics of probability.
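For scale, the temperature of the radiation Hawking predicted for a real astrophysical black hole depends only on the hole’s mass (the atom-chain analog instead has an effective temperature set by the analog’s effective surface gravity):

    T_H = \frac{\hbar c^{3}}{8 \pi G M k_{B}}

For a solar-mass black hole this works out to roughly 60 nanokelvin, far colder than the cosmic microwave background and effectively unobservable in the sky, which is part of why laboratory analogs are attractive.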

Two of San Francisco’s leading players in artificial intelligence have challenged the public to come up with questions capable of testing the capabilities of large language models (LLMs) like Google Gemini and OpenAI’s o1. Scale AI, which specializes in preparing the vast tracts of data on which the LLMs are trained, teamed up with the Center for AI Safety (CAIS) to launch the initiative, Humanity’s Last Exam.

Featuring prizes of US$5,000 (£3,800) for those who come up with the top 50 questions selected for the test, Scale and CAIS say the goal is to test how close we are to achieving “expert-level AI systems” using the “largest, broadest coalition of experts in history.”

Why do this? The leading LLMs are already acing many established tests in intelligence, mathematics and law, but it’s hard to be sure how meaningful this is. In many cases, they may have pre-learned the answers due to the gargantuan quantities of data on which they are trained, including a significant percentage of everything on the internet.

The foundation of this simulation, as described by the team, is a well-known cosmological model that describes the universe as expanding uniformly over time. The researchers modeled how a quantum field, initially in a vacuum state (meaning no particles are present), responds to this expansion. As spacetime stretches, the field’s oscillations mix in a process that can create particles where none previously existed. This phenomenon is captured by a transformation that relates the field’s behavior before and after the universe expands, showing how vibrations at different momenta become entangled, leading to particle creation.
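In standard quantum-field-theory notation (our gloss; the article itself stays qualitative), this mixing means the annihilation operator of each late-time “out” mode is a combination of early-time “in” annihilation and creation operators, so the initial vacuum already contains late-time particles:

    a_{k}^{\text{out}} = \alpha_{k}\, a_{k}^{\text{in}} + \beta_{k}^{*}\, a_{-k}^{\text{in}\,\dagger}, \qquad |\alpha_{k}|^{2} - |\beta_{k}|^{2} = 1, \qquad \langle 0_{\text{in}} |\, \hat{N}_{k}^{\text{out}} \,| 0_{\text{in}} \rangle = |\beta_{k}|^{2}

Here |beta_k|^2 is the mean number of particles created in mode k, and the coefficients alpha_k and beta_k are what the transformation described next computes.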

To understand how many particles are generated, the researchers used a mathematical tool called the Bogoliubov transformation. This approach describes how the field’s vacuum state evolves into a state where particles can be detected. As the expansion rate increases, more particles are produced, aligning with predictions from quantum field theory. By running this simulation on IBM quantum computers, the team was able to estimate the number of particles created and observe how the quantum field behaves during the universe’s expansion, offering a new way to explore complex cosmological phenomena.
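The article does not give the specific scale factor the authors simulated, but the flavor of the calculation can be reproduced with the textbook exactly solvable toy model from quantum field theory in curved spacetime: a 1+1-dimensional scalar field in a spacetime whose conformal scale factor is a^2(eta) = A + B tanh(rho*eta), where rho sets how fast the expansion happens and the Bogoliubov coefficient has a closed form. The Python sketch below (the function name and parameter values are ours, purely illustrative) evaluates the mean particle number |beta_k|^2 and shows it growing with the expansion rate, as described above.

    import numpy as np

    def particle_number(k, m=1.0, A=1.0, B=0.5, rho=1.0):
        """Mean number of created particles |beta_k|^2 in the exactly solvable
        toy model a^2(eta) = A + B*tanh(rho*eta) for a scalar field of mass m
        and comoving momentum k (closed form from Birrell & Davies)."""
        omega_in = np.sqrt(k**2 + m**2 * (A - B))    # mode frequency before the expansion
        omega_out = np.sqrt(k**2 + m**2 * (A + B))   # mode frequency after the expansion
        omega_minus = 0.5 * (omega_out - omega_in)
        return (np.sinh(np.pi * omega_minus / rho) ** 2
                / (np.sinh(np.pi * omega_in / rho) * np.sinh(np.pi * omega_out / rho)))

    # Faster expansion (larger rho) -> more particles created in a given mode.
    for rho in (0.5, 1.0, 2.0, 4.0):
        print(f"rho = {rho:3.1f}:  N_k = {particle_number(k=1.0, rho=rho):.3e}")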

According to the team, the most notable result of the study was the ability to estimate the number of particles created as a function of the expansion rate of the universe. By running their quantum circuit on both simulators and IBM’s 127-qubit Eagle quantum processor, the researchers demonstrated that they could successfully simulate particle creation in a cosmological context. While the results were noisy—particularly for low expansion rates—the error mitigation techniques used helped bring the outcomes closer to theoretical predictions.
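The authors’ circuit is not reproduced in the article, so the following is only a rough sketch of the kind of experiment one runs first on a simulator (assuming Qiskit with the Aer backend): truncate one momentum mode to “no pair created / one pair created,” encode the pair-creation probability |beta_k|^2 / (1 + |beta_k|^2) in a single RY rotation, and estimate it from measurement counts. The two-level truncation, the example value of beta_sq, and the encoding are our simplifying assumptions, not the paper’s method; on real hardware the same circuit would be submitted to an IBM backend, and the counts would then need the kind of error mitigation mentioned above.

    import numpy as np
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    beta_sq = 0.04                       # illustrative |beta_k|^2 for one mode
    p_pair = beta_sq / (1.0 + beta_sq)   # probability of at least one created pair
    theta = 2.0 * np.arcsin(np.sqrt(p_pair))

    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)                      # rotate |0> so that P(measure 1) = p_pair
    qc.measure(0, 0)

    sim = AerSimulator()
    shots = 8192
    counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()
    print(f"target p_pair = {p_pair:.4f}, estimated = {counts.get('1', 0) / shots:.4f}")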

We tackle the hard problem of consciousness, taking the naturally selected, self-organising, embodied organism as our starting point. We provide a mathematical formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information according to valence and specific needs. Such interpretations imply behavioural policies which can only be differentiated from each other by the qualitative aspect of information processing. Selection pressures favour systems that can intervene in the world to achieve homeostatic and reproductive goals. Quality is a property arising in such systems to link cause to affect and so motivate real-world interventions. This produces a range of qualitative classifiers (interoceptive and exteroceptive) that motivate specific actions and determine priorities and preferences.

I have been thinking for a while about the mathematics used to formulate our physical theories, especially the similarities and differences among different mathematical formulations. This was a focus of my 2021 book, Physics, Structure, and Reality, where I discussed these things in the context of classical and spacetime physics.

Recently this has led me toward thinking about mathematical formulations of quantum mechanics, where an interesting question arises concerning the use of complex numbers. (I recently secured a grant from the National Science Foundation for a project investigating this.)

It is frequently said by physicists that complex numbers are essential to formulating quantum mechanics, and that this is different from the situation in classical physics, where complex numbers appear as a useful but ultimately dispensable calculational tool. It is not often said why, or in what way, complex numbers are supposed to be essential to quantum mechanics as opposed to classical physics.
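One standard way the contrast gets illustrated (a textbook observation, not an argument the post develops at this point) is that the imaginary unit sits inside the basic laws of quantum mechanics, namely the Schrödinger equation and the canonical commutation relation, whereas in classical physics a complex exponential is just a convenient way of writing a real oscillation:

    i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi, \qquad [\hat{x}, \hat{p}] = i\hbar, \qquad \text{versus} \qquad x(t) = \mathrm{Re}\!\left(A e^{i\omega t}\right) \ \text{classically}

Whether that appearance of i is genuinely indispensable, or can be traded for a real formulation at some cost, is precisely the question at issue.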

We’re joined by Dr. Denis Noble, Professor Emeritus of Cardiovascular Physiology at the University of Oxford and the father of ‘systems biology’. He is known for his groundbreaking creation of the first mathematical model of the heart’s electrical activity in the 1960s, which radically transformed our understanding of the heart.

Dr. Noble’s contributions have revolutionized our understanding of cardiac function and the broader field of biology. His work continues to challenge long-standing biological concepts, including gene-centric views like Neo-Darwinism.

In this episode, Dr. Noble discusses his critiques of fundamental biological theories that have shaped science for over 80 years, such as the gene self-replication model and the Weismann barrier. He advocates for a more holistic, systems-based approach to biology, where genes, cells, and their environments interact in complex networks rather than in a one-way deterministic process.

We dive deep into Dr. Noble’s argument that biology needs to move beyond reductionist views, emphasizing that life is more than just the sum of its genetic code. He explains how AI struggles to replicate even simple biological systems, and how biology’s complexity suggests that life’s logic lies not in DNA alone but in the entire organism.