
The Singularity Needs a Navigator

In 2013, physicist Alex Wissner-Gross published a single equation for intelligence in *Physical Review Letters*:

F = T∇Sτ

The force of an intelligent system equals its temperature — computational capacity, raw horsepower — multiplied by the gradient of its future option-space. Intelligence is not a mysterious property of carbon-based brains.

It is a physical force: the tendency of any sufficiently energetic system to maximize the number of future states accessible to it.

The equation was elegant. Correct. And incomplete.

It describes the force. It does not describe the geometry of the space through which that force navigates.

A gradient without a metric is a direction without distance — it tells the system where to push but not what distortion it will encounter on the way there.
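For reference, the 2013 equation can be written out with its symbols made explicit. The notation below follows Wissner-Gross and Freer's paper; treat it as a sketch of the published form rather than a derivation:

```latex
% Causal entropic force: a strength T_c times the gradient of
% the entropy S_c of all paths the system can reach within a
% time horizon tau, evaluated at the present state X_0.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0}
```

Here T_c sets how strongly the system is driven, and the gradient points toward states from which the most distinct futures remain reachable.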

We spent three years building the geometry. We tested it across 69 billion simulations. What we found changes everything.

## The Missing Geometry — From Force to Navigation

Markov chain Monte Carlo

In statistics, Markov chain Monte Carlo (MCMC) is a class of algorithms used to draw samples from a probability distribution. Given a probability distribution, one can construct a Markov chain whose elements’ distribution approximates it – that is, the Markov chain’s equilibrium distribution matches the target distribution. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution.

Markov chain Monte Carlo methods are used to study probability distributions that are too complex or too high dimensional to study with analytic techniques alone. Various algorithms exist for constructing such Markov chains, including the Metropolis–Hastings algorithm.
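The Metropolis–Hastings idea can be shown in a few lines of Python. This is a generic illustration, not code from any cited work; the target density (a standard normal, known only up to a constant), the step size, and the burn-in length are arbitrary choices for the sketch:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose a Gaussian step,
    accept with probability min(1, p(proposal)/p(current))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Compare log-densities to avoid overflow; a rejected
        # proposal repeats the current state in the chain.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, specified only up to a normalizing constant.
log_p = lambda x: -0.5 * x * x

chain = metropolis_hastings(log_p, x0=10.0, n_steps=20000)
burned = chain[5000:]  # discard early steps before the chain equilibrates
mean = sum(burned) / len(burned)
```

The chain starts far from the target's mass (x0 = 10) yet its equilibrium distribution matches the target, so the post-burn-in sample mean lands near 0 — the "more steps, closer match" behavior described above.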

Quantum computers must overcome major technical hurdles before tackling quantum chemistry problems

Although the potential applications of quantum computing are widespread, a new feasibility study suggests quantum computers still face major hurdles in solving quantum chemistry problems. The study, published in Physical Review B, evaluates what criteria are needed for a quantum advantage in searching for the ground state energy of molecules. The researchers attempt this feat using two different algorithms with differing strengths and weaknesses.

The team first determined the criteria for the variational quantum eigensolver (VQE), an algorithm suited to noisy, near-term devices; these criteria set an upper bound on the imprecision and decoherence that the quantum hardware can tolerate. The researchers derived quantitative criteria for both VQE and quantum phase estimation (QPE) based on error rates, energy scales, and overlap with the ground state.

Results showed that VQE is extremely sensitive to hardware errors and decoherence. The team says that achieving chemical accuracy would require error rates far below current hardware capabilities. Available error mitigation techniques offer only limited improvement and scale poorly with system size.
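The variational loop at the heart of VQE can be illustrated classically. The toy below is a sketch, not the study's method: the "quantum" expectation value is computed exactly for a hypothetical 2×2 Hamiltonian (H = Z + 0.5 X) with a one-parameter trial state, and a classical outer loop minimizes the energy. On real hardware each energy evaluation is noisy, which is exactly the sensitivity the study quantifies:

```python
import math

# Hypothetical single-qubit Hamiltonian H = Z + 0.5 X, as a real 2x2 matrix.
H = [[1.0, 0.5],
     [0.5, -1.0]]

def energy(theta):
    """Expectation <psi|H|psi> for the trial state |psi> = (cos t, sin t)."""
    psi = [math.cos(theta), math.sin(theta)]
    Hpsi = [H[0][0] * psi[0] + H[0][1] * psi[1],
            H[1][0] * psi[0] + H[1][1] * psi[1]]
    return psi[0] * Hpsi[0] + psi[1] * Hpsi[1]

# Classical outer loop: scan the single variational parameter.
thetas = [i * math.pi / 1000 for i in range(1000)]
e_min = min(energy(t) for t in thetas)

# Exact ground energy of H = Z + 0.5 X is -sqrt(1^2 + 0.5^2).
exact = -math.sqrt(1.0**2 + 0.5**2)
```

In this noiseless simulation the scan recovers the ground energy to high precision; the paper's point is that decoherence corrupts every `energy(theta)` evaluation on real devices, so chemical accuracy demands error rates far below current hardware.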

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.


Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.


The deep mystery physicists call “the problem of time” | Jim Al-Khalili: Full Interview


Preorder Jim Al-Khalili’s forthcoming book, On Time: The Physics That Makes the Universe, here: https://www.amazon.com/Time-Physics-T?tag=lifeboatfound-20


Time feels obvious, but physics tells a stranger story about its existence: Theoretical physicist Jim Al-Khalili explores why our sense of time may be incredibly misleading, including the idea that past, present, and future might all exist at once.

0:00 Chapter 1: Does time flow?
2:42 Why Time Feels Faster as We Age.
3:56 Time and Change in Philosophy and Physics.
5:28 Einstein and the End of Absolute Time.
6:19 Time in the Equations of Physics.
7:50 Chapter 2: How do we reconcile quantum field theory with the general theory of relativity?
12:10 Evidence for Time Dilation: Muons.
14:29 Gravity Slows Time: General Relativity.
19:22 Space-Time and the Block Universe.
21:55 Does Time Really Exist?
26:33 The Debate: Eternalism vs Presentism.
34:12 Chapter 3: Is There a “Now”?
40:40 Chapter 4: Why Does Thermodynamics Have a Direction in Time?
45:00 Will Time End?
49:38 Quantum Entanglement and the Direction of Time.
55:10 Did Time Begin at the Big Bang?
1:05:40 Chapter 5: Is Time Travel Possible?

Cool Qubits Make Faster Decisions

Classical machine learning has benefited several physics subfields, from materials science to medical imaging. Implementing machine-learning algorithms on quantum computers could expand their use to more complex problems and to datasets that are inherently quantum. Nayeli Rodríguez-Briones at the Technical University of Vienna and Daniel Park at Yonsei University in South Korea have now proposed a thermodynamics-inspired protocol that could make quantum machine-learning techniques more efficient [1].

In one common classical machine-learning task, a system is trained on a known dataset and then challenged to classify new data. Its output quantifies both the classification and that classification’s uncertainty. Once the system’s parameters are fixed, evaluating the same data yields the same output. In contrast, the output of a quantum machine-learning algorithm is read out as binary measurements of qubits, which are inherently probabilistic. Because a single measurement provides only limited information, the computation must be repeated many times.

Rodríguez-Briones and Park recognized that how clearly a quantum computer reveals its output is determined by entropy. When the readout qubit is highly polarized—strongly favoring one outcome—its entropy is low. Few repetitions are needed to obtain a firm result. An unpolarized, high-entropy readout qubit returns both states more evenly, meaning more repetitions are required. The researchers showed that the readout qubit’s polarity can be increased by transferring its entropy to ancillary qubits, effectively cooling one while warming the others. Between runs, the ancillary qubits are reset by coupling them to a heat bath. Crucially, this entropy transfer affects the readout qubit’s degree of polarization without changing the encoded decision. The upshot: A given result can be arrived at with fewer repetitions.
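The entropy argument can be illustrated with a classical coin-flip sketch (hypothetical numbers, not from the paper): treat the readout qubit as a biased coin with outcome probability `p1`, and count how many repetitions a simple sequential test needs before one outcome leads by a fixed margin:

```python
import random

def shots_to_decide(p1, margin=20, seed=1, max_shots=100_000):
    """Repeat a binary measurement until one outcome leads by `margin`.
    p1 is the probability of reading |1> on the readout qubit."""
    rng = random.Random(seed)
    lead, shots = 0, 0
    while abs(lead) < margin and shots < max_shots:
        lead += 1 if rng.random() < p1 else -1
        shots += 1
    return shots

# A strongly polarized (low-entropy) readout settles quickly;
# a weakly polarized (high-entropy) one needs many more repetitions.
fast = shots_to_decide(p1=0.95)
slow = shots_to_decide(p1=0.55)
```

The margin, probabilities, and decision rule here are arbitrary, but the trend matches the mechanism above: pumping entropy out of the readout qubit pushes `p1` toward 1 and shrinks the number of repetitions needed for a firm result.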

Seeing global trade through the lens of physics

New research from the Complexity Science Hub (CSH) shows why widely used algorithms for measuring economic complexity produce trustworthy results and how these tools may benefit diverse areas such as ecology, social science, and agentic AI. The paper is published in the journal Physical Review E.

Joscha Bach & Anders Sandberg

Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!

0:00 Intro.
0:37 What is consciousness? Phenomenology — functionalism & panpsychism.
1:54 Causal boundaries — the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity.
3:20 Minds are not states — they are processes. We don’t see causal filtering in tables.
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism.
9:49 Methodological humility about armchair philosophy of mind.
12:41 Putnam-style Brain-in-a-vat — and why standard objections to AI minds fall flat.
16:37 Is sentience required (or desired) for not just moral competence in AI, but moral motivation as well?
22:35 Why stepping outside yourself is powerful — seeing.
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What’s still missing.
28:16 AI, hybrid minds, and the limits of human augmentation.
32:32 Can minds be extended — in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough.
39:41 Why AI is so data-hungry — and why better algorithms must exist.
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception.
51:05 What comes after copilots: agent teams, multimodality and new AI workflows.
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic.
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create.
1:08:10 What happens when AI starts shaping human relationships.
1:11:18 Why feeling in control can matter more than being right.
1:12:58 Why intelligence without wisdom is very dangerous.
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs, or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem.
1:29:47 Can AI become more moral than us (humans)? and if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved.
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries.
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan.
1:59:36 Will superintelligences converge into a cosmic singleton?

Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!

Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P…

Kind regards.
Adam Ford.
Science, Technology & the Future — #SciFuture — http://scifuture.org


Joscha Bach delivers “The Machine Consciousness Hypothesis” at Future Day 2026

Can AI become conscious?

What is consciousness for? And is biological consciousness best understood as a self-organising algorithm that could, in principle, be recreated in machines?

In this talk, Joscha explores consciousness as perception of perception, coherence maintenance, modelling, resonance, self-organisation, and the possibility that machine consciousness may emerge through the right virtual architecture.

Essay: ‘The Machine Consciousness Hypothesis’ by Joscha Bach & Hikari Sorenson: https://cimc.ai/cimcHypothesis.pdf

CIMC: https://cimc.ai

Post: https://scifuture.org/joscha-bach-the…
