
AI could help human scientists pick promising research topics

Large language models (LLMs) could help human scientists identify interesting research topics that have not previously been explored, say scientists at Germany’s Karlsruhe Institute of Technology (KIT). By analysing abstracts in materials science publications and mapping connections between different concepts, the model was able to generate predictions for future areas of interest that the KIT team says are more precise than those produced by traditional, rule-based algorithms.
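The approach described above can be illustrated with a toy sketch: treat concepts extracted from abstracts as nodes in a co-occurrence graph, then score not-yet-linked concept pairs with a simple rule-based heuristic (the common-neighbour baseline is the kind of traditional algorithm the KIT team compares against). All concept names and data below are invented for illustration, not taken from the study.

```python
# Toy concept-graph link prediction: build a co-occurrence graph from
# abstracts, then score unlinked pairs by shared neighbours.
from itertools import combinations

abstracts = [
    {"perovskite", "solar cell", "defect"},
    {"perovskite", "machine learning"},
    {"machine learning", "defect", "screening"},
]

# Count how often each pair of concepts appears in the same abstract.
edges = {}
for concepts in abstracts:
    for a, b in combinations(sorted(concepts), 2):
        edges[(a, b)] = edges.get((a, b), 0) + 1

def neighbours(node):
    """All concepts that co-occur with `node` in at least one abstract."""
    return {x for pair in edges for x in pair if node in pair and x != node}

def common_neighbour_score(a, b):
    """Rule-based baseline: more shared neighbours = more likely future link."""
    return len(neighbours(a) & neighbours(b))

# "solar cell" and "machine learning" never co-occur, but share 2 neighbours
# ("perovskite" and "defect"), suggesting a promising unexplored combination.
print(common_neighbour_score("solar cell", "machine learning"))  # → 2
```

A learned model replaces the hand-written score with link-prediction probabilities, which is where the reported precision gains come from.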

The number of research articles published each year is increasing so quickly that it is impossible for scientists to keep up with everything, observes team leader Pascal Friederich, who heads a KIT research group on artificial intelligence for materials sciences. While experienced scientists know how to find connections between research areas within their field, identifying links between these and other, unfamiliar topics is a different story.

What If The Universe Is Math?


In his essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, the physicist Eugene Wigner wrote that “the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious”. This statement was inspired by the observation that so many aspects of the physical world seem to be describable and predictable by mathematical equations to incredible precision, especially quantum phenomena. But quantum phenomena have no subjective qualities and have questionable physicality. They seem to be completely describable by numbers alone, their behavior precisely defined by equations. In a sense, the quantum world is made of math. So does that mean the universe is made of math too? If you believe the Mathematical Universe Hypothesis, then yes. And so are you.


How To Simulate The Universe With DFT


If you used every particle in the observable universe to do a full quantum simulation, how big would that simulation be? At best a large molecule. That’s how insanely information dense the quantum wavefunction really is. And yet we routinely simulate systems with thousands, even millions of particles. How? By cheating. Using the ultimate compression algorithm: Density Functional Theory (DFT). Let’s learn how to cheat the universe.
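The back-of-the-envelope arithmetic behind that claim: a full many-body wavefunction for N two-level particles requires 2^N complex amplitudes, so storage grows exponentially with particle count, while DFT works with an electron density on a 3D spatial grid instead.

```python
# Exponential cost of storing a full quantum state: N two-level particles
# need 2**N complex amplitudes.
import math

def wavefunction_amplitudes(n_particles, levels=2):
    """Number of complex amplitudes in the full many-body state."""
    return levels ** n_particles

# Around 300 two-level particles already require more amplitudes than there
# are atoms in the observable universe (~10**80).
n = 300
print(math.log10(wavefunction_amplitudes(n)))  # ≈ 90.3
```

This is why "every particle in the observable universe" buys you only a large molecule's worth of exact simulation, and why a density-based shortcut like DFT is so valuable.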


AI tackles one of math’s most brutal problems: Inverse PDEs

Penn Engineers have developed a new way to use AI to solve inverse partial differential equations (PDEs), a particularly challenging class of mathematical problems with broad implications for understanding the natural world.

The advance, which the researchers call “Mollifier Layers,” could benefit fields as varied as genetics and weather forecasting, because inverse PDEs help scientists work backward from observable patterns to infer the hidden dynamics that produced them.

“Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell,” says Vivek Shenoy, Eduardo D. Glandt President’s Distinguished Professor in Materials Science and Engineering (MSE) and senior author of a study published in Transactions on Machine Learning Research (TMLR), which will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026). “You can see the effects clearly, but the real challenge is inferring the hidden cause.”
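A minimal toy version of the inverse-PDE problem the pond analogy describes (this is not the paper's Mollifier Layers method, just a naive baseline for intuition): given noisy observations of a 1D heat-equation solution, recover the hidden diffusivity by searching over forward simulations. All numbers are illustrative.

```python
# Toy inverse PDE: recover the hidden diffusivity kappa of u_t = kappa*u_xx
# from noisy observations, by grid search over forward finite-difference solves.
import numpy as np

def simulate_heat(kappa, n=50, steps=200, dt=1e-4):
    """Explicit finite-difference forward solve of the 1D heat equation."""
    dx = 1.0 / n
    u = np.sin(np.pi * np.linspace(0, 1, n))  # initial condition
    for _ in range(steps):
        u[1:-1] += kappa * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# "Observed data": forward solve with the true (hidden) parameter plus noise.
true_kappa = 0.7
rng = np.random.default_rng(0)
observed = simulate_heat(true_kappa) + 1e-3 * rng.normal(size=50)

# Inverse step: pick the kappa whose forward solution best matches the data.
candidates = np.linspace(0.1, 1.5, 141)
errors = [np.sum((simulate_heat(k) - observed) ** 2) for k in candidates]
best = candidates[int(np.argmin(errors))]
print(best)  # close to 0.7
```

Real inverse problems replace the grid search with gradient-based optimization through a differentiable solver, which is where approaches like Mollifier Layers come in.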

High trust in AI leaves individuals vulnerable to “cognitive surrender,” study finds

People are increasingly outsourcing their thinking to artificial intelligence, bypassing critical reflection entirely. New research reveals that this “cognitive surrender” inflates confidence and causes users to blindly adopt algorithm-generated answers, even when the software is wrong.

A new way to understand the evolution of spacetime dynamics

The concept of spacetime, central to Einstein’s theory of general relativity, has been widely studied by physicists worldwide. Spacetime is described mathematically as a four-dimensional (4D) continuum in which physical events occur, merging three-dimensional (3D) space with one-dimensional (1D) time.

This 4D continuum continuously evolves in complex and intricate patterns governed by Einstein’s field equations, which describe how matter and energy shape spacetime. While various past theoretical studies have explored the evolution of spacetime, identifying patterns that persist during this evolution has so far proved challenging.
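For reference, the field equations mentioned above take the standard form, relating spacetime curvature on the left to matter and energy content on the right:

```latex
% Einstein's field equations with cosmological constant:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},
\qquad
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
```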

Researchers at Adolfo Ibáñez University in Chile and Columbia University set out to explore the evolution of spacetime using ideas rooted in nonlinear electrodynamics, an area of physics that studies the behavior of electric and magnetic fields in complex materials.

Topological Origin of Cosmological Constant ( Dark Energy)

Shape of the universe and Cosmological Constant.


🚨 The Biggest Problem in Physics (Cosmological Constant) https://lnkd.in/gt7tEpJw

❓ Problem: Why is the Universe accelerating… and why is the value so unbelievably small?

Observations (supernovae, CMB, BAO) show:
👉 The expansion is accelerating
👉 This requires a cosmological constant Λ

From Einstein’s equation: Λ = 8πG ρ_Λ

😳 But here’s the crisis: quantum physics predicts vacuum energy ρ_vac ≈ M_Pl⁴, yet observations give ρ_Λ ≈ 10⁻¹²⁰ M_Pl⁴.
💥 That’s a mismatch of 120 orders of magnitude. This is called the cosmological constant problem.

🧠 Standard thinking fails because we assume:
👉 Energy fills space uniformly
👉 Λ comes from summing quantum fluctuations: ρ_vac = (1/V) Σ (½ ℏωₖ)
But this sum diverges → way too large ❌

💡 A different perspective (EWOG insight): instead of asking “What is the energy of empty space?”, ask “What is the geometry of the Universe?”
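The divergent sum of zero-point energies referred to in the post is the standard naive estimate; cutting the mode sum off at the Planck scale gives the famous 120-orders-of-magnitude mismatch:

```latex
% Zero-point energy summed up to a Planck-scale cutoff k_max:
\rho_{\text{vac}}
  = \int^{k_{\max}} \frac{d^3k}{(2\pi)^3}\, \tfrac{1}{2}\hbar\omega_k
  \sim \frac{\hbar c\, k_{\max}^4}{16\pi^2}
  \;\sim\; M_{\text{Pl}}^4
  \quad (k_{\max} \sim \ell_{\text{Pl}}^{-1}),
\qquad
\frac{\rho_\Lambda^{\text{obs}}}{\rho_{\text{vac}}} \sim 10^{-120}
```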

Better volcano eruption predictions on Earth—and Venus—thanks to Mauna Loa study

When Mauna Loa erupted in 2022, the largest lava flow headed directly toward Daniel K. Inouye State Highway 200, also known as Saddle Road, a critical route that carries many residents from their homes on one side to their jobs on the other.

No one could accurately predict whether the lava would continue to flow and eventually block the highway, or stop short, sparing the road.

However, when the volcano next erupts, scientists will be better able to monitor it in real time and make more accurate predictions about where the lava will flow and when an eruption might occur. These advances are thanks to the availability of satellite data from public and private sources, as well as machine learning algorithms developed at Pitt with help from a colleague in Italy, as highlighted in a recent publication in the Journal of Volcanology and Geothermal Research.

Quantum-informed machine learning for predicting spatiotemporal chaos with practical quantum advantage

Ultimately, QIML proves that we don’t need a fully fault-tolerant quantum computer to see results. By using quantum processors to learn the complex “rules” of chaos, we can give classical computers the boost they need to make reliable, long-term predictions about the most turbulent environments in the natural world.


Modeling high-dimensional dynamical systems remains one of the most persistent challenges in computational science. Partial differential equations (PDEs) provide the mathematical backbone for describing a wide range of nonlinear, spatiotemporal processes across scientific and engineering domains (1–3). However, high-dimensional systems are notoriously sensitive to initial conditions and the floating-point numbers used to compute them (4–7), making it highly challenging to extract stable, predictive models from data. Modern machine learning (ML) techniques often struggle in this regime: While they may fit short-term trajectories, they fail to learn the invariant statistical properties that govern long-term system behavior. These challenges are compounded in high-dimensional settings, where data are highly nonlinear and contain complex multiscale spatiotemporal correlations.
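The sensitivity to initial conditions described above is easy to demonstrate with the classic Lorenz system (a standalone illustration, not code from the paper): two trajectories that start a factor of 10⁻¹⁰ apart diverge to macroscopic separation, which is why pointwise long-horizon prediction fails even while invariant statistics remain well defined.

```python
# Sensitivity to initial conditions in the Lorenz system: a 1e-10
# perturbation grows to macroscopic separation over the integration.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])  # tiny initial perturbation
for _ in range(3000):                 # integrate to t = 30
    a, b = lorenz_step(a), lorenz_step(b)

# The separation has grown by many orders of magnitude: the trajectories
# have decoupled, though both stay on the same attractor.
print(np.linalg.norm(a - b))
```

Both trajectories still sample the same invariant measure of the attractor, which is exactly the property statistics-aware methods try to learn instead of exact trajectories.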

ML has seen transformative success in domains such as large language models (8, 9), computer vision (10, 11), and weather forecasting (12–15), and it is increasingly being adopted in scientific disciplines under the umbrella of scientific ML (16). In fluid mechanics, in particular, ML has been used to model complex flow phenomena, including wall modeling (17, 18), subgrid-scale turbulence (19, 20), and direct flow field generation (21, 22). Physics-informed neural networks (23, 24) attempt to inject domain knowledge into the learning process, yet even these models struggle with the long-term stability and generalization issues that high-dimensional dynamical systems demand.

To address this, generative models such as generative adversarial networks (25) and operator-learning architectures such as DeepONet (26) and Fourier neural operators (FNO) (27) have been proposed. While neural operators offer discretization invariance and strong representational power for PDE-based systems, they still suffer from error accumulation and prediction divergence over long horizons, particularly in turbulent and other chaotic regimes (28, 29). Recent work, such as DySLIM (30), enhances stability by leveraging invariant statistical measures. However, these methods depend on estimating such measures from trajectory samples, which can be computationally intensive and inaccurate for chaotic systems, especially in high-dimensional cases.

These limitations have prompted exploration into alternative computational paradigms. Quantum machine learning (QML) has emerged as a possible candidate due to its ability to represent and manipulate high-dimensional probability distributions in Hilbert space (31).
Quantum circuits can exploit entanglement and interference to express rich, nonlocal statistical dependencies using fewer parameters than their classical counterparts, which makes them well suited for capturing invariant measures in high-dimensional dynamical systems, where long-range correlations and multimodal distributions frequently arise (32). QML and quantum-inspired ML have already demonstrated potential in fields such as quantum chemistry (33, 34), combinatorial optimization (35, 36), and generative modeling (37, 38). However, the field is constrained on two fronts: Fully quantum approaches are limited by noisy intermediate-scale quantum (NISQ) hardware noise and scalability (39), while quantum-inspired algorithms, being classical simulations, cannot natively leverage crucial quantum effects such as entanglement to efficiently represent the complex, nonlocal correlations found in such systems. These challenges limit the standalone utility of QML in scientific applications today.

Instead, hybrid quantum-classical models provide a promising compromise, where quantum submodules work together with classical learning pipelines to improve expressivity, data efficiency, and physical fidelity. In quantum chemistry, this hybrid paradigm has proven feasible, notably through quantum mechanical/molecular mechanical coupling (40, 41), where classical force fields are augmented with quantum corrections. Within such frameworks, techniques such as quantum-selected configuration interaction (42) have been used to enhance accuracy while keeping the quantum resource requirements tractable.

In the broader landscape of quantum computational fluid dynamics, progress has been made toward developing full quantum solvers for nonlinear PDEs. Recent works by Liu et al. (43) and Sanavio et al. (44, 45) have successfully applied Carleman linearization to the lattice Boltzmann equation, offering a promising pathway for simulating fluid flows at moderate Reynolds numbers. These approaches, typically using algorithms such as Harrow-Hassidim-Lloyd (HHL) (46), promise exponential speedups but generally necessitate deep circuits and fault-tolerant hardware.
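The spectral-convolution idea at the core of the Fourier neural operators mentioned above can be sketched in a few lines (heavily simplified: 1D, single channel, one layer, random weights standing in for learned parameters, no nonlinearity or training loop):

```python
# Minimal sketch of an FNO-style spectral layer: go to frequency space,
# keep and reweight only the lowest modes, transform back.
import numpy as np

def fourier_layer(u, weights, n_modes):
    """Spectral convolution: pointwise multiply low Fourier modes by weights."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # truncate + reweight
    return np.fft.irfft(out_hat, n=len(u))

rng = np.random.default_rng(1)
u = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))  # input field
w = rng.normal(size=8) + 1j * rng.normal(size=8)               # stand-in weights
v = fourier_layer(u, w, n_modes=8)
print(v.shape)  # (64,)
```

Because the learned weights act on Fourier modes rather than grid points, the same parameters apply at any discretization, which is the "discretization invariance" property cited in the text.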

Quantum-enhanced machine learning (QEML) combines the representational richness of quantum models with the scalability of classical learning. By leveraging uniquely quantum properties such as superposition and entanglement, QEML can explore richer feature spaces and capture complex correlations that are challenging for purely classical models. Recent successes in quantum-enhanced drug discovery (37), where hybrid quantum-classical generative models have produced experimentally validated candidates rivaling state-of-the-art classical methods, demonstrate the practical potential of QEML even before full quantum advantage is achieved. Despite these strengths, practical barriers remain. QEML pipelines require repeated quantum-classical communication during training and rely on costly quantum data-embedding and measurement steps, which slow computation and limit accessibility across research institutions.

Bridging structure and function: artificial intelligence-based modelling of kidney proteins

Advances in artificial intelligence-driven algorithms and experimental technologies have revolutionized the field of protein modelling. This Review describes how these developments have provided unprecedented insights into the structure of key proteins within the kidney, improved understanding of the relationships between protein structure and stability, and enabled mechanistic interpretation of variants that underlie a variety of kidney pathologies.
