
Why Time Doesn’t Exist | Leonard Susskind

We experience time as something that flows. Seconds pass. Moments disappear. The future becomes the present and then turns into the past.

But modern physics does not describe time this way.

In this video, we explore why time — as we intuitively understand it — may not exist at the fundamental level of reality.

Drawing on ideas associated with Leonard Susskind, this documentary examines how relativity and quantum physics challenge the idea of a flowing temporal river. Einstein’s theory removes the notion of a universal present. There is no global “now” that sweeps across the universe.

Without a universal present, the idea of time flowing becomes difficult to define physically.

In the relativistic picture, spacetime is a four-dimensional structure. Events are not created moment by moment. They are embedded in geometry. The equations of physics do not contain a moving present. They describe relations between events.
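The relativity of simultaneity makes this concrete. Under the Lorentz transformation of special relativity (a textbook result, stated here independently of Susskind's own presentation), an observer moving at speed v relabels the time coordinate as follows:

```latex
% Lorentz transformation of the time coordinate
% (v = relative speed of the observer, c = speed of light):
t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

Two events that share the time coordinate t but occur at different positions x are assigned different times t', so no frame-independent "now" can be singled out.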

Silicon metasurfaces boost optical image processing with passive intensity-based filtering

Of the many feats achieved by artificial intelligence (AI), the ability to process images quickly and accurately has had an especially impressive impact on science and technology. Now, researchers in the McKelvey School of Engineering at Washington University in St. Louis have found a way to improve the efficiency and capability of machine vision and AI diagnostics using optical systems instead of traditional digital algorithms.

Mark Lawrence, an assistant professor of electrical and systems engineering, and doctoral student Bo Zhao developed this approach to achieve efficient processing performance without high energy consumption. All-optical image processing is typically constrained by weak optical nonlinearity, which usually demands high light intensities or external power to overcome. The new method instead uses nanostructured films called metasurfaces to enhance optical nonlinearity passively, making it practical for everyday use.

Their work shows the ability to filter images based on light intensity, potentially making all-optical neural networks more powerful without using additional energy. Results of the research were published online in Nano Letters on Jan. 21, 2026.
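The article does not give the device's actual transfer function, but the idea of passive intensity-based filtering can be sketched numerically. The sigmoid transmission function below is a hypothetical stand-in for the metasurface's nonlinear response, not the published design:

```python
import numpy as np

def nonlinear_transmission(intensity, i_threshold=0.5, steepness=20.0):
    """Toy intensity-dependent transmission: a smooth (sigmoid) threshold.

    A hypothetical stand-in for a metasurface whose transmission grows
    with local light intensity; the real device physics is far richer.
    """
    return 1.0 / (1.0 + np.exp(-steepness * (intensity - i_threshold)))

# Synthetic "image": a bright square on a dim, noisy background.
rng = np.random.default_rng(0)
image = 0.2 * rng.random((64, 64))
image[24:40, 24:40] += 0.7

# Passive intensity filtering: bright features pass, dim clutter is suppressed.
filtered = image * nonlinear_transmission(image)
print(f"background mean:    {filtered[:16, :16].mean():.3f}")    # near zero
print(f"bright-square mean: {filtered[24:40, 24:40].mean():.3f}")  # ~0.8
```

The point of the sketch is that the filtering is a fixed, power-free function of the light itself, which is what makes such a nonlinearity attractive as an activation-like step in all-optical neural networks.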

AI method accelerates liquid simulations by learning fundamental physical relationships

Researchers at the University of Bayreuth have developed a method using artificial intelligence that can significantly speed up the calculation of liquid properties. The AI approach predicts the chemical potential—an indispensable quantity for describing liquids in thermodynamic equilibrium. The researchers present their findings in a new study published in Physical Review Letters.

Many common AI methods are based on the principle of supervised machine learning: a model—for instance, a neural network—is specifically trained to predict a particular target quantity directly. A familiar example is image recognition: the AI system is shown numerous images for which it is known whether a cat is depicted, and on this basis it learns to identify cats in new, previously unseen images.

“However, such a direct approach is difficult in the case of the chemical potential, because determining it usually requires computationally expensive algorithms,” says Prof. Dr. Matthias Schmidt, Chair of Theoretical Physics II at the University of Bayreuth. He and his research associate Dr. Florian Sammüller address this challenge with their newly developed AI method. It is based on a neural network that incorporates the theoretical structure of liquids—and more generally, of soft matter—allowing it to predict their properties with great accuracy.
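The article does not spell out which "theoretical structure" is built in, but in classical density functional theory, a standard framework for liquids in equilibrium, the chemical potential is tied to the one-body direct correlation function c₁ by an exact Euler–Lagrange equation; a network that learns the functional map from a density profile ρ to c₁ would then yield μ without ever being supervised on μ directly. Treating this as the precise Bayreuth construction is an assumption here:

```latex
% Euler--Lagrange equation of classical DFT
% (beta = 1/(k_B T), Lambda = thermal de Broglie wavelength,
% V_ext = external potential, c_1 = one-body direct correlation function):
\beta \mu = \ln\!\big( \Lambda^{3} \rho(\mathbf{r}) \big)
          + \beta V_{\mathrm{ext}}(\mathbf{r})
          - c_{1}(\mathbf{r}; [\rho])
```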

JUST RECORDED: Elon Musk Announces MAJOR Company Shakeup

Elon Musk is announcing significant changes and advancements across his companies, including xAI and SpaceX, primarily focused on developing and integrating artificial intelligence (AI) to drive innovation, productivity, and growth.

Questions to inspire discussion.

Product Development & Market Position.

🚀 Q: How fast did xAI achieve market leadership compared to competitors?

A: xAI reached number one in voice, image, video generation, and forecasting with the Grok 4.20 model in just 2.5 years, outpacing competitors who are 5–20 years old with larger teams and more resources.

📱 Q: What scale did xAI’s everything app reach in one year?

A: In one year, xAI went from nothing to 2M Teslas using Grok, deployed a Grok voice agent API, and built an everything app handling legal questions, slide decks, and puzzles.

AI Discovers Geophysical Turbulence Model

One of the biggest challenges in climate science and weather forecasting is predicting the effects of turbulence at spatial scales smaller than the resolution of atmospheric and oceanic models. Simplified sets of equations known as closure models can predict the statistics of this “subgrid” turbulence, but existing closure models are prone to dynamic instabilities or fail to account for rare, high-energy events. Now Karan Jakhar at the University of Chicago and his colleagues have applied an artificial-intelligence (AI) tool to data generated by numerical simulations to uncover an improved closure model [1]. The finding, which the researchers subsequently verified with a mathematical derivation, offers insights into the multiscale dynamics of atmospheric and oceanic turbulence. It also illustrates that AI-generated prediction models need not be “black boxes,” but can be transparent and understandable.

The team trained their AI—a so-called equation-discovery tool—on “ground-truth” data that they generated by performing computationally costly, high-resolution numerical simulations of several 2D turbulent flows. The AI selected the smallest number of mathematical functions (from a library of 930 possibilities) that, in combination, could reproduce the statistical properties of the dataset. Previously, researchers have used this approach to reproduce only the spatial structure of small-scale turbulent flows. The tool used by Jakhar and collaborators filtered for functions that correctly represented not only the structure but also energy transfer between spatial scales.
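The article does not name the selection algorithm, but equation-discovery tools of this kind are commonly built on sparse regression over a library of candidate terms. A minimal sketch using sequentially thresholded least squares on a toy five-term library, rather than the study's 930 candidates:

```python
import numpy as np

def sparse_fit(library, target, threshold=0.1, iterations=10):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, refit on the survivors. A common backbone of
    sparse-regression equation-discovery methods."""
    coeffs, *_ = np.linalg.lstsq(library, target, rcond=None)
    for _ in range(iterations):
        small = np.abs(coeffs) < threshold
        coeffs[small] = 0.0
        big = ~small
        if big.any():
            coeffs[big], *_ = np.linalg.lstsq(library[:, big], target, rcond=None)
    return coeffs

# Toy data: the "unknown" law is y = 2*x - 0.5*x**3, hidden in noise.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 500)
y = 2.0 * x - 0.5 * x**3 + 0.01 * rng.normal(size=x.size)

# Candidate library: each column is one candidate term.
terms = {"x": x, "x^2": x**2, "x^3": x**3, "sin(x)": np.sin(x), "1": np.ones_like(x)}
library = np.column_stack(list(terms.values()))

coeffs = sparse_fit(library, y)
print({name: round(c, 3) for name, c in zip(terms, coeffs) if c != 0.0})
# Expected: {'x': 2.0, 'x^3': -0.5} (up to noise)
```

The novelty reported here is in what the fit is asked to match: not just the spatial structure of the flow, but also the energy transfer between scales.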

They tested the performance of the resulting closure model by applying it to a computationally practical, low-resolution version of the dataset. The model accurately captured the detailed flow structures and energy transfers that appeared in the high-resolution ground-truth data. It also predicted statistically rare conditions corresponding to extreme-weather events, which have challenged previous models.

A long-lost Soviet spacecraft: AI could finally solve the mystery of Luna 9’s landing site

Using an advanced machine-learning algorithm, researchers in the UK and Japan have identified several promising candidate locations for the long-lost landing site of the Soviet Luna 9 spacecraft. Publishing their results in npj Space Exploration, the team, led by Lewis Pinault at University College London, hope that their model’s predictions could soon be tested using new observations from India’s Chandrayaan-2 orbiter.

In 1966, the USSR’s Luna 9 mission became the first human-made object to land safely on the moon’s surface and to transmit photographs from another celestial body. Compared with modern missions, the landing was dramatic: shortly before the main spacecraft itself struck the lunar surface, it deployed a 58-cm-wide, roughly 100-kg spherical landing capsule from above, then maneuvered away to crash at a safe distance.

Equipped with inflatable shock absorbers, the capsule bounced several times before coming to rest, stabilizing itself by unfurling four petal-like panels. Although Luna 9 operated for just three days, it transmitted a wealth of valuable data back to Earth, helping to inspire the confidence in crewed space exploration that would see humanity take its first steps on the moon just three years later.

Seeing the whole from a part: Revealing hidden turbulent structures from limited observations and equations

The irregular, swirling motion of fluids we call turbulence can be found everywhere, from stirring in a teacup to currents in the planetary atmosphere. This phenomenon is governed by the Navier-Stokes equations—a set of mathematical equations that describe how fluids move.
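For reference, the incompressible form of those equations, with u the velocity field, p the pressure, ρ the density, and ν the kinematic viscosity:

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho} \nabla p + \nu \nabla^{2}\mathbf{u},
\qquad \nabla \cdot \mathbf{u} = 0
```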

Despite being known for nearly two centuries, these equations still pose major challenges when it comes to making predictions. Turbulent flows are inherently chaotic, and tiny uncertainties can grow quickly over time.

In real-world situations, scientists can only observe part of a turbulent flow, usually its largest and slowest-moving features. Thus, a long-standing question in fluid physics has been whether these partial observations are enough to reconstruct the full motion of the fluid.
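The article does not specify the reconstruction machinery, but one standard way this question is formalized is continuous data assimilation: a simulated flow v obeys the Navier-Stokes equations while being nudged toward the observed large scales of the true flow u. Here P_K is a projection onto the observed large-scale modes and μ a relaxation rate; treating this as the setting of the study is an assumption:

```latex
% Nudged Navier--Stokes system: only the projection P_K u of the
% true flow u is observed; mu sets the strength of the nudging.
\frac{\partial \mathbf{v}}{\partial t}
  + (\mathbf{v} \cdot \nabla)\mathbf{v}
  = -\frac{1}{\rho} \nabla q + \nu \nabla^{2}\mathbf{v}
  - \mu \, P_K(\mathbf{v} - \mathbf{u}),
\qquad \nabla \cdot \mathbf{v} = 0
```

If v converges to u, the observed large scales determine the unobserved small scales, which is exactly the "whole from a part" question.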

How scientists are trying to use AI to unlock the human mind

Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.

But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.

Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.

Los Alamos Forms Quantum Computing-Focused Research Center

PRESS RELEASE — Los Alamos National Laboratory has formed the Center for Quantum Computing, which will bring together the Lab’s diverse quantum computing research capabilities. Headquartered in downtown Los Alamos, the Center for Quantum Computing will consolidate the Laboratory’s expertise in national security applications, quantum algorithms, quantum computer science and workforce development in a shared research space.

“This new center of excellence will bring together the Laboratory’s quantum computing research capabilities that support Department of Energy, Defense and New Mexico state initiatives to achieve a critical mass of expertise greater than the individual parts,” said Mark Chadwick, associate Laboratory director for Simulation, Computing and Theory. “This development highlights our commitment to supporting the next generation of U.S. scientific and technological innovation in quantum computing, especially as the technology can support key Los Alamos missions.”

The center will bring together as many as three dozen quantum researchers from across the Lab. The center's formation occurs at a pivotal time for the development of quantum computing, as Lab researchers partner with private industry and take part in a number of state and federal quantum computing initiatives to bring this high-priority technology closer to fruition. Laboratory researchers may include those working with the DARPA Quantum Benchmarking Initiative, the DOE's Quantum Science Center, the National Nuclear Security Administration Advanced Simulation and Computing program's Beyond Moore's Law project, and multiple Laboratory Directed Research and Development projects.

AGI Is Here: AI Legend Peter Norvig on Why it Doesn’t Matter Anymore

Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?

On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.

Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company's core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center's Computational Sciences Division, where he served as NASA's senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world's most widely used textbook in the field of artificial intelligence.

Peter sits down with Geoff to separate facts from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today’s models are already “general,” and what truly matters most: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. This conversation explores the impact of AI on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI to business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.

In this episode:
00:00 Intro.
01:00 How AI evolved since Artificial Intelligence: A Modern Approach.
03:00 Is AGI already here? Norvig’s take on general intelligence.
06:00 The surprising progress in large language models.
08:00 Evolution vs. revolution.
10:00 Making AI safer and more reliable.
12:00 Lessons from social media and unintended consequences.
15:00 The real AI risks: misinformation and misuse.
18:00 Inside Stanford’s Human-Centered AI Institute.
20:00 Regulation, policy, and the role of government.
22:00 Why AI may need an Underwriters Laboratory moment.
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety.
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in 10 Years” in the AI age.
33:00 The speed paradox: learning vs. automation.
36:00 How AI might (finally) change productivity.
38:00 Global economics, China, and leapfrog technologies.
42:00 The job market: faster disruption and inequality.
45:00 The social safety net and future of full-time work.
48:00 Winners, losers, and redistributing value in the AI era.
50:00 How CEOs should really approach AI strategy.
52:00 Why hiring a “PhD in AI” isn’t the answer.
54:00 The democratization of AI for small businesses.
56:00 The future of IT and enterprise functions.
57:00 Advice for staying relevant as a technologist.
59:00 A realistic optimism for AI’s future.

#ai #agi #humancenteredai #futureofwork #aiethics #innovation.
