
RRAM-based analog computing system rapidly solves matrix equations with high precision

Analog computers are systems that perform computations by manipulating physical quantities, such as electrical currents, that map directly onto mathematical variables, rather than representing information with discrete binary values (i.e., 0 or 1) as digital computers do.

While analog computing systems can perform some tasks well, they are known to be susceptible to noise (i.e., background or external interference) and to be less precise than digital computers.

Researchers at Peking University and the Beijing Advanced Innovation Center for Integrated Circuits have developed a scalable analog computing device that can solve so-called matrix equations with remarkable precision. This new system, introduced in a paper published in Nature Electronics, was built using tiny non-volatile memory devices known as resistive random-access memory (RRAM) chips.
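
As a rough illustration of the idea (not the Peking University circuit, whose RRAM hardware and peripheral electronics are not described here), the sketch below models a crossbar as a noisy conductance matrix that performs a matrix-vector product in one analog step, and wraps it in a digital refinement loop to recover an accurate solution of Ax = b. All function names, noise levels, and sizes are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical crossbar model: the matrix A is "programmed" as conductances
# (with write noise), and a read pass computes G @ v in one analog step via
# Ohm's and Kirchhoff's laws (with read noise).
def program_crossbar(A, write_noise=0.02):
    return A * (1.0 + write_noise * rng.standard_normal(A.shape))

def analog_matvec(G, v, read_noise=0.002):
    y = G @ v
    return y + read_noise * np.linalg.norm(y) * rng.standard_normal(y.shape)

def analog_solve(G, r, alpha, iters=100):
    """Approximately solve G d = r using only analog matvecs (Richardson iteration)."""
    d = np.zeros_like(r)
    for _ in range(iters):
        d = d + alpha * (r - analog_matvec(G, d))
    return d

def solve_ax_b(A, b, alpha, outer=10):
    """Mixed refinement: residuals are checked digitally in high precision,
    corrections come from the noisy analog solver, so accuracy keeps improving."""
    G = program_crossbar(A)
    x = np.zeros_like(b)
    for _ in range(outer):
        r = b - A @ x
        x = x + analog_solve(G, r, alpha)
    return x

n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # well-conditioned, symmetric positive definite
b = rng.standard_normal(n)
alpha = 1.0 / np.linalg.norm(A, 2)   # step size small enough for convergence

x = solve_ax_b(A, b, alpha)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))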

Mathematical proof debunks the idea that the universe is a computer simulation

From the article:

“We have demonstrated that it is impossible to describe all aspects of physical reality using a computational theory of quantum gravity,” says Dr. Faizal. “Therefore, no physically complete and consistent theory of everything can be derived from computation alone. Rather, it requires a non-algorithmic understanding, which is more fundamental than the computational laws of quantum gravity and therefore more fundamental than spacetime itself.”


It’s a plot device beloved by science fiction: our entire universe might be a simulation running on some advanced civilization’s supercomputer. But new research from UBC Okanagan has mathematically proven this isn’t just unlikely—it’s impossible.

Dr. Mir Faizal, Adjunct Professor with UBC Okanagan’s Irving K. Barber Faculty of Science, and his international colleagues, Drs. Lawrence M. Krauss, Arshid Shabir and Francesco Marino have shown that the fundamental nature of reality operates in a way that no computer could ever simulate.

Their findings, published in the Journal of Holography Applications in Physics, go beyond simply suggesting that we’re not living in a simulated world like The Matrix. They prove something far more profound: the universe is built on a type of understanding that exists beyond the reach of any algorithm.

Gemini gets a huge upgrade for academics and researchers with powerful new LaTeX features

For anyone who has ever wrestled with creating documents containing complex mathematical equations, intricate tables, or precise multi-column layouts, the LaTeX document preparation system is likely a familiar (and sometimes frustrating) friend. It’s the standard for high-quality academic, scientific, and technical documents, but it traditionally requires specialized editors and significant technical know-how.
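
For readers who have not used it, here is a minimal example of the kind of markup the article refers to: a numbered display equation and a small aligned table, which LaTeX typesets automatically.

\documentclass{article}
\usepackage{amsmath}
\begin{document}

% A numbered display equation
\begin{equation}
  \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\end{equation}

% A small table with aligned columns and horizontal rules
\begin{tabular}{lcc}
  \hline
  Sample & Mass (g) & Purity (\%) \\
  \hline
  A & 1.20 & 99.1 \\
  B & 0.85 & 97.4 \\
  \hline
\end{tabular}

\end{document}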

AI efficiency advances with spintronic memory chip that combines storage and processing

To make accurate predictions and reliably complete desired tasks, most artificial intelligence (AI) systems need to rapidly analyze large amounts of data. This currently entails the transfer of data between processing and memory units, which are separate in existing electronic devices.

Over the past few years, many engineers have been trying to develop new hardware, known as compute-in-memory (CIM) systems, that could run AI algorithms more efficiently. CIM systems are electronic components that can both perform computations and store information, serving as processors and as non-volatile memories. Non-volatile essentially means that they can retain data even when they are turned off.

Most previously introduced CIM designs rely on analog computing approaches, which allow devices to perform calculations leveraging electrical current. Despite their good energy efficiency, analog computing techniques are known to be significantly less precise than digital computing methods and often fail to reliably handle large AI models or vast amounts of data.
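
The precision gap is easy to see in a toy model. The sketch below, with purely illustrative numbers rather than measurements of any chip, quantizes a weight matrix to a small number of analog conductance levels, adds read noise, and compares the resulting multiply-accumulate output with a full-precision reference.

import numpy as np

rng = np.random.default_rng(1)

# Quantize weights to a limited number of analog conductance levels.
def quantize(w, levels=16):
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)

exact = W @ x                                  # full-precision reference
analog = quantize(W, levels=16) @ x            # limited conductance resolution
analog += 0.01 * np.abs(analog).mean() * rng.standard_normal(analog.shape)  # read noise

rel_err = np.linalg.norm(analog - exact) / np.linalg.norm(exact)
print(f"relative MAC error with 16 levels + noise: {rel_err:.3%}")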

Artificial neurons replicate biological function for improved computer chips

Researchers at the USC Viterbi School of Engineering and School of Advanced Computing have developed artificial neurons that replicate the complex electrochemical behavior of biological brain cells.

The innovation, documented in Nature Electronics, is a leap forward in neuromorphic computing technology: it could shrink chip size and cut energy consumption by orders of magnitude, and it could help advance artificial general intelligence.

Unlike conventional digital processors or existing silicon-based neuromorphic chips that merely simulate neural activity, these artificial neurons physically embody, or emulate, the analog dynamics of their biological counterparts. Just as neurochemicals initiate brain activity, chemicals can be used to initiate computation in these neuromorphic (brain-inspired) chips. Because they physically replicate the biological process, they differ from prior artificial neurons, which were solely mathematical equations.
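
As background, a standard textbook model (not the USC device) gives a feel for the analog membrane dynamics such hardware emulates: a leaky integrate-and-fire neuron integrates input current, leaks toward its resting potential, and fires and resets when it crosses a threshold. All constants below are illustrative.

import numpy as np

def simulate_lif(I, dt=1e-4, tau=20e-3, v_rest=-65e-3, v_th=-50e-3,
                 v_reset=-70e-3, R=1e7):
    """Leaky integrate-and-fire neuron driven by a current trace I (amps)."""
    V = v_rest
    trace, spike_times = [], []
    for step, i_in in enumerate(I):
        # membrane potential leaks toward rest and integrates the input current
        V += (-(V - v_rest) + R * i_in) * dt / tau
        if V >= v_th:                  # threshold crossing: spike, then reset
            spike_times.append(step * dt)
            V = v_reset
        trace.append(V)
    return np.array(trace), spike_times

current = np.full(2000, 2e-9)          # constant 2 nA input for 0.2 s
trace, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes in 0.2 s")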

Unit-free theorem pinpoints key variables for AI and physics models

Machine learning models are designed to take in data, to find patterns or relationships within those data, and to use what they have learned to make predictions or to create new content. The quality of those outputs depends not only on the details of a model’s inner workings but also, crucially, on the information that is fed into the model.

Some models follow a brute force approach, essentially adding every bit of data related to a particular problem into the model and seeing what comes out. But a sleeker, less energy-hungry way to approach a problem is to determine which variables are vital to the outcome and only provide the model with information about those key variables.

Now, Adrián Lozano-Durán, an associate professor of aerospace at Caltech and a visiting professor at MIT, and MIT graduate student Yuan Yuan have developed a theorem that takes any number of possible variables and whittles them down, leaving only those that are most important. In the process, the method removes all units, such as meters and feet, from the underlying equations, making them dimensionless, something scientists require of equations that describe the physical world. The work can be applied not only to machine learning but to any model of a physical system.
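
The authors' theorem is not reproduced here, but a classic dimensional-analysis computation in the same spirit shows what removing the units looks like: list each variable's exponents in the base dimensions, and the null space of that matrix gives the dimensionless groups, with irrelevant variables (here, a pendulum's mass) dropping out.

from sympy import Matrix

# Each column lists a variable's exponents in the base dimensions M, L, T
# for a swinging pendulum: period, length, gravity, mass.
dims = Matrix([
    [0, 0,  0, 1],   # mass M
    [0, 1,  1, 0],   # length L
    [1, 0, -2, 0],   # time T
])

# Each null-space vector is a set of exponents combining the variables into a
# dimensionless group.
for v in dims.nullspace():
    print(v.T)   # [2, -1, 1, 0]: period^2 * gravity / length is dimensionless; mass drops out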

Researcher improves century-old equation to predict movement of dangerous air pollutants

A new method developed at the University of Warwick offers the first simple and predictive way to calculate how irregularly shaped nanoparticles—a dangerous class of airborne pollutant—move through the air.

Every day, we breathe in millions of airborne particles, including soot, dust, pollen, microplastics, viruses, and synthetic nanoparticles. Some are small enough to slip deep into the lungs and even enter the bloodstream, contributing to conditions such as heart disease, stroke, and cancer.

Most of these are irregularly shaped. Yet the mathematical models used to predict how these particles behave typically assume they are perfect spheres, simply because the equations are easier to solve. This makes it difficult to monitor or predict the movement of real-world, non-spherical—and often more hazardous—particles.
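
For context, the kind of spherical-particle formula the article says is usually assumed looks like the sketch below: Stokes settling with the Cunningham slip correction, which matters once particle size approaches the mean free path of air molecules. This is the standard textbook model, not the new Warwick method; the constants are the usual empirical values for air.

import math

MU_AIR = 1.81e-5          # dynamic viscosity of air near 20 C, Pa*s
MEAN_FREE_PATH = 68e-9    # mean free path of air molecules, m

def cunningham(d):
    """Slip correction factor for a sphere of diameter d (standard empirical constants)."""
    kn = 2 * MEAN_FREE_PATH / d                    # Knudsen number
    return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def settling_velocity(d, rho=1000.0, g=9.81):
    """Terminal settling speed of a sphere of diameter d (m) and density rho (kg/m^3)."""
    return rho * d**2 * g * cunningham(d) / (18 * MU_AIR)

for d in (100e-9, 1e-6, 10e-6):
    print(f"d = {d*1e9:7.0f} nm  ->  v = {settling_velocity(d):.3e} m/s")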

Gravitational wave events hint at ‘second-generation’ black holes

In a paper published in The Astrophysical Journal Letters, the international LIGO-Virgo-KAGRA Collaboration reports on the detection of two gravitational wave events in October and November of 2024 with unusual black hole spins. This observation adds an important new piece to our understanding of the most elusive phenomena in the universe.

Gravitational waves are “ripples” in spacetime that result from cataclysmic events in deep space, with the strongest waves produced by the collision of black holes.

Using sophisticated algorithmic techniques and mathematical models, researchers are able to reconstruct many physical features of the detected black holes from the gravitational-wave signals, such as their masses, the distance of the event from Earth, and even the speed and direction of their rotation about their axes, known as spin.
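
Two of the quantities routinely reported from such an analysis have simple textbook definitions, sketched below with illustrative numbers (not the parameters of the 2024 events): the chirp mass, which sets how the signal's frequency sweeps upward during the inspiral, and the effective spin, a mass-weighted projection of the two spins onto the orbital axis.

def chirp_mass(m1, m2):
    """Chirp mass in the same units as m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def effective_spin(m1, m2, chi1z, chi2z):
    """Mass-weighted spin component along the orbital axis, in [-1, 1]."""
    return (m1 * chi1z + m2 * chi2z) / (m1 + m2)

m1, m2 = 35.0, 25.0            # solar masses (illustrative)
chi1z, chi2z = 0.7, -0.3       # dimensionless spin projections (illustrative)
print(f"chirp mass = {chirp_mass(m1, m2):.1f} Msun")
print(f"chi_eff    = {effective_spin(m1, m2, chi1z, chi2z):+.2f}")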

ML Systems Textbook

Machine Learning Systems provides a systematic framework for understanding and engineering machine learning (ML) systems. This textbook bridges the gap between theoretical foundations and practical engineering, emphasizing the systems perspective required to build effective AI solutions. Unlike resources that focus primarily on algorithms and model architectures, this book highlights the broader context in which ML systems operate, including data engineering, model optimization, hardware-aware training, and inference acceleration. Readers will develop the ability to reason about ML system architectures and apply enduring engineering principles for building flexible, efficient, and robust machine learning systems.

AI teaches itself and outperforms human-designed algorithms

Like humans, artificial intelligence learns by trial and error, but traditionally, it requires humans to set the ball rolling by designing the algorithms and rules that govern the learning process. However, as AI technology advances, machines are increasingly doing things themselves. An example is a new AI system developed by researchers that invented its own way to learn, resulting in an algorithm that outperformed human-designed algorithms on a series of complex tasks.

For decades, human engineers have designed the algorithms that agents use to learn, especially reinforcement learning (RL), where an AI learns by receiving rewards for successful actions. While learning comes naturally to humans and animals, thanks to millions of years of evolution, it has to be explicitly taught to AI. This process is often slow and laborious and is ultimately limited by human intuition.

Taking their cue from evolution, which is a random trial and error process, the researchers created a large digital population of AI agents. These agents tried to solve numerous tasks in many different, complex environments using a particular learning rule.
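
A toy version of that recipe, with everything from the task to the mutation scheme invented for illustration (it is not the researchers' system), might look like the following: a population of candidate learning rules, here reduced to a single step-size parameter, is scored by how quickly agents learn a simple task, and the best candidates are mutated into the next generation.

import numpy as np

rng = np.random.default_rng(2)

def agent_score(lr, steps=50):
    """Train a 1-D weight toward a target with the given step size; higher is better."""
    target, w = 3.0, 0.0
    for _ in range(steps):
        grad = 2 * (w - target)        # gradient of (w - target)^2
        w -= lr * grad
    return -(w - target) ** 2          # negative final loss

population = rng.uniform(0.001, 1.0, size=32)            # candidate learning rules
for generation in range(20):
    scores = np.array([agent_score(lr) for lr in population])
    parents = population[np.argsort(scores)[-8:]]        # keep the best 8
    children = np.repeat(parents, 4) * np.exp(0.1 * rng.standard_normal(32))
    population = np.clip(children, 1e-4, 1.0)            # mutated next generation

print(f"evolved step size ~ {np.median(population):.3f}")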
