Artificial neurons replicate biological function for improved computer chips

Researchers at the USC Viterbi School of Engineering and School of Advanced Computing have developed artificial neurons that replicate the complex electrochemical behavior of biological brain cells.

The innovation, documented in Nature Electronics, is a leap forward in neuromorphic computing technology: it could shrink chip size and energy consumption by orders of magnitude and could help advance artificial general intelligence.

Unlike conventional digital processors or existing silicon-based neuromorphic chips that merely simulate neural activity, these devices physically embody the analog dynamics of their biological counterparts. Just as neurochemicals initiate activity in the brain, chemicals can be used to initiate computation in these neuromorphic (brain-inspired) chips. Because they physically replicate the biological process, they differ from prior artificial neurons, which were solely mathematical equations.
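For contrast, the sketch below shows the kind of purely mathematical artificial neuron the article refers to: a textbook leaky integrate-and-fire model. It is a generic illustration with placeholder parameter values, not a model taken from the Nature Electronics paper.

```python
import numpy as np

# A standard leaky integrate-and-fire neuron: the equation-only kind of
# artificial neuron the article contrasts with USC's electrochemical hardware.
# Parameter values below are generic textbook defaults, not from the paper.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-65e-3, r_m=1e7):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau and emit spikes at threshold."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossing -> spike, then reset
            spikes.append(t * dt)
            v = v_reset
    return spikes

current = np.full(5000, 2e-9)        # constant 2 nA drive for 0.5 s
print(len(simulate_lif(current)), "spikes")
```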

Neuromorphic computer prototype learns patterns with fewer computations than traditional AI

Could computers ever learn more like humans do, without relying on artificial intelligence (AI) systems that must undergo extremely expensive training?

Neuromorphic computing might be the answer. This emerging technology features brain-inspired computer hardware that could perform AI tasks far more efficiently, with far fewer training computations and much less power than conventional systems. Consequently, neuromorphic computers also have the potential to reduce reliance on energy-intensive data centers and bring AI inference and learning to edge devices.

Dr. Joseph S. Friedman, associate professor of electrical and computer engineering at The University of Texas at Dallas, and his team of researchers in the NeuroSpinCompute Laboratory have taken an important step forward in building a neuromorphic computer by creating a small-scale prototype that learns patterns and makes predictions using fewer training computations than conventional AI systems. Their next challenge is to scale up the proof-of-concept to larger sizes.

Unit-free theorem pinpoints key variables for AI and physics models

Machine learning models are designed to take in data, to find patterns or relationships within those data, and to use what they have learned to make predictions or to create new content. The quality of those outputs depends not only on the details of a model’s inner workings but also, crucially, on the information that is fed into the model.

Some models follow a brute force approach, essentially adding every bit of data related to a particular problem into the model and seeing what comes out. But a sleeker, less energy-hungry way to approach a problem is to determine which variables are vital to the outcome and only provide the model with information about those key variables.

Now, Adrián Lozano-Durán, an associate professor of aerospace at Caltech and a visiting professor at MIT, and MIT graduate student Yuan Yuan have developed a theorem that takes any number of possible variables and whittles them down, leaving only those that are most important. In the process, the method removes all units, such as meters and feet, from the underlying equations, making them dimensionless, a property scientists require of equations that describe the physical world. The work can be applied not only to machine learning but to any model of a physical system.
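To make the idea concrete, here is a minimal sketch of classic dimensional reduction in the Buckingham-Pi spirit, not the authors' theorem: four unit-bearing pipe-flow variables collapse into a single dimensionless group (the Reynolds number), so a model needs one input instead of four. The function and values are illustrative.

```python
# Minimal illustration of dimensional reduction (Buckingham Pi style), not the
# theorem from the Caltech/MIT paper: four dimensional pipe-flow inputs
# collapse into one dimensionless group.

def reynolds_number(velocity_m_s: float,
                    diameter_m: float,
                    density_kg_m3: float,
                    viscosity_pa_s: float) -> float:
    """Re = rho * U * D / mu -- the units cancel, leaving a pure number."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

# Two physically different setups (SI units) with the same Reynolds number
# behave the same way, so a model only needs Re, not all four raw variables.
re_a = reynolds_number(velocity_m_s=1.0, diameter_m=0.05,
                       density_kg_m3=1000.0, viscosity_pa_s=1.0e-3)
re_b = reynolds_number(velocity_m_s=2.0, diameter_m=0.025,
                       density_kg_m3=1000.0, viscosity_pa_s=1.0e-3)
print(re_a, re_b)  # both 50000.0 -> one dimensionless input instead of four
```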

AI Boosts Ocean Forecasting Accuracy and Speed

“The ability to resolve the Gulf Stream and its dynamics properly has been an open challenge for many years in oceanography,” said Dr. Ashesh Chattopadhyay.

How can AI be used to improve ocean forecasting? This is what a recent study published in the Journal of Geophysical Research: Machine Learning and Computation hopes to address, as a team of researchers investigated how AI can be used to predict short- and long-term trends in ocean dynamics. The study could help scientists and the public better understand new methods for long-term ocean forecasting, especially as climate change raises ocean temperatures.

For the study, the researchers presented a new AI-based modeling tool for predicting ocean dynamics in the Gulf of Mexico, a major trade route between the United States and Mexico. The tool is designed to build on the longstanding physics-based models traditionally used to predict ocean dynamics, including temperature and how it changes over time.

The researchers found that the new model delivers improved performance in predicting ocean dynamics over both short-term horizons of 30 days and long-term horizons of 10 years. The team hopes to use the tool to model ocean dynamics worldwide.
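As a purely illustrative sketch, not the model from the study, data-driven ocean forecasters are often applied autoregressively: a learned one-step map is rolled forward 30 steps for a monthly forecast or thousands of steps for a decade-scale run. The toy dynamics and grid below are assumptions for illustration only.

```python
import numpy as np

# Illustrative autoregressive rollout of a data-driven forecaster (toy stand-in,
# not the model from the study): a learned step function maps today's ocean
# state to tomorrow's, and repeated application gives short- or long-range runs.

def learned_step(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained ML surrogate of a physics-based ocean model."""
    return 0.99 * state + 0.01 * np.tanh(state)   # toy dynamics only

def rollout(initial_state: np.ndarray, n_days: int) -> np.ndarray:
    state = initial_state
    for _ in range(n_days):
        state = learned_step(state)               # one step = one forecast day
    return state

sst = np.full((64, 64), 25.0)                     # toy sea-surface-temperature field [deg C]
short_term = rollout(sst, n_days=30)              # 30-day forecast horizon
long_term = rollout(sst, n_days=3650)             # ~10-year climate-style run
print(short_term.mean(), long_term.mean())
```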

Inside X0 and XTR-0

XTR-0 is the first way Extropic chips will be integrated with conventional computers. We intend to build more advanced systems around future TSUs that allow them to more easily integrate with conventional AI accelerators like GPUs. This could take the form of something simple like a PCIe card, or could in principle be as complicated as building a single chip that contains both a GPU and a TSU.

X0 houses a family of circuits that generate samples from primitive probability distributions. Our future chips will combine millions of these probabilistic circuits to run EBMs efficiently.

The probabilistic circuits on X0 output random continuous-time voltage signals. Repeatedly observing a signal (waiting sufficiently long between observations) allows a user to generate approximately independent samples from the distribution embodied by the circuit. Because the randomness is produced directly by the circuit hardware, these sampling circuits are much more energy efficient than their counterparts on deterministic digital computers.
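As a rough software analogy, not the physics of X0, the sketch below treats the circuit output as a noisy voltage with a short correlation time; reading it out only after waiting many correlation times yields approximately independent samples, which is the sampling pattern described above. All parameter values are assumed.

```python
import numpy as np

# Software caricature of a probabilistic circuit (illustrative only, not X0's
# actual physics): an Ornstein-Uhlenbeck-like voltage fluctuating around 0 V.
# Observing it at intervals much longer than its correlation time gives
# approximately independent samples from its stationary distribution.

rng = np.random.default_rng(0)
tau, sigma, dt = 1e-6, 0.1, 1e-8      # correlation time [s], noise scale [V], step [s]

def step(v: float) -> float:
    """Advance the noisy voltage by one time step dt."""
    return v - (dt / tau) * v + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()

v, samples = 0.0, []
wait_steps = int(10 * tau / dt)       # wait ~10 correlation times between reads
for _ in range(1000):
    for _ in range(wait_steps):
        v = step(v)
    samples.append(v)                  # approximately i.i.d. draws

print(np.mean(samples), np.std(samples))  # ~0 V mean, spread set by sigma
```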

Ultracompact semiconductor could power next-gen AI and 6G chips

A research team led by Professor Heein Yoon in the Department of Electrical Engineering at UNIST has unveiled an ultra-small hybrid low-dropout regulator (LDO) that promises to advance power management in advanced semiconductor devices. The chip not only stabilizes voltage more effectively but also filters out noise while taking up less space, opening new doors for high-performance systems-on-chip (SoCs) used in AI, 6G communications, and beyond.

The new LDO combines analog and digital circuit strengths in a hybrid design, ensuring stable power delivery even during sudden changes in current demand—like when launching a game on your smartphone—and effectively blocking unwanted noise from the power supply.

What sets this development apart is its use of a cutting-edge digital-to-analog transfer (D2A-TF) method and a local ground generator (LGG), which work together to deliver exceptional voltage stability and noise suppression. In tests, it kept voltage ripple to just 54 millivolts during rapid 99 mA current swings and managed to restore the voltage to its proper level in just 667 nanoseconds. Plus, it achieved a power supply rejection ratio (PSRR) of −53.7 dB at 10 kHz with a 100 mA load, meaning it can effectively filter out nearly all noise at that frequency.
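For a sense of scale, the quick calculation below converts the reported figures into linear terms; the 1.0 V output rail and 100 mV of supply noise are assumed example values, not numbers from the paper.

```python
# Back-of-the-envelope interpretation of the reported LDO figures.
# The 1.0 V rail and 100 mV supply noise are assumed examples, not from the paper.

psrr_db = -53.7                      # power supply rejection ratio at 10 kHz, 100 mA load
attenuation = 10 ** (psrr_db / 20)   # fraction of supply noise that reaches the output
print(f"Supply-noise feed-through: {attenuation:.4%}")       # ~0.21%, i.e. ~480x reduction

supply_noise_mv = 100.0              # hypothetical 100 mV of noise on the input supply
print(f"Residual output noise: {supply_noise_mv * attenuation:.2f} mV")

ripple_mv, rail_v = 54.0, 1.0        # reported transient ripple vs. assumed 1.0 V rail
print(f"Transient droop: {ripple_mv / (rail_v * 1000):.1%} of the rail "
      f"during a 99 mA load step, recovered within 667 ns")
```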

Machine learning enables real-time analysis of iron oxide thin film growth in reactive magnetron sputtering

Researchers at the University of Tsukuba have developed a technology for real-time estimation of the valence state and growth rate of iron oxide thin films during their formation. The technology was realized by using machine learning to analyze the full-wavelength plasma emission spectra generated during reactive sputtering, and it is expected to enable high-precision control of the film deposition process.

Metal oxide and nitride thin films are commonly used in electronic devices and energy materials. Reactive sputtering is a versatile technique for depositing thin films by reacting a target metal with gases such as oxygen or nitrogen. A challenge with this process is that the target surface transitions between metallic and compound states, causing large fluctuations in film growth rate and composition. At present, there are few effective methods for real-time monitoring of a material’s chemical state and deposition rate during film formation.

A machine learning technique based on principal component analysis (PCA) was employed to examine the massive set of emission spectra generated within a reactive sputter plasma. This analysis focused on assessing the state of thin film formation. The results, published in Science and Technology of Advanced Materials: Methods, indicated that the valence state of the iron oxide was accurately identified using only the first and second principal components of the spectra. In addition, the film growth rate was predicted with high precision.
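A minimal sketch of that general workflow, using synthetic data and scikit-learn rather than the authors' actual pipeline, might look like the following; the class labels and growth-rate values are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression, LinearRegression

# Sketch of the general workflow described above (synthetic data, not the
# authors' pipeline): compress full-wavelength plasma emission spectra with
# PCA, then predict oxide valence state and film growth rate from the
# leading principal components.

rng = np.random.default_rng(1)
n_spectra, n_wavelengths = 200, 1024
spectra = rng.normal(size=(n_spectra, n_wavelengths))      # stand-in emission spectra
valence_labels = rng.integers(0, 3, size=n_spectra)        # e.g. FeO / Fe3O4 / Fe2O3 classes
growth_rate = rng.normal(1.0, 0.2, size=n_spectra)         # nm/min, synthetic

pca = PCA(n_components=2)                                  # keep only PC1 and PC2
features = pca.fit_transform(spectra)

valence_model = LogisticRegression(max_iter=1000).fit(features, valence_labels)
rate_model = LinearRegression().fit(features, growth_rate)

new_spectrum = rng.normal(size=(1, n_wavelengths))         # spectrum acquired during deposition
z = pca.transform(new_spectrum)
print(valence_model.predict(z), rate_model.predict(z))     # real-time style inference
```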
