Summary: To probe how traumatic memories form, researchers combined innovative optical and machine-learning methods to decode the neuronal networks the brain engages during fear-memory formation.
The team identified a neural population encoding fear memory, revealing the synchronous activation and crucial role of the dorsal part of the medial prefrontal cortex (dmPFC) in associative fear memory retrieval in mice.
Analytical approaches, including the 'elastic net' machine-learning algorithm (a regularized regression method), pinpointed specific neurons and their functional connectivity within the spatial and functional fear-memory neural network.
When ChatGPT and similar language models are made to check their own answers, they make fewer mistakes, according to a new study by Meta.
ChatGPT and other language models repeatedly reproduce incorrect information, even when they have learned the correct facts. Several approaches exist to reduce such hallucinations. Researchers at Meta AI now present Chain-of-Verification (CoVe), a prompt-based method that significantly reduces the problem.
The new method relies on the language model verifying its own output.
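As described in the article, CoVe is a prompting pipeline: draft an answer, plan verification questions, answer them independently of the draft, then revise. A minimal sketch of that control flow, where `ask_llm` is a stand-in for a real language-model call and the prompt wording is illustrative, not taken from the paper:

```python
# Sketch of the Chain-of-Verification (CoVe) loop. `ask_llm` is a
# placeholder for a real language-model API call; here it is stubbed
# so the control flow is runnable end to end.

def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would query a language model here.
    return f"[model response to: {prompt[:40]}...]"

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = ask_llm(question)
    # 2. Have the model plan fact-checking questions about its own draft.
    plan = ask_llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        "List fact-checking questions that would verify this draft."
    )
    # 3. Answer each verification question independently, without showing
    #    the draft, so errors in the draft are not simply repeated.
    verifications = [ask_llm(q) for q in plan.splitlines() if q.strip()]
    # 4. Produce a revised answer conditioned on the verification results.
    return ask_llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        f"Verification Q&A: {verifications}\n"
        "Write a final, corrected answer."
    )

print(chain_of_verification("Name some politicians born in New York."))
```

The key design point is step 3: answering verification questions in isolation prevents the model from blindly confirming its own draft.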
In a new Science Advances study, scientists from the University of Science and Technology of China have developed a dynamic network structure using laser-controlled conducting filaments for neuromorphic computing.
Neuromorphic computing is an emerging field of research that draws inspiration from the human brain to create efficient and intelligent computer systems. At its core, neuromorphic computing relies on artificial neural networks, which are computational models inspired by the neurons and synapses in the brain. Building hardware that behaves this way, however, is challenging.
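The basic unit that neuromorphic hardware typically emulates is a spiking neuron. A minimal software model of one such unit, a leaky integrate-and-fire neuron, illustrates the behavior (this is a standard textbook model, not the device from the study; the parameter values are arbitrary):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: it integrates input
# current, leaks charge over time, and emits a spike when the membrane
# potential crosses a threshold.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return a spike train (1 = spike, 0 = silent) for a list of inputs."""
    v = 0.0                      # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current   # leaky integration of the input current
        if v >= threshold:       # fire when the potential crosses threshold
            spikes.append(1)
            v = 0.0              # reset after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# → [0, 0, 0, 1, 0, 0, 1]
```

Sub-threshold inputs accumulate until the neuron fires, which is the event-driven, sparse activity that makes neuromorphic hardware energy-efficient.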
Mott materials have emerged as suitable candidates for neuromorphic computing due to their unique transition properties. Mott transition involves a rapid change in electrical conductivity, often accompanied by a transition between insulating and metallic states.
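The insulator–metal switching described above can be modeled in software as a hysteretic two-state switch: conductivity jumps when a stimulus crosses a set threshold and relaxes back only below a lower reset threshold. The threshold values here are arbitrary and purely illustrative:

```python
# Toy model of Mott threshold switching with hysteresis: the material
# becomes metallic when the stimulus (e.g. voltage) exceeds set_level,
# and returns to insulating only when it falls below reset_level.

def mott_switch(stimuli, set_level=1.0, reset_level=0.4):
    """Return the insulating/metallic state after each stimulus value."""
    metallic = False
    states = []
    for s in stimuli:
        if not metallic and s >= set_level:
            metallic = True      # insulator -> metal (abrupt switch)
        elif metallic and s <= reset_level:
            metallic = False     # metal -> insulator on relaxation
        states.append("metallic" if metallic else "insulating")
    return states

print(mott_switch([0.2, 0.7, 1.1, 0.7, 0.3]))
# → ['insulating', 'insulating', 'metallic', 'metallic', 'insulating']
```

The hysteresis (staying metallic at 0.7 after switching, but insulating at 0.7 before) is what lets such a filament act as a memory element rather than a plain resistor.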
Forget the cloud. Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.
With its tiny footprint, ultra-low power consumption and lack of lag time waiting for analyses, the device is ideal for direct incorporation into wearable electronics (such as smartwatches and fitness trackers) for real-time data processing and near-instant diagnostics.
To test the concept, engineers used the device to classify large amounts of information from publicly available electrocardiogram (ECG) datasets. Not only could the device efficiently and correctly identify an irregular heartbeat, it could also determine the arrhythmia subtype from among six different categories with nearly 95% accuracy.
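The task described, assigning a heartbeat to one of several arrhythmia classes from its features, can be illustrated in software with a simple nearest-centroid classifier. The feature choices, class names, and centroid values below are invented for illustration; the article does not disclose the device's actual algorithm or feature set:

```python
# Illustrative nearest-centroid beat classifier. Features and centroids
# are hypothetical, not from the Northwestern study.

import math

# Hypothetical per-class centroids: (RR interval in s, QRS width in s)
CENTROIDS = {
    "normal":             (0.80, 0.08),
    "atrial_fib":         (0.55, 0.09),
    "ventricular_ectopy": (0.70, 0.14),
}

def classify_beat(rr_interval: float, qrs_width: float) -> str:
    """Assign a beat to the class with the nearest feature centroid."""
    def dist(c):
        return math.hypot(rr_interval - c[0], qrs_width - c[1])
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify_beat(0.78, 0.09))   # a beat close to the "normal" centroid
```

A real system would extract many more features per beat and use a learned model; the point here is only the shape of the classification task the chip performs on-device instead of in the cloud.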
Chip designer Tachyum has accepted a major purchase order from a U.S. company to build a new supercomputing system for AI. This will be based on its 5 nanometre (nm) “Prodigy” Universal Processor chip, delivering more than 50 exaFLOPS of performance.
Tachyum, founded in 2016 and headquartered in Santa Clara, California, claims to have developed a disruptive, ultra-low-power processor architecture that could revolutionise data centre, AI, and high-performance computing (HPC) markets.