Based on how an AI model transcribes audio into text, the researchers behind the study were able to map brain activity during conversation more accurately than traditional models that encode specific features of language structure, such as phonemes (the simple sounds that make up words) and parts of speech (nouns, verbs and adjectives).

The model used in the study, called Whisper, instead takes audio files and their text transcripts as training data and maps the audio to the text. It then uses the statistics of that mapping to “learn” to predict text from new audio files it hasn’t previously heard.
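
As a concrete illustration of that audio-to-text mapping, here is a minimal sketch using the open-source openai-whisper Python package; the model size and file name are illustrative assumptions, and this shows only the basic transcription step, not the brain-mapping analysis itself.

```python
# Minimal sketch: transcribing audio with the open-source Whisper package.
# Assumes `pip install openai-whisper` and a local file named "conversation.wav" (hypothetical).
import whisper

model = whisper.load_model("base")             # load a pretrained speech-to-text model
result = model.transcribe("conversation.wav")  # map new, previously unheard audio to text
print(result["text"])                          # the predicted transcript
```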

National Institutes of Health researchers have mapped how individual neurons in the primary somatosensory cortex receive brain-wide presynaptic inputs that encode behavioral states, refining our understanding of cortical activity.

Neurons in the primary somatosensory cortex process different types of sensory information and exhibit distinct activity patterns, yet the cause of these differences has remained unclear. Previous research emphasized the role of motor cortical regions in movement-related processing, but also recognized that the thalamus plays a role beyond sensory relay.

Using high-resolution single-cell mapping to trace brain-wide presynaptic inputs, the team revealed that thalamic input is the primary driver for movement-correlated neurons, while motor cortical input plays a smaller role.

NASA’s upcoming EZIE mission will use three small satellites to study electrojets — powerful electrical currents in the upper atmosphere linked to auroras. These mysterious currents influence geomagnetic storms that can disrupt satellites, power grids, and communication systems. By mapping how electrojets evolve, EZIE will improve space weather predictions, helping to safeguard modern technology.

How the Brain Maps Jaw Movements: A Hidden Architecture of Motion

Our brains contain intricate maps that guide every voluntary movement we make, from reaching out to grab a cup to the delicate motions involved in speaking or chewing. But how exactly are these maps organized, and what role do different types of brain cells play in shaping them?

A new study dives deep into the orofacial motor maps—the brain’s blueprint for controlling jaw movements—revealing a surprising level of organization. Researchers used optogenetics, a technique that activates specific neurons with light, to map out how different classes of excitatory neurons contribute to jaw motion in mice. What they found was remarkable: rather than a single unified map, the motor cortex is divided into distinct, genetically defined modules, each governing jaw movement from different brain regions, including sensory, motor, and premotor areas.

These modules don’t act in isolation. When one was stimulated, activity rippled across the brain, converging in the primary motor cortex, the region that directly controls movement. What’s more, when the mice learned new motor skills—such as refining their licking motion—some of these modules expanded, adapting to support the learned behavior.

This research suggests that voluntary movement isn’t just dictated by a single command center. Instead, a network of specialized cell groups collaborates across different parts of the brain, dynamically adjusting as we learn new motor skills. Understanding this fine-tuned motor map could have implications for treating movement disorders or even advancing brain-computer interfaces in the future.


Scientists have identified previously unknown neural modules in the brain that control movement and adapt during skill learning. Their findings challenge long-held ideas about how the brain organizes movement.

Optical atomic clocks have the potential to improve timekeeping and GPS

GPS, or Global Positioning System, is a satellite-based navigation system that provides location and time information anywhere on or near the Earth’s surface. It consists of a network of satellites, ground control stations, and GPS receivers, which are found in a variety of devices such as smartphones, cars, and aircraft. GPS is used for a wide range of applications including navigation, mapping, tracking, and timing, and has an accuracy of about 3 meters (10 feet) in most conditions.
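
To see why better clocks matter for GPS, here is a rough back-of-envelope sketch (illustrative numbers, not figures from the article): receivers compute position from signal travel times, so any clock error becomes a ranging error at the speed of light.

```python
# Illustrative only: how a timing error translates into a GPS ranging error.
C = 299_792_458.0  # speed of light, m/s

def range_error_m(clock_error_s: float) -> float:
    """Approximate ranging error caused by a given clock error."""
    return C * clock_error_s

print(range_error_m(10e-9))   # ~3 m for a 10-nanosecond error, roughly today's GPS accuracy
print(range_error_m(1e-10))   # ~3 cm for a 0.1-nanosecond error
```

In this rough picture, a more stable clock tightens the position fix in direct proportion, which is why optical atomic clocks could improve timekeeping and GPS alike.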

Using lattice quantum chromodynamics, researchers have created what is likely the smallest force field map ever generated. Their findings reveal astonishingly powerful interactions: forces comparable to the weight of 10 elephants, acting within a space smaller than an atomic nucleus.

Mapping the Forces Inside a Proton

Scientists have successfully mapped the forces inside a proton, revealing in unprecedented detail how quarks—the tiny particles within—react when struck by high-energy photons.
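
For a sense of the scale involved, here is a rough order-of-magnitude check, assuming the commonly quoted QCD string tension of about 1 GeV per femtometer (an assumption made for illustration, not a number taken from this study):

```python
# Rough scale check (assumed QCD string tension ~1 GeV/fm; not the study's own figures).
GEV_IN_JOULES = 1.602e-10   # 1 GeV expressed in joules
FEMTOMETER = 1e-15          # 1 fm in meters
G = 9.81                    # gravitational acceleration, m/s^2

force_newtons = GEV_IN_JOULES / FEMTOMETER   # ~1.6e5 N inside a femtometer-scale region
equivalent_mass_kg = force_newtons / G       # ~1.6e4 kg, i.e. tens of tonnes

print(f"{force_newtons:.1e} N")              # on the order of 10^5 newtons
print(f"{equivalent_mass_kg:.1e} kg")        # weight comparable to a handful of elephants
```

The estimate lands in the same order of magnitude as the article's elephant comparison.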

Scientists have uncovered “Quipu,” the largest known galactic structure, stretching 1.4 billion light-years. This discovery reshapes cosmic mapping and affects key measurements of the universe’s expansion.

A team of scientists has identified the largest cosmic superstructure ever reliably measured. The discovery was made while mapping the nearby universe using galaxy clusters detected in the ROSAT X-ray satellite’s sky survey. Spanning approximately 1.4 billion light-years, this structure — primarily composed of dark matter — is the largest known formation in the universe to date. The research was led by scientists from the Max Planck Institute for Extraterrestrial Physics and the Max Planck Institute for Physics, in collaboration with colleagues from Spain and South Africa.

A Vastly Structured Universe

Dr. Rumi Chunara: “Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments.”


How can artificial intelligence (AI) help city planning account for more green spaces? This is the question a recent study published in the ACM Journal on Computing and Sustainable Societies set out to address, with a team of researchers proposing a novel AI-based approach for both monitoring and improving urban green spaces. These natural public spaces, such as parks and gardens, provide a myriad of benefits: they support physical and mental health, help combat climate change, provide wildlife habitat, and encourage social interaction.

For the study, the researchers developed a method they call “green augmentation”, which uses an AI algorithm to analyze Google Earth satellite images and identify green vegetation such as grass and trees more accurately under varying weather and seasonal conditions. Current AI methods identify green vegetation with an accuracy of 63.3 percent and a reliability of 64 percent; the new method achieved an accuracy of 89.4 percent and a reliability of 90.6 percent.

“Previous methods relied on simple light wavelength measurements,” said Dr. Rumi Chunara, who is an associate professor of biostatistics at New York University and a co-author on the study. “Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments. This type of data is necessary for urban planners to identify neighborhoods that lack vegetation so they can develop new green spaces that will deliver the most benefits possible. Without accurate mapping, cities cannot address disparities effectively.”
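
As a point of reference for the “simple light wavelength measurements” that earlier methods relied on, here is a minimal sketch of a spectral-index baseline of the kind the new approach improves upon; the band arrays and threshold are illustrative assumptions, not the study's pipeline or data.

```python
# Illustrative baseline only: vegetation detection from a spectral index (NDVI-style rule).
# The study's "green augmentation" method instead learns subtler visual patterns.
import numpy as np

def vegetation_mask(nir: np.ndarray, red: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a boolean vegetation mask from near-infrared and red reflectance bands."""
    ndvi = (nir - red) / (nir + red + 1e-8)   # normalized difference vegetation index
    return ndvi > threshold                   # crude rule: high NDVI means vegetation

# Tiny synthetic example: one "vegetated" pixel and one "bare" pixel.
nir = np.array([[0.6, 0.2]])
red = np.array([[0.1, 0.2]])
print(vegetation_mask(nir, red))              # [[ True False]]
```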

Swimming robots are essential for mapping pollution, studying aquatic ecosystems, and monitoring water quality in sensitive areas such as coral reefs and lake shores. However, many existing models rely on noisy propellers that can disturb or even harm wildlife. Additionally, navigating these environments is challenging due to natural obstacles like plants, animals, and debris.

To address these issues, researchers from the Soft Transducers Lab and the Unsteady Flow Diagnostics Laboratory at EPFL’s School of Engineering, in collaboration with the Max Planck Institute for Intelligent Systems, have developed a compact, highly maneuverable swimming robot. Smaller than a credit card and weighing just six grams, this agile robot can navigate tight spaces and carry payloads significantly heavier than itself. Its design makes it particularly suited for confined environments such as rice fields or for inspecting waterborne machinery. The study has been published in Science Robotics.

“In 2020, our team demonstrated autonomous insect-scale crawling robots, but making untethered ultra-thin robots for aquatic environments is a whole new challenge,” says EPFL Soft Transducers Lab head Herbert Shea. “We had to start from scratch, developing more powerful soft actuators, new undulating locomotion strategies, and compact high-voltage electronics.”