
A team of engineers led by Northwestern University has developed a new wearable device that stimulates the skin to deliver a range of complex sensations, such as vibrations, pressure, and twisting. This thin, flexible device gently adheres to the skin, offering more realistic and immersive sensory experiences. While it is well-suited for gaming and virtual reality (VR), the researchers also see potential applications in healthcare. For instance, the device could help individuals with visual impairments “feel” their surroundings or provide feedback to those with prosthetic limbs.

For over a century, galvanic vestibular stimulation (GVS) has been used to stimulate the nerves of the inner ear by passing a small electrical current.

We use GVS in a two-player, escape-the-room-style VR game set in a dark virtual world. The VR player is remote-controlled like a robot by a non-VR player, who uses GVS to alter the VR player’s walking trajectory. We also use GVS to induce the physical sensation of virtual motion and to mitigate motion sickness in VR.
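To make the remote-steering idea concrete, here is a minimal Python sketch of how a non-VR operator’s input might be mapped to a GVS current that nudges a walking player left or right. The Stimulator class, the 1.5 mA safety ceiling, and the current-to-sway polarity convention are illustrative assumptions, not details of the system described above.

```python
# Hypothetical sketch: mapping a remote operator's joystick input to a
# galvanic vestibular stimulation (GVS) current. The Stimulator class and
# the exact polarity/safety values are assumptions for illustration only.

MAX_CURRENT_MA = 1.5  # assumed safety ceiling for transmastoid GVS


class Stimulator:
    """Placeholder for a bipolar electrode driver (one electrode behind each ear)."""

    def set_current(self, milliamps: float) -> None:
        # Convention assumed here: positive current -> perceived sway to the right,
        # negative current -> perceived sway to the left.
        print(f"GVS current set to {milliamps:+.2f} mA")


def steer(operator_input: float, gain: float = 1.0) -> float:
    """Map a joystick deflection in [-1, 1] to a clipped GVS current in mA."""
    current = gain * operator_input * MAX_CURRENT_MA
    return max(-MAX_CURRENT_MA, min(MAX_CURRENT_MA, current))


if __name__ == "__main__":
    stim = Stimulator()
    stim.set_current(steer(0.6))   # push 60% right: bias the walker rightward
    stim.set_current(steer(0.0))   # centre the stick: remove the bias
```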

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.

Misha graduated in June 2018 from the MIT Media Lab, where she worked in the Fluid Interfaces group with Prof. Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output, i.e., they can use movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user’s vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but instead use the body as the interface for more implicit and natural interactions.

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.

Could this VR experience change how you see the planet?


For many, constant bad news numbs our reaction to climate disasters. But research suggests that a new type of immersive storytelling about nature told through virtual reality (VR) can both build empathy and inspire us to act.

I’m crying into a VR headset. I’ve just watched a VR experience that tells the story of a young pangolin called Chestnut, as she struggles to survive in the Kalahari Desert. A vast, dusty landscape extends around me in all directions, and her armoured body seems vulnerable as she curls up, alone, to sleep. Her story is based on the life of a real pangolin that was tracked by scientists.

Chestnut hasn’t found enough ants to eat, since insect numbers have dwindled due to climate change. Her sunny voice remains optimistic even as exhaustion takes over. In the final scenes, she dies, and I must clumsily lift my headset to dab my eyes.

Summary: New research indicates a strong link between high social media use and psychiatric disorders involving delusions, such as narcissism and body dysmorphia. Conditions like narcissistic personality disorder, anorexia, and body dysmorphic disorder thrive on social platforms, allowing users to build and maintain distorted self-perceptions without real-world checks.

The study highlights how virtual environments enable users to escape social scrutiny, intensifying delusional self-images and potentially exacerbating existing mental health issues. Researchers emphasize that social media isn’t inherently harmful, but immersive virtual environments coupled with real-life isolation can significantly amplify unhealthy mental states.

An international team of scientists has developed augmented reality glasses that receive images beamed from a projector, resolving some of the existing limitations of such glasses, such as their weight and bulk. The team’s research is being presented at the IEEE VR conference in Saint-Malo, France, in March 2025.

Augmented reality (AR) technology, which overlays virtual objects on an image of the real world viewed through a device’s viewfinder or screen, has gained traction in recent years with popular gaming apps like Pokémon Go, and real-world applications in areas including education, manufacturing, retail and health care. But the adoption of wearable AR devices has lagged over time because of the heft of their batteries and electronic components.

AR glasses, in particular, have the potential to transform a user’s physical environment by integrating virtual elements. Despite many advances in hardware technology over the years, AR glasses remain heavy and awkward and still lack adequate computational power, battery life and brightness for optimal user experience.

Rodolfo Llinas tells the story of how he has developed bundles of nanowires thinner than spider webs that can be inserted into the blood vessels of human brains.

While these wires have so far only been tested in animals, they show that direct communication with the deep recesses of the brain may not be so far off. To understand just how big a breakthrough this is, consider that US agents from the National Security Agency quickly showed up at the MIT laboratory when the wires were being developed.

What does this mean for the future? It might be possible to stimulate the senses directly — creating visual perceptions, auditory perceptions, movements, and feelings. Deep brain stimulation could create the ultimate virtual reality. Not to mention, direct communication between man and machine or human brain to human brain could become a real possibility.

Llinas poses compelling questions about the potentials and ethics of his technology.

A novel technology aims to redefine the virtual reality experience by expanding it to incorporate a new sensory connection: taste.

The interface, dubbed “e-Taste,” uses a combination of sensors and wireless chemical dispensers to facilitate the remote perception of taste, which scientists call gustation. These sensors are attuned to recognize molecules like glucose and glutamate, chemicals that represent the five basic tastes of sweet, sour, salty, bitter, and umami. Once captured, that data is wirelessly passed to a remote device for replication.

Field testing done by researchers at The Ohio State University confirmed the device’s ability to digitally simulate a range of taste intensities, while still offering variety and safety for the user.

Summary: Scientists have developed e-Taste, a novel technology that digitally replicates taste in virtual environments. Using chemical sensors and wireless dispensers, the system captures and transmits taste data remotely, enabling users to experience sweet, sour, salty, bitter, and umami flavors.

In tests, participants distinguished different taste intensities with 70% accuracy, and remote tasting was successfully initiated across long distances. Beyond gaming and immersive experiences, this breakthrough could enhance accessibility for individuals with sensory impairments and deepen our understanding of how the brain processes taste.
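As a rough illustration of the capture, transmit, and replicate pipeline described above, here is a minimal Python sketch. The message format, the tastant list, and the dispenser dosing are illustrative assumptions, not details of the published e-Taste system.

```python
# Hypothetical sketch of an e-Taste-style pipeline: encode sensed taste
# intensities, send them over a wireless link, and dose matching tastants.
# Field names, chemicals, and dosing values are assumptions for illustration.

import json

# Representative chemicals for the basic tastes (the article names glucose
# and glutamate; the others are assumed placeholders).
TASTANTS = {
    "sweet": "glucose",
    "umami": "glutamate",
    "salty": "sodium chloride",
    "sour": "citric acid",
    "bitter": "quinine",
}


def encode_reading(intensities: dict[str, float]) -> str:
    """Pack normalized sensor intensities (0.0-1.0) into a message for transmission."""
    clipped = {k: max(0.0, min(1.0, v)) for k, v in intensities.items() if k in TASTANTS}
    return json.dumps({"taste_profile": clipped})


def dispense(message: str, max_dose_ul: float = 50.0) -> None:
    """Remote side: convert each intensity into a micro-dose of the matching tastant."""
    profile = json.loads(message)["taste_profile"]
    for taste, level in profile.items():
        volume = level * max_dose_ul
        print(f"release {volume:5.1f} uL of {TASTANTS[taste]} ({taste}, intensity {level:.2f})")


if __name__ == "__main__":
    # Sensor side detects a mildly sweet, strongly umami sample (e.g. a broth).
    msg = encode_reading({"sweet": 0.3, "umami": 0.8})
    dispense(msg)  # replicated on the remote interface
```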

In today’s AI news, Alibaba Group plans to invest more than $52 billion in AI and cloud infrastructure over the next three years, in a bid to seize more opportunities in the artificial-intelligence era. The spending of at least 380 billion yuan, equivalent to $52.41 billion, will surpass the company’s AI and cloud computing investment over the past decade, Alibaba said in a post Monday on its news site.

And, at the Global Developer Conference, an AI community event hosted in Shanghai over the weekend, open-source developers from around China congregated in a show of exuberance over the possibilities of AI since DeepSeek’s resource-efficient models captured the world’s imagination. Use cases on display included everything from robotics to virtual reality glasses.

Then, John Werner poses the question: what if you could just run to the supply room and Xerox an entire firm? What would that look like? Well, it might be expensive. But probably not as expensive as humans. John says Dwarkesh Patel gives us an idea in a new collaborative essay from Jan. 31 on the potential for all-AI companies, suggesting that “everyone is sleeping on the collective advantages AI will have” …

And, agents capable of handling shopping-related tasks, optimizing supply chains, and creating personalized customer experiences are already here. Retail, in particular e-commerce, has been the poster child for agentic AI and is a sector where there is a lot of hype but also some very compelling use cases. So, let’s explore what’s happening in this space and what we can expect to see in the future.

In videos, during the 2025 Annual Meeting of the World Economic Forum in Davos, Switzerland, Derek Haoyang Li, the chairman, founder and chief educational technology scientist of Squirrel AI Learning, discusses with Forbes’ Randall Lane the research, technology and success behind the Shanghai company’s innovative adaptive education models.

Meanwhile, as AI chatbots become more personal and proactive, the line between tool and companion is beginning to blur, with some users even professing love for their digital aides, says business consultant Amaryllis Liampoti. She presents three foundational principles for how brands can harness AI to build deeper emotional connections with consumers while prioritizing well-being, transparency and autonomy.

In other advances, Professor Danfei Xu and the Robot Learning and Reasoning Lab (RL2) at the Georgia Institute of Technology present EgoMimic, a full-stack framework that scales robot manipulation through egocentric-view human demonstrations captured via Meta’s Project Aria glasses. Learn more about the Aria Research Kit at projectaria.com.

They don’t need AGI or even the latest and greatest models; they need products that augment their existing workflows … That’s all for today, but AI is moving fast. Like, comment, and subscribe for more AI news! Thank you for supporting my partners and me; it’s how I keep Neural News free.