Fish are masters of coordinated motion. Schools of fish have no leader, yet individuals manage to stay in formation, avoid collisions, and respond with liquid flexibility to changes in their environment. Reproducing this combination of robustness and flexibility has been a long-standing challenge for human-engineered systems like robots.
NASA and its commercial and international partners continue to mark progress on the Gateway program. The primary structure of HALO (Habitation and Logistics Outpost) arrived at Northrop Grumman’s facility in Gilbert, Arizona, where it will undergo final outfitting and verification testing.
HALO will provide Artemis astronauts with space to live, work, and conduct scientific research. The habitation module will be equipped with essential systems including command and control, data handling, energy storage, power distribution, and thermal regulation.
Following HALO’s arrival on April 1 from Thales Alenia Space in Turin, Italy, where it was assembled, NASA and Northrop Grumman hosted an April 24 event to acknowledge the milestone and the module’s significance to lunar exploration. The event opened with remarks by representatives from Northrop Grumman and NASA, including NASA’s Acting Associate Administrator for Exploration Systems Development Lori Glaze, Gateway Program Manager Jon Olansen, and NASA astronaut Randy Bresnik. Event attendees, including Senior Advisor to the NASA Administrator Todd Ericson, elected officials, and local industry and academic leaders, viewed HALO and virtual reality demonstrations during a tour of the facilities.
Meta has laid off employees in the company’s Reality Labs division, which is tasked with developing virtual reality, augmented reality, and wearable devices.
Human cyborgs are individuals who integrate advanced technology into their bodies, enhancing their physical or cognitive abilities. This fusion of man and machine blurs the line between science fiction and reality, raising questions about the future of humanity, ethics, and the limits of human potential. From bionic limbs to brain-computer interfaces, cyborg technology is rapidly evolving, pushing us closer to a world where humans and machines become one.
An innovative algorithm for detecting collisions of high-speed particles within nuclear fusion reactors has been developed, inspired by technologies used to determine whether bullets hit targets in video games. This advancement enables rapid predictions of collisions, significantly enhancing the stability and design efficiency of future fusion reactors.
Professor Eisung Yoon and his research team in the Department of Nuclear Engineering at UNIST announced that they have successfully developed a collision detection algorithm capable of quickly identifying collision points of high-speed particles within virtual fusion devices. The research is published in the journal Computer Physics Communications.
When applied to the Virtual KSTAR (V-KSTAR), this algorithm demonstrated a detection speed up to 15 times faster than previous methods. The V-KSTAR is a digital twin that replicates the Korean Superconducting Tokamak Advanced Research (KSTAR) fusion experiment in a three-dimensional virtual environment.
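The article does not reproduce the team’s algorithm, but the game-engine idea it borrows, cheap ray-versus-bounding-volume hit tests performed before any expensive geometry check, can be sketched in a few lines. The function below is a minimal illustration of the classic slab method, not code from V-KSTAR; the particle and wall-tile values are invented for the example.

```python
import numpy as np

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab-method test: does a ray hit an axis-aligned bounding box?

    Returns the entry distance t (>= 0) along the ray if it hits, else None.
    Note: zero direction components would need special handling here.
    """
    inv_d = 1.0 / direction
    t0 = (box_min - origin) * inv_d
    t1 = (box_max - origin) * inv_d
    t_near = np.max(np.minimum(t0, t1))  # latest entry across the three slabs
    t_far = np.min(np.maximum(t0, t1))   # earliest exit across the three slabs
    if t_near <= t_far and t_far >= 0.0:
        return max(t_near, 0.0)
    return None

# Trace one particle step against a single wall tile of a virtual vessel.
origin = np.array([0.0, 0.0, 0.0])
step = np.array([1.0, 0.2, 0.1])  # particle displacement over one time step
direction = step / np.linalg.norm(step)
t = ray_aabb_hit(origin, direction,
                 np.array([2.0, -1.0, -1.0]), np.array([3.0, 1.0, 1.0]))
if t is not None:
    print(f"possible wall hit at distance {t:.2f}")
```

A production reactor code would layer such a test under a spatial hierarchy (e.g., an octree or bounding-volume tree) so that millions of particle steps can be screened per simulation step, with exact surface intersections computed only for the few candidates that pass.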
A brief episode of anxiety may have a greater influence than previously thought on a person’s ability to learn what is safe and what is not. Research recently published in npj Science of Learning used a virtual reality game in which participants picked flowers; some blossoms hid bees that would sting them, simulated by a mild electrical stimulation on the hand.
Researchers worked with 70 neurotypical participants between the ages of 20 and 30. Claire Marino, a research assistant in the ZVR Lab, and Pavel Rjabtsenkov, a Neuroscience graduate student at the University of Rochester School of Medicine and Dentistry, were co-first authors of the study.
Their team found that people who learned to distinguish between the safe and dangerous areas (where the bees were and were not) showed better spatial memory and lower anxiety, while participants who did not learn the distinction showed higher anxiety and heightened fear even in safe areas.
How likely is it that we live in a simulation? Are virtual worlds real?
In this first episode of the second series we delve into the fascinating topic of virtual reality simulations and the extraordinary possibility that our universe is itself a simulation. For thousands of years some mystical traditions have maintained that the physical world and our separate ‘selves’ are an illusion, and only now, with the development of our own computer simulations and virtual worlds, have scientists and philosophers begun to assess the statistical probability that our shared reality could in fact be some kind of representation rather than a physical place. As we become more open to these possibilities, other difficult questions come into focus. How can we create a common language to talk about matter and energy that bridges the simulated and simulating worlds? Who could have created such a simulation? Could it be an artificial intelligence rather than a biological or conscious being? Do we have ethical obligations to the virtual beings we interact with in our virtual worlds, and to what extent are those beings and worlds ‘real’? The list is long and mind-bending.
Fortunately, to untangle our thoughts on this, we have one of the world’s best-known philosophers of all things mind-bending, Dr. David Chalmers, who has just released a book on this very topic, ‘Reality+: Virtual Worlds and the Problems of Philosophy’. Dr. Chalmers is an Australian philosopher and cognitive scientist specialising in the philosophy of mind and the philosophy of language. He is a Professor of Philosophy and Neuroscience at New York University, as well as co-director of NYU’s Center for Mind, Brain and Consciousness. He is the founder of the ‘Towards a Science of Consciousness Conference’, at which, in 1994, he coined the term ‘The Hard Problem of Consciousness’, kicking off a renaissance in consciousness studies that has grown in popularity and research output ever since.
What we discuss in this episode:
00:00 Short intro
06:00 Synesthesia
08:27 The science of knowing the nature of reality
11:02 The Simulation Hypothesis explained
15:25 The statistical probability evaluation
18:00 Knowing for sure is beyond the reaches of science
19:00 You’d only have to render the part you’re interacting with
20:00 Clues from physics
22:00 John Wheeler: ‘It from bit’
23:32 Eugene Wigner: measurement as a conscious observation
27:00 Information theory as a useful but risky hold-all language tool
34:30 Virtual realities are real and virtual interactions are meaningful
37:00 Ethical approaches to non-player characters (NPCs) and their rights
38:45 Will advanced AI be conscious?
42:45 Is God a hacker in the universe one level up? Simulation theology
44:30 Simulation theory meets the argument for the existence of God from design
51:00 The Hard Problem of consciousness applies to AI too
55:00 Testing AI’s consciousness with the Turing test
59:30 Ethical value applied to immoral actions in virtual worlds
The development of increasingly sophisticated sensors can facilitate the advancement of various technologies, including robots, security systems, virtual reality (VR) equipment and sophisticated prosthetics. Multimodal tactile sensors, which can pick up different types of touch-related information (e.g., pressure, texture and type of material), are among the most promising for applications that can benefit from the artificial replication of the human sense of touch.
When exploring their surroundings, communicating with others and expressing themselves, humans perform a wide range of body motions. The ability to realistically replicate these motions and apply them to human and humanoid characters could be highly valuable for developing video games, creating animations, producing content for virtual reality (VR) headsets, and making training videos for professionals.
Researchers at Peking University’s Institute for Artificial Intelligence (AI) and the State Key Laboratory of General AI recently introduced new models that could simplify the generation of realistic motions for human characters or avatars. The work is published on the arXiv preprint server.
Their proposed approach for the generation of human motions, outlined in a paper presented at CVPR 2025, relies on a data augmentation technique called MotionCutMix and a diffusion model called MotionReFit.
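The paper’s exact formulation is not given here, but the name MotionCutMix suggests a motion-domain analogue of the CutMix image augmentation: splicing a segment of one motion sequence into another to synthesize new training examples. The sketch below is a minimal illustration under that assumption; the array shapes, window length, and cross-fade are invented for the example and are not the authors’ implementation.

```python
import numpy as np

def motion_cutmix(motion_a, motion_b, seg_len=30, blend=5, rng=None):
    """CutMix-style augmentation for motion data (illustrative sketch).

    motion_a, motion_b: arrays of shape (frames, joints, 3), same shape.
    Splices a random seg_len-frame window of motion_b into motion_a,
    linearly cross-fading over `blend` frames at each seam to avoid jumps.
    """
    rng = rng or np.random.default_rng()
    frames = motion_a.shape[0]
    start = rng.integers(0, frames - seg_len)
    out = motion_a.copy()
    out[start:start + seg_len] = motion_b[start:start + seg_len]
    # Cross-fade at both seams so joint positions change smoothly.
    for i in range(blend):
        w = (i + 1) / (blend + 1)  # weight of the spliced segment
        out[start + i] = (1 - w) * motion_a[start + i] + w * motion_b[start + i]
        tail = start + seg_len - 1 - i
        out[tail] = (1 - w) * motion_a[tail] + w * motion_b[tail]
    return out

# Example: splice two random 120-frame, 24-joint sequences.
rng = np.random.default_rng(0)
a = rng.normal(size=(120, 24, 3))
b = rng.normal(size=(120, 24, 3))
mixed = motion_cutmix(a, b, rng=rng)
print(mixed.shape)  # (120, 24, 3)
```

In a pipeline like the one the paper describes, composites of this kind would expand the training data for the MotionReFit diffusion model; here the splice is only demonstrated on random arrays to keep the sketch self-contained.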