
Researchers from Seoul National University College of Engineering announced they have developed an optical design technology that dramatically reduces the volume of cameras with a folded lens system utilizing “metasurfaces,” a next-generation nano-optical device.

By arranging metasurfaces on a glass substrate so that light is reflected and folded back and forth inside the substrate, the researchers realized a lens system with a thickness of 0.7 mm, much thinner than existing refractive lens systems. The research was published on Oct. 30 in the journal Science Advances.
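To make the space saving concrete, here is a minimal back-of-the-envelope sketch (in Python) of how a folded path through a thin substrate can accumulate a propagation distance much longer than the substrate's thickness. The fold count and lateral step below are illustrative assumptions, not figures from the paper.

```python
# Illustrative calculation (not from the paper): folding the optical path
# inside a thin glass substrate lets a short physical thickness support a
# much longer propagation distance.

def folded_path_length(thickness_mm: float, n_reflections: int, lateral_step_mm: float) -> float:
    """Total distance light travels while zig-zagging through the substrate.

    Each pass crosses the substrate thickness once and shifts sideways by
    lateral_step_mm before the next metasurface reflects it back.
    """
    passes = n_reflections + 1  # one pass per reflection plus the final exit pass
    per_pass = (thickness_mm**2 + lateral_step_mm**2) ** 0.5
    return passes * per_pass

# Hypothetical example: a 0.7 mm substrate, 4 internal reflections,
# 1.5 mm of sideways travel per pass.
print(folded_path_length(0.7, 4, 1.5))  # ~8.3 mm of optical path in a 0.7 mm package
```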

Traditional cameras are designed to stack multiple glass lenses that refract light when capturing images. While this structure delivers high-quality images, the thickness of each lens and the wide spacing between lenses increase the overall bulk of the camera, making it difficult to use in devices that require ultra-compact cameras, such as virtual and augmented reality (VR/AR) headsets, smartphones, endoscopes, drones, and more.

A paper published in Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, by researchers in Carnegie Mellon University’s Human-Computer Interaction Institute, introduces EgoTouch, a tool that uses artificial intelligence to control AR/VR interfaces by touching the skin with a finger.
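At its core, this kind of interface comes down to a per-frame classification problem: given an image patch around the fingertip from the headset camera, decide whether the finger is touching the skin or hovering above it. The sketch below illustrates that general idea only; it is a generic stand-in, not the EgoTouch model, its training data, or its pipeline.

```python
# Minimal sketch of camera-based on-skin touch detection: classify
# "touching" vs "hovering" from an image patch around the fingertip.
# This is an illustrative stand-in, not the EgoTouch system.
import torch
import torch.nn as nn

class TouchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits: [hovering, touching]
        )

    def forward(self, patch):
        return self.net(patch)

# Hypothetical usage: a 64x64 RGB crop around the detected fingertip.
model = TouchClassifier()
patch = torch.rand(1, 3, 64, 64)
probs = torch.softmax(model(patch), dim=1)
print(probs)  # untrained, so roughly uniform
```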

Disney's new omnidirectional floor allows multiple users to walk in any direction without colliding, enhancing VR immersion. Developed by Disney Imagineer Lanny Smoot, this innovation could revolutionize VR experiences and stage performances. (Video Credit: Disney Parks/YouTube)

Modern imaging systems, such as those used in smartphones, virtual reality (VR), and augmented reality (AR) devices, are constantly evolving to become more compact, efficient, and high-performing. Traditional optical systems rely on bulky glass lenses, which have limitations like chromatic aberrations, low efficiency at multiple wavelengths, and large physical sizes. These drawbacks present challenges when designing smaller, lighter systems that still produce high-quality images.

MIT CSAIL researchers have developed a generative AI system, LucidSim, to train robots in virtual environments for real-world navigation. Using ChatGPT and physics simulators, robots learn to traverse complex terrains. This method outperforms traditional training, suggesting a new direction for robotic training.


A team of roboticists and engineers at MIT CSAIL and the Institute for AI and Fundamental Interactions has developed a generative AI approach to teaching robots how to traverse terrain and move around objects in the real world.

The group has published a paper describing their work and its possible uses on the arXiv preprint server. They also presented their ideas at the recent Conference on Robot Learning (CoRL 2024), held in Munich Nov. 6–9.
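The overall recipe — ask a generative model for varied scene descriptions, render and simulate them, and train the policy across the resulting diversity — can be summarized as a short loop. The sketch below is a schematic stand-in with placeholder functions; it is not the LucidSim codebase, and the scene prompts and rewards are invented for illustration.

```python
# Illustrative sketch of a "generate diverse virtual scenes, then train in
# simulation" loop. Function names and training details are placeholders,
# not MIT CSAIL's actual LucidSim code.
import random

def generate_scene_prompt(rng: random.Random) -> str:
    """Stand-in for querying a language model for a varied scene description."""
    terrain = rng.choice(["mossy stone stairs", "cluttered alley", "icy ramp"])
    lighting = rng.choice(["dawn", "noon glare", "streetlights at night"])
    return f"A quadruped robot crossing {terrain} under {lighting}."

def render_and_simulate(prompt: str, rng: random.Random) -> float:
    """Stand-in for image generation plus physics simulation of one episode.
    Returns a made-up episode reward."""
    return rng.random()

def train_policy(num_episodes: int = 5, seed: int = 0) -> None:
    rng = random.Random(seed)
    for episode in range(num_episodes):
        prompt = generate_scene_prompt(rng)
        reward = render_and_simulate(prompt, rng)
        print(f"episode {episode}: {prompt!r} -> reward {reward:.2f}")

train_policy()
```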

Artificial Intelligence is everywhere in Europe.

While some are worried about its long-term impact, a team of researchers at the University of Technology in Vienna is working on responsible ways to use AI.



From industry to healthcare to the media and even the creative arts, artificial intelligence is already having an impact on our daily lives. It’s hailed by advocates as a gift to humanity, but others worry about the long-term effects on society.

Shaking hands with a character from the Fortnite video game. Visualizing a patient’s heart in 3D—and “feeling” it beat. Touching the walls of the Roman Coliseum—from your sofa in Los Angeles. What if we could touch and interact with things that aren’t physically in front of us? This reality might be closer than we think, thanks to an emerging technology: the holodeck.

The name might sound familiar. In Star Trek: The Next Generation, the holodeck was an advanced 3D virtual reality environment that created the illusion of solid objects. Now, immersive technology researchers at USC and beyond are taking us one step closer to making this science fiction concept a science fact.

On Dec. 15, USC hosted the first International Conference on Holodecks. Organized by Shahram Ghandeharizadeh, a USC associate professor of computer science, the conference featured keynotes, papers and presentations from researchers at USC, Brown University, UCLA, University of Colorado, Stanford University, New Jersey Institute of Technology, UC-Riverside, and haptic technology company UltraLeap.

Wetware computing and organoid intelligence is an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, much as Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks built from biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally one accessible to researchers worldwide.

For this reason, we developed a hardware and software system that allows electrophysiological experiments on an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that fully automates medium flow and exchange, reducing disruptions from physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data.

A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or through interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, the API also controls pumps, digital cameras, and UV lights for molecule uncaging. This allows the execution of complex 24/7 experiments, including closed-loop strategies and processing with the latest deep learning or reinforcement learning libraries. The infrastructure also supports entirely remote use. As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system's architecture and provides specific examples of experiments and results.
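To give a sense of what a closed-loop experiment of this kind might look like in code, here is a hypothetical Python sketch: read spike counts, apply a simple feedback rule, stimulate, repeat. The class and method names are invented for illustration; they are not the actual Neuroplatform API.

```python
# Hypothetical closed-loop sketch: record activity, pick an electrode by a
# simple rule, stimulate, and loop. Names are invented for illustration and
# do not correspond to the real Neuroplatform Python library.
import time

class FakeOrganoidSession:
    """Stand-in for a remote session with one organoid's electrode array."""

    def record_spike_counts(self, window_s: float) -> list[int]:
        # A real session would return per-electrode spike counts measured
        # over `window_s` seconds; here we return dummy data.
        return [3, 0, 7, 1]

    def stimulate(self, electrode: int, amplitude_ua: float, duration_ms: float) -> None:
        print(f"stim electrode {electrode}: {amplitude_ua} uA for {duration_ms} ms")

def closed_loop(session: FakeOrganoidSession, n_cycles: int = 3) -> None:
    for _ in range(n_cycles):
        counts = session.record_spike_counts(window_s=1.0)
        quietest = counts.index(min(counts))      # simple feedback rule:
        session.stimulate(quietest, 10.0, 200.0)  # stimulate the least active electrode
        time.sleep(0.1)

closed_loop(FakeOrganoidSession())
```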

The recent rise of wetware computing and, consequently, of artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.

The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.

Adeno-associated virus (AAV) is a well-known gene delivery tool with a wide range of applications, including as a vector for gene therapies. However, the molecular mechanism of its cell entry remains unknown. Here, we performed coarse-grained molecular dynamics simulations of the AAV serotype 2 (AAV2) capsid and the universal AAV receptor (AAVR) in a model plasma membrane environment. Our simulations show that binding of the AAV2 capsid to the membrane induces membrane curvature, along with the recruitment and clustering of GM3 lipids around the capsid. We also found that AAVR binds to the AAV2 capsid at the VR-I loops via its PKD2 and PKD3 domains, in binding poses that differ from those reported in previous structural studies. These first molecular-level insights into AAV2–membrane interactions suggest a complex process during the initial phase of AAV2 capsid internalization.
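As an illustration of the kind of post-processing such a simulation study involves, the sketch below counts GM3 lipids in contact with the capsid across a coarse-grained trajectory using MDAnalysis. The file names, the GM3 residue name, and the 10 Å cutoff are assumptions made for the example, not details taken from the paper.

```python
# Hedged example: count GM3 lipids near the capsid over a trajectory.
# File names, the GM3 residue name, and the 10 A cutoff are assumptions.
import MDAnalysis as mda

u = mda.Universe("aav2_membrane.tpr", "aav2_membrane.xtc")  # hypothetical input files

# GM3 lipids within 10 A of any capsid (protein) bead, re-evaluated every frame
nearby_gm3 = u.select_atoms("resname GM3 and around 10 protein", updating=True)

counts = []
for ts in u.trajectory:
    counts.append(len(nearby_gm3.residues))  # distinct GM3 lipids in contact this frame

print(f"mean GM3 lipids near the capsid: {sum(counts) / len(counts):.1f}")
```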