
Metasurface-based SLM could enhance AR, VR and LiDAR performance

Many cutting-edge technologies, ranging from augmented reality (AR) and virtual reality (VR) to LiDAR (light detection and ranging) systems, rely on components that enable the precise control of light. These components include so-called spatial light modulators (SLMs): devices that dynamically adjust the position of a light wave within its cycle (i.e., its phase), as well as its amplitude or direction, across an array of pixels.

Conventional SLMs rely on liquid crystals, materials in a state of matter at the intersection between solid and liquid. While these components are widely used, they typically struggle to reach the speed and pixel density required to create high-quality three-dimensional (3D) images known as holograms.

Researchers at Huazhong University of Science and Technology and other institutes recently developed a new metasurface, an ultrathin, nano-engineered surface, that could be used to produce dynamic, high-quality holographic images in real time with remarkable definition. The new metasurface, introduced in a paper published in Nature Nanotechnology, was used to create an SLM that could enhance the performance of AR, VR, and LiDAR technology.

Brain organoids can be trained to solve a goal-directed task

This research is the first rigorous academic demonstration of goal-directed learning in lab-grown brain organoids, and it lays the foundation for adaptive organoid computation: exploring the capacity of such organoids to learn and solve tasks.

Using organoids derived from mouse stem cells and an electrophysiology system developed by industry partner Maxwell Biosciences, the researchers use electrical stimulation to send information to, and record signals from, the organoid's neurons. By using stronger or weaker signals, they communicate to the organoid the angle of a pole in a virtual environment as it falls in one direction or the other. In response, the organoid sends back signals indicating how to apply force to balance the pole, and the researchers apply this force to the virtual pole.

For their pole-balancing experiments, the researchers observe as the organoid controls the pole until it drops, which is called an episode. Then, the pole is reset and a new episode begins. In essence, the organoid plays a video game in which the goal is to balance the pole upright for as long as possible.

The researchers observe the organoid's progress in five-episode increments. If the organoid's average balance time over the past five episodes exceeds its average over the past 20, it receives no training signal, since it has been improving. If the average does not improve, it receives a training signal.

Training feedback is not given to the organoid while it is balancing the pole—only at the end of an episode. An AI algorithm called reinforcement learning is used to select which neurons within the organoid get the training signal.
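The episode-based feedback rule described above can be sketched in a few lines of code. This is a minimal illustration of the five-versus-twenty-episode comparison only; the function name, the episode counts used as a warm-up check, and the example numbers are illustrative, not taken from the study:

```python
# Toy sketch of the episode-based feedback rule described above.
# Names and numbers are illustrative; the study's actual parameters
# and stimulation protocol are not reproduced here.

def needs_training_signal(episode_durations):
    """Return True if the organoid has NOT been improving.

    Compares the mean pole-balancing time over the last 5 episodes
    with the mean over the last 20; a training signal is sent only
    when the recent average fails to beat the longer-run average.
    """
    if len(episode_durations) < 20:
        return False  # not enough history to compare yet
    recent = sum(episode_durations[-5:]) / 5
    baseline = sum(episode_durations[-20:]) / 20
    return recent <= baseline

# Steadily improving balance times -> no training signal
improving = list(range(1, 21))           # 1, 2, ..., 20 seconds
print(needs_training_signal(improving))  # False

# Stagnating performance -> send a training signal
flat = [5.0] * 20
print(needs_training_signal(flat))       # True
```

In this framing, the training signal plays the role of a cost delivered only between episodes, consistent with the article's point that no feedback arrives while the organoid is actively balancing the pole.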

The results of this study show that the reinforcement learning algorithm can guide brain organoids toward improved performance at the cart-pole task, meaning organoids can learn to balance the pole for longer periods of time.

The researchers adopted a rigorous definition of success to make sure they were observing genuine improvement rather than chance, including a threshold for the minimum time an organoid must balance the pole to "win" the game.

Origami-inspired ring lets users ‘feel’ virtual worlds

Virtual reality (VR) and augmented reality (AR) are technologies that allow users to immerse themselves in digital worlds or enhance their surroundings with computer-generated filters or images, respectively. Both these technologies are now widely used worldwide, whether to experience video games and media content in more engaging ways or improve specific training and assist professionals in their daily tasks.

To date, VR and AR have focused primarily on what users see and hear, improving the quality of digital experiences from a visual and auditory standpoint. The sense of touch, on the other hand, has been largely overlooked.

Researchers at Sungkyunkwan University, École Polytechnique Fédérale de Lausanne and Istanbul Technical University recently developed a new wearable device that could allow users to also realistically “feel” tactile sensations aligned with what they are experiencing in a virtual world. This device, introduced in a paper published in Nature Electronics, is an origami-inspired ring that measures forces on a user’s skin, pushing back onto the finger to produce specific sensations.

Shanghai scientists create computer chip in fiber thinner than a human hair, yet can withstand crushing force of 15.6 tons — fiber packs 100,000 transistors per centimeter

A group of researchers has built a computer chip in a flexible fiber thinner than an average human hair. The team from Fudan University in Shanghai says that their Fiber Integrated Circuit (FIC) design can process information like a computer, yet is durable enough to be “stretched, twisted, and woven into everyday clothing.” Use cases touted by the authors of the paper include advancements in the fields of brain-computer interfaces, VR devices, and smart textiles. This cutting-edge FIC design was apparently inspired by the construction of the humble sushi roll.

Flexible electronics have come a long way in recent years, with malleable components for power, sensing, and display readily available. However, so-called flexible electronic devices and the wearables made from them still usually contain components fabricated from rigid silicon wafers, limiting their applications and comfort. The Fudan team says that their FIC can remove the last vestiges of electronic rigidity “by creating a fiber integrated circuit (FIC) with unprecedented microdevice density and multimodal processing capacity.”

Once Thought To Support Neurons, Astrocytes Turn Out To Be in Charge

Misha Ahrens’ team at Janelia Research Campus placed zebrafish in a virtual-reality environment where swimming produced no progress. Normally, fish give up after about 20 seconds. The researchers found that astrocytes were “counting” swim attempts by accumulating calcium. When calcium reached a threshold, the astrocytes released adenosine to suppress swimming circuits. When the researchers disabled astrocytes with a laser, the fish never stopped swimming; when they artificially activated astrocytes, the fish stopped immediately. This showed that astrocytes actively mediate the transition from hope to hopelessness.
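The "counting" mechanism can be pictured as an integrate-to-threshold process. The sketch below is a toy model for intuition only: the calcium increment, decay rate, and threshold are invented values, not measurements from the study:

```python
# Toy integrate-to-threshold model of the astrocyte "counter"
# described above. All parameters are invented for illustration.

def simulate_futile_swimming(n_attempts, ca_per_attempt=1.0,
                             threshold=10.0, decay=0.95):
    """Accumulate 'calcium' with each futile swim attempt; once the
    level crosses threshold, adenosine release suppresses swimming.
    Returns the attempt number at which the fish gives up, or None
    if it never does."""
    calcium = 0.0
    for attempt in range(1, n_attempts + 1):
        # each failed swim adds calcium; some leaks away between attempts
        calcium = calcium * decay + ca_per_attempt
        if calcium >= threshold:
            return attempt  # astrocytes release adenosine: swimming stops
    return None

print(simulate_futile_swimming(50))                 # 14 with these toy parameters
print(simulate_futile_swimming(50, threshold=1e9))  # None: "disabled" astrocytes never trigger
```

Raising the threshold to an unreachable value mimics the laser-ablation result (the fish never gives up), while forcing `calcium` above threshold would mimic artificial activation (immediate stopping).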

Marc Freeman’s lab showed norepinephrine doesn’t just activate astrocytes—it changes their “hearing.” At low norepinephrine (low arousal), astrocytes ignore synaptic activity. At high norepinephrine (high arousal), astrocytes suddenly “listen” to every synapse and modulate neuronal response accordingly. This creates a dynamic gain control system layered atop neuronal networks.
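One way to picture this dynamic gain control is a multiplicative gate on synaptic input, with norepinephrine setting the gain. Again, a toy illustration with invented numbers, not a model from Freeman's lab:

```python
import math

# Toy gain-control model: norepinephrine (NE) sets how strongly
# astrocytes "listen" to synaptic activity. All values invented.

def astrocyte_response(synaptic_input, ne_level,
                       midpoint=0.5, steepness=10.0):
    """Synaptic signal multiplied by a sigmoidal gain set by NE.
    Low NE -> gain near 0 (astrocyte ignores synapses);
    high NE -> gain near 1 (astrocyte tracks every synapse)."""
    gain = 1.0 / (1.0 + math.exp(-steepness * (ne_level - midpoint)))
    return synaptic_input * gain

print(astrocyte_response(1.0, ne_level=0.1))  # low arousal: response near 0
print(astrocyte_response(1.0, ne_level=0.9))  # high arousal: response near 1
```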


“We did expect that, in large part, the effect of norepinephrine on synapses would be mediated by astrocytes,” Papouin said. “But we did not expect all of it to be!”

The finding of parallel molecular pathways in species as distinct as fruit flies, zebrafish, and mice points to “an evolutionarily conserved way in which astrocytes can profoundly affect neural circuits,” Freeman said.

The results suggest a gaping hole in previous theories of neuromodulation. “In the past, neuroscientists studied neuromodulators and knew they were important in regulating neural circuit function, but none of their thinking, none of their diagrams, none of their models had anything in them other than neurons,” Fields said. “Now we see that they missed a big part of the story.”

Study solves key micro-LED challenges, enabling ‘reality-like’ visuals for AR/VR devices

From TVs and smartwatches to rapidly emerging VR and AR devices, micro-LEDs are a next-generation display technology in which each LED—smaller than the thickness of a human hair—emits light on its own. Among the three primary colors required for full-color displays—red, green, and blue—the realization of high-performance red micro-LEDs has long been considered the most difficult.

KAIST researchers have successfully demonstrated a high-efficiency, ultra-high-resolution red micro-LED display, paving the way for displays that can deliver visuals even sharper than reality. The work is published in the journal Nature Electronics.

A research team led by Professor Sanghyeon Kim of the School of Electrical Engineering, in collaboration with Professor Dae-Myeong Geum of Inha University, compound-semiconductor manufacturer QSI, and microdisplay/SoC design company Raontech, has developed a red micro-LED display technology that achieves ultra-high resolution while significantly reducing power consumption.

New ClickFix attacks abuse Windows App-V scripts to push malware

A new malicious campaign mixes the ClickFix method with fake CAPTCHA and a signed Microsoft Application Virtualization (App-V) script to ultimately deliver the Amatera infostealing malware.

The Microsoft App-V script acts as a living-off-the-land binary that proxies the execution of PowerShell through a trusted Microsoft component to disguise the malicious activity.

Microsoft Application Virtualization is an enterprise Windows feature that allows applications to be packaged and run in isolated virtual environments without actually being installed on the system.

The Singularity Countdown: AGI by 2029, Humans Merge with AI, Intelligence 1000x | Ray Kurzweil

Ray Kurzweil predicts humans will merge with artificial intelligence (AI) by 2045, resulting in a 1000x increase in intelligence and marking the beginning of a new era of unprecedented innovation, potentially transforming human life and society.

Questions to inspire discussion.

Preparing for AI Timeline.

🤖 Q: When should I expect human-level AI and what defines it? A: Human-level AI arrives by 2029, defined not by passing the Turing test (which only matches an ordinary person), but as AGI requiring expertise in thousands of fields and the ability to combine insights across disciplines.

🧠 Q: When will the singularity occur and what intelligence gain can I expect? A: The singularity happens by 2045 when humanity merges with AI to become 1000x more intelligent, creating a seamless merger where biological and computational thought processes become indistinguishable.

⚡ Q: How much change should I prepare for in the next decade? A: Expect as much change in the next 10 years as occurred in the last 100 years (1925-2025), with AGI and supercomputers by 2035 enabling merging with AI for a 1000x intelligence increase.

Career and Economic Adaptation.

How the brain decides what to remember: Study reveals sequentially operating molecular ‘timers’

Every day, our brains transform quick impressions, flashes of inspiration, and painful moments into enduring memories that underpin our sense of self and inform how we navigate the world. But how does the brain decide which bits of information are worth keeping, and how long to hold on to them?

Now, new research demonstrates that long-term memory is formed by a cascade of molecular “timers” unfolding across brain regions. With a virtual reality-based behavioral model in mice, the scientists discovered that long-term memory is orchestrated by key regulators that either promote memories into progressively more lasting forms or demote them until they are forgotten.

The findings, published in Nature, highlight the roles of multiple brain regions in gradually reorganizing memories into more enduring forms, with gates along the way to assess salience and promote durability.
