
As artificial intelligence and smart devices continue to evolve, machine vision is taking an increasingly pivotal role as a key enabler of modern technologies. Unfortunately, despite much progress, machine vision systems still face a major problem: Processing the enormous amounts of visual data generated every second requires substantial power, storage, and computational resources. This limitation makes it difficult to deploy visual recognition capabilities in edge devices, such as smartphones, drones, or autonomous vehicles.

Interestingly, the human visual system offers a compelling alternative model. Unlike conventional machine vision systems that have to capture and process every detail, our eyes and brain selectively filter information, allowing for higher efficiency in visual processing while consuming minimal power.
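To make that efficiency argument concrete, here is a minimal, purely illustrative Python sketch of event-style selective processing, in the spirit of biological filtering rather than any particular neuromorphic sensor: instead of passing every pixel of every frame downstream, only pixels whose brightness changes beyond a threshold are reported. The function name, frame sizes, and threshold are all hypothetical.

```python
import numpy as np

def changed_pixels(prev_frame, curr_frame, threshold=10):
    """Report only pixels whose brightness changed by more than `threshold`,
    mimicking event-style selective processing instead of full-frame capture."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return list(zip(ys.tolist(), xs.tolist(), diff[ys, xs].tolist()))

# Toy frames: only a small 3x3 patch changes, so only those pixels are reported.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:103, 200:203] = 200
events = changed_pixels(prev, curr)
print(f"{len(events)} events out of {prev.size} pixels")  # 9 out of 307200
```

On this toy input, nine "events" stand in for more than 300,000 pixels, which is the kind of data reduction that selective filtering aims for.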

Neuromorphic computing, which mimics the structure and function of biological neural systems, has thus emerged as a promising approach to overcome existing hurdles in computer vision. However, two major challenges have persisted. The first is achieving color recognition comparable to human vision, and the second is eliminating the need for external power sources to minimize energy consumption.

As neuro-ophthalmology educators, we have sought ways to improve the teaching of pupil-related disorders, focusing on incorporating their dynamic aspects and active learning. Our solution is an app for smartphone and tablet devices. The app, Pupil Wizard, provides a digital textbook featuring a dynamic presentation of the key pupillary abnormalities. It allows users to interact with a digital patient and explore how each condition responds to direct and indirect light stimuli, near focus, and changes in ambient light (Fig. 1). Moreover, users can test their knowledge in quiz mode, where random pupillary abnormalities must be correctly identified and multiple-choice questions about them answered.



ChatGPT-maker OpenAI has enlisted the legendary designer behind the iPhone to create an irresistible gadget for using generative artificial intelligence (AI).

The ability to engage digital assistants as easily as speaking with friends is being built into eyewear, speakers, computers and smartphones, but some argue that the Age of AI calls for a transformational new gizmo.

“The products that we’re using to deliver and connect us to unimaginable technology are decades old,” former Apple chief design officer Jony Ive said when his alliance with OpenAI was announced.

Silicon is king in the semiconductor technology that underpins smartphones, computers, electric vehicles and more, but its crown may be slipping, according to a team led by researchers at Penn State.

In a world first, they used two-dimensional (2D) materials, which are only an atom thick and retain their properties at that scale, unlike silicon, to develop a computer capable of simple operations.

The development, published in Nature, represents a major leap toward the realization of thinner, faster and more energy-efficient electronics, the researchers said.

A team of engineers, AI specialists and chip design researchers at the Chinese Academy of Sciences has designed, built and tested what they are describing as the first AI-based chip design system. The group has published a paper describing their system, called QiMeng, on the arXiv preprint server.

Over the past several decades, integrated circuit makers have developed systems for designing processor chips for computers, smartphones and other devices. Such systems tend to rely on large teams of highly skilled people who can take design ideas (such as faster computing or running AI apps) and turn them into physical designs that can be fabricated in specially designed factories. The process is notoriously slow and expensive.

More recently, computer and device makers have been looking for ways to speed up the process and to allow for more flexibility—some may want a chip that can do just one thing, for example, but do it really well. In this new study, the team in China has applied AI to the problem.

When the computer or phone you’re using right now blinks its last blink and you drop it off for recycling, do you know what happens?

At the recycling center, powerful magnets will pull out steel. Spinning drums will toss aluminum into bins. Copper wires will get neatly bundled up for resale. But as the conveyor belt keeps rolling, tiny specks of valuable, lesser-known materials such as gallium, indium and tantalum will be left behind.

Those tiny specks are critical materials. They’re essential for building new technology, and they’re in short supply in the U.S. They could be reused, but there’s a problem: Current recycling methods make recovering them from e-waste too costly or hazardous, so many recyclers simply skip them.

Color, the way the human eye perceives light’s wavelength, is more than a simple aesthetic element: it carries important scientific information, such as a substance’s composition or state.

Spectrometers are instruments that analyze light by decomposing it into its constituent wavelengths, and they are widely used in various scientific and industrial fields, including material analysis, chemical component detection, and life science research.
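As a hedged illustration of the principle (not a description of the KAIST device), a grating-based spectrometer relies on the grating equation, m·λ = d·sin θ at normal incidence, to map each diffraction angle to a wavelength. The short Python sketch below uses a hypothetical groove spacing and angles.

```python
import math

def wavelength_from_angle(groove_spacing_nm, diffraction_angle_deg, order=1):
    """Grating equation at normal incidence: m * lambda = d * sin(theta).
    Returns the wavelength (nm) that is diffracted to the given angle."""
    return groove_spacing_nm * math.sin(math.radians(diffraction_angle_deg)) / order

# A grating with 1200 grooves/mm has a groove spacing of about 833 nm.
d_nm = 1e6 / 1200
for angle_deg in (20, 30, 40):
    print(f"{angle_deg} deg -> {wavelength_from_angle(d_nm, angle_deg):.0f} nm")
```

Reading off which angles (or detector pixels) receive light, and how much, is what turns a raw intensity pattern into a spectrum.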

Existing high-resolution spectrometers have been large and complex, making them difficult to use widely in everyday settings. However, thanks to the ultra-compact, high-resolution spectrometer developed by KAIST researchers, it is now expected that light’s color information can be utilized even within smartphones or wearable devices.

At Apple’s WWDC 2025 event, the company announced its most dramatic software design change in over a decade: Liquid Glass. This visual overhaul gives us a glimpse into what might be coming in Apple’s rumored AR glasses, which will reportedly debut next year.

Users are connecting Liquid Glass to potential AR glasses because the new design draws strong inspiration from that of Apple’s Vision Pro VR headset.

Liquid Glass is named with the idea that each window on a phone is like a pane of glass, see-through and somewhat reflective. It gives the screen a sleeker look, though in its developer beta, Apple hasn’t quite worked out the kinks of playing with opacity.
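Apple has not published how Liquid Glass composites its layers, but any see-through pane effect ultimately comes down to standard source-over alpha blending. The sketch below is a generic illustration of that arithmetic with made-up pixel values, not Apple’s implementation.

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    """Source-over compositing: a translucent foreground 'pane' blended onto
    a background. fg_alpha = 0 is fully transparent, 1 is fully opaque."""
    return tuple(fg_alpha * f + (1 - fg_alpha) * b for f, b in zip(fg_rgb, bg_rgb))

white_pane = (255, 255, 255)       # a frosted white window layer
wallpaper_pixel = (30, 90, 200)    # a blue pixel from the content behind it
for alpha in (0.2, 0.5, 0.8):
    r, g, b = over(white_pane, alpha, wallpaper_pixel)
    print(f"alpha={alpha}: ({r:.0f}, {g:.0f}, {b:.0f})")
```

Tuning that alpha, and how much of the background bleeds through, is exactly the kind of opacity balancing the developer beta is still working out.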

From smartphones and TVs to credit cards, technologies that manipulate light are deeply embedded in our daily lives, and many of them are based on holography. However, conventional holographic technologies have faced limitations, particularly in displaying multiple images on a single screen and in maintaining high-resolution image quality.

Recently, a research team led by Professor Junsuk Rho at POSTECH (Pohang University of Science and Technology) has developed a groundbreaking metasurface technology that can display up to 36 high-resolution images on a surface thinner than a human hair. This research has been published in Advanced Science.

This achievement is driven by a special nanostructure known as a metasurface. Hundreds of times thinner than a human hair, the metasurface is capable of precisely manipulating light as it passes through. The team fabricated nanometer-scale pillars using silicon nitride, a material known for its robustness and excellent optical transparency. These pillars, referred to as meta-atoms, allow for fine control of light on the metasurface.
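As a rough, hypothetical illustration of how a meta-atom imparts phase (this is not the POSTECH team’s design procedure), each nanopillar can be treated as a tiny waveguide whose effective refractive index depends on its diameter; the extra phase it adds relative to free space is roughly 2π(n_eff - 1)h/λ. The Python sketch below uses assumed values for pillar height, wavelength, and effective index.

```python
import math

def pillar_phase_shift(n_eff, height_nm, wavelength_nm):
    """Extra optical phase (radians) picked up by light crossing a nanopillar,
    relative to the same path in air: 2*pi*(n_eff - 1)*h / lambda."""
    return 2 * math.pi * (n_eff - 1) * height_nm / wavelength_nm

wavelength_nm = 532  # green light; all values here are illustrative assumptions
for n_eff in (1.4, 1.7, 2.0):
    phi = pillar_phase_shift(n_eff, height_nm=600, wavelength_nm=wavelength_nm)
    print(f"n_eff = {n_eff}: {math.degrees(phi) % 360:.0f} deg of phase delay")
```

Arranging pillars of different diameters therefore lets the surface impose a different phase at each point, which is what encodes a hologram.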

You may not have heard of tantalum, but chances are you’re holding some right now. It’s an essential component in our cell phones and laptops, and currently, there’s no effective substitute. Even if you plan to recycle your devices after they die, the tantalum inside is likely to end up in a landfill or be shipped overseas, lost forever.

As a researcher focused on critical materials recovery, I’ve spent years digging through electronic waste, not seeing it as garbage, but as an urban mine filled with valuable materials like tantalum.