
New retina-inspired photodiodes could advance machine vision

Over the past few decades, computer scientists have developed increasingly sophisticated sensors and machine learning algorithms that allow computer systems to process and interpret images and videos. This capability, known as machine vision, is proving highly advantageous for the manufacturing and production of food, drinks, electronics, and various other goods.

Machine vision could automate various tedious steps in industry and manufacturing, such as detecting defects, inspecting electronics, automotive parts, and other items, verifying labels and expiration dates, and sorting products into categories.

While the sensors underpinning many previously introduced machine vision systems are highly sophisticated, they typically do not process visual information in as much detail as the human retina (i.e., the light-sensitive tissue in the eye that processes visual signals).

Experimental PromptLock ransomware uses AI to encrypt, steal data

Threat researchers discovered the first known AI-powered ransomware, dubbed PromptLock, which uses Lua scripts to steal and encrypt data on Windows, macOS, and Linux systems.

The malware uses OpenAI’s gpt-oss:20b model through the Ollama API to dynamically generate the malicious Lua scripts from hard-coded prompts.
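
For readers unfamiliar with the mechanics, the request pattern described in the report looks roughly like the sketch below. This is an illustration only: it uses Ollama's documented /api/generate endpoint and the model name from the report, but substitutes a harmless prompt for PromptLock's hard-coded malicious ones.

```python
import json
import urllib.request

# Illustrative only: the same local Ollama endpoint the report describes,
# with a harmless prompt standing in for PromptLock's hard-coded ones.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local API

payload = {
    "model": "gpt-oss:20b",  # model named in the report
    "prompt": "Write a short Lua script that prints today's date.",
    "stream": False,         # return one JSON object instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    lua_script = json.loads(resp.read())["response"]

print(lua_script)  # generated Lua source, potentially different on every run
```

Because the Lua code is generated fresh at runtime rather than shipped in the binary, the resulting scripts can vary from one execution to the next, which complicates signature-based detection.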

Scientists just developed a new AI modeled on the human brain — it’s outperforming LLMs like ChatGPT at reasoning tasks

The hierarchical reasoning model (HRM) system is modeled on the way the human brain processes complex information, and it outperformed leading LLMs in a notoriously hard-to-beat benchmark.

Engineers send a wireless curveball to deliver massive amounts of data

High-frequency radio waves can wirelessly carry the vast amounts of data demanded by emerging technologies like virtual reality, but as engineers push into the upper reaches of the radio spectrum, they are hitting walls. Literally.

Ultrahigh-frequency transmissions are easily blocked by objects, so users can lose a signal while walking between rooms or even passing a bookcase.

Now, researchers at Princeton Engineering have developed a machine-learning system that could allow ultrahigh-frequency transmissions to dodge those obstacles. In an article in Nature Communications, the researchers unveiled a system that shapes transmissions to bend around obstacles, coupled with a neural network that can rapidly adjust to a complex and dynamic environment.

AI prescribes new electrolyte additive combinations for enhanced battery performance

Batteries, like humans, require medicine to function at their best. In battery technology, this medicine comes in the form of electrolyte additives, which enhance performance by forming stable interfaces, lowering resistance and boosting energy capacity, resulting in improved efficiency and longevity.

Finding the right electrolyte additive for a battery is much like prescribing the right medicine. With hundreds of candidates to consider, identifying the best additive for each battery is a challenge: the space of possible combinations is vast, and traditional experimental methods are time-consuming.

Researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory are using machine learning models to analyze known electrolyte additives and predict combinations that could improve battery performance. They trained the models to forecast key battery metrics, such as resistance and energy capacity, and applied them to suggest new additive combinations for testing.
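
As a rough illustration of that screening workflow (not Argonne's actual pipeline), a surrogate model can be trained on measured metrics and then used to rank untested combinations. Every feature, target, and candidate in the sketch below is an invented placeholder:

```python
# Minimal sketch of surrogate-model screening: train a regressor on
# measured battery metrics, then rank unseen additive combinations.
# All data here is synthetic; real features would encode additive chemistry.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row holds the concentrations of three
# known additives; the target is a measured cell resistance (ohms).
X_train = rng.uniform(0.0, 2.0, size=(200, 3))
y_resistance = 1.0 / (1.0 + X_train.sum(axis=1)) + rng.normal(0, 0.01, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_resistance)

# Enumerate untested additive combinations on a coarse grid and rank them
# by predicted resistance (lower is better) to prioritize lab experiments.
grid = np.linspace(0.0, 2.0, 5)
candidates = np.array(list(itertools.product(grid, repeat=3)))
predicted = model.predict(candidates)

best = candidates[np.argsort(predicted)[:5]]
print("Top combinations to test next:\n", best)
```

The payoff is in the last step: instead of synthesizing and cycling hundreds of cells, the lab only tests the handful of combinations the model ranks highest.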

New AI attack hides data-theft prompts in downscaled images

Researchers have developed a novel attack that steals user data by injecting malicious prompts into images that AI systems process before delivering them to a large language model.

The method relies on full-resolution images that carry instructions invisible to the human eye, which only become apparent when the image quality is lowered by resampling algorithms.

Developed by Trail of Bits researchers Kikimora Morozova and Suha Sabi Hussain, the attack builds upon a theory presented in a 2020 USENIX paper by researchers at TU Braunschweig, a German university, exploring the possibility of image-scaling attacks in machine learning.
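
The underlying scaling behavior is easy to reproduce in a toy setting. The sketch below is not the Trail of Bits exploit: it implements its own nearest-neighbor downscaler so the sampling grid is known exactly, and it hides a black square rather than rendered prompt text; a real attack must match the target pipeline's actual resampling algorithm.

```python
# Toy demonstration of the image-scaling trick these attacks exploit.
# A nearest-neighbor downscaler that keeps one pixel per 4x4 block is
# implemented directly, so we know exactly which pixels survive scaling.
import numpy as np

SRC, DST = 256, 64
STEP = SRC // DST  # 4: the downscaler keeps every 4th pixel

# An innocuous-looking full-resolution image: solid white.
full = np.full((SRC, SRC), 255, dtype=np.uint8)

# The hidden "payload" as a 64x64 image; a real attack would render the
# injected prompt as text here. A black square stands in for it.
payload = np.full((DST, DST), 255, dtype=np.uint8)
payload[16:48, 16:48] = 0

# Plant payload pixels only at the positions the downscaler will sample.
# At full resolution this is a sparse speckle (1 in 16 pixels), easy to miss.
full[::STEP, ::STEP] = payload

def downscale_nearest(img: np.ndarray, step: int) -> np.ndarray:
    """Toy nearest-neighbor downscaler: keep one pixel per step x step block."""
    return img[::step, ::step]

small = downscale_nearest(full, STEP)
assert np.array_equal(small, payload)  # the hidden payload is fully recovered
print("Payload pixels visible after downscaling:", int((small == 0).sum()))
```

The asymmetry is the whole attack: a human reviewing the full-resolution image sees near-white noise, while the model receives the downscaled version in which the planted content dominates.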
