
Legged robot could accelerate resource prospecting on the moon and the search for life on Mars

Planetary surface missions currently operate cautiously. On Mars, communication delays between Earth and rovers (typically between four and 22 minutes), as well as data transfer constraints due to uplink and downlink limitations, force scientists to plan operations in advance. Rovers are designed for energy efficiency and safety, and to move slowly across hazardous terrain.

As a result, exploration is typically limited to a small portion of the landing site, with rovers traveling at most a few hundred meters per day, which makes it difficult to collect geologically diverse data.

In a study published in Frontiers in Space Technologies, a team led by Dr. Gabriela Ligeza, a former Ph.D. student at the University of Basel and now a postdoctoral researcher at the European Space Agency (ESA), tested a different approach: a semi-autonomous robotic explorer that can investigate multiple targets in sequence and collect data without constant human intervention.

What’s inside a masterpiece? Laser scans and AI map paint layers molecule by molecule

Paintings are far more than dabs of oil on canvas. They are complex works of art composed of multiple layers, from primer and glues to the pigments and protective varnishes applied by the artists. Being able to see into these layers and map their chemical makeup is essential for art historians and conservators. A new technique developed by an international team of scientists can now probe paint layers in far greater molecular detail than before.

As they describe in a paper published in the journal Science Advances, the researchers combined a technique called MALDI-MSI (matrix-assisted laser desorption/ionization mass spectrometry imaging) with an AI named MSIpredictART to help identify the specific pigments and binders present in each layer of a painting.

Current approaches to examining the internal structure of a painting require several different tests on tiny samples. MALDI-MSI reduces the need for multiple separate techniques by using a high-resolution laser scan to map both the pigments and the binder or glue that holds them together.

Three-in-one diode integrates sensing, memory and processing for smart cameras

Think about how easily you recognize a friend in a dimly lit room. Your eyes capture light, while your brain filters out background noise, retrieves stored visual information, and processes the image to make a match. It all happens in a fraction of a second and uses remarkably little energy. Unfortunately, artificial vision systems in smartphones, cameras, and autonomous machines operate more like an assembly line. In our recent paper published in Nature Electronics, we describe how we addressed this challenge by enabling sensing, memory, and processing within the same device, pointing to a possible route toward more efficient machine vision.

The iGaN Laboratory led by Professor Haiding Sun at the School of Microelectronics, University of Science and Technology of China (USTC), in collaboration with multiple institutions, developed the multifunctional semiconductor diode with integrated photosensing, memory, and processing capabilities.

To understand the challenge, it helps to look at the basic building block of modern digital cameras: the semiconductor p-n diode. These tiny junctions act as the light-sensing pixels in imaging systems. However, a conventional diode is usually limited to a single function. It converts light into an electrical signal, and the captured data must then be transferred to separate memory and processing units. Moving this data back and forth consumes time, power, and chip area.

Meta-Harness: End-to-End Optimization of Model Harnesses

Think of a Large Language Model (LLM) like a brilliant scholar. To do their job well, they don’t just need their own brain; they need a good workspace—a desk with the right books, a filing cabinet that’s easy to navigate, and a clear set of instructions on how to process information. In the tech world, this “workspace” is called a harness.

Up until now, these harnesses have been built by human engineers through trial and error. While we have tools to automatically improve the AI’s “brain” (the model weights), the code that actually manages the AI’s information has remained stubbornly manual.


Meta-Harness automatically optimizes model harnesses — the code determining what to store, retrieve, and present to an LLM — surpassing hand-designed systems on text classification, math reasoning, and agentic coding.
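To make the idea concrete, here is a minimal, hypothetical sketch of harness optimization. The `run_task` scorer, the `top_k` knob, and the random-search loop are illustrative assumptions, not the actual Meta-Harness method or API described in the paper; the point is only that the *harness configuration* (what context reaches the model) is the thing being searched over, while the model itself is held fixed.

```python
import random

def run_task(harness, example):
    """Toy stand-in for an LLM call: the score rises with relevant
    documents kept in context and falls with context size (a crude
    token-budget penalty). Purely illustrative."""
    kept = example["docs"][: harness["top_k"]]
    relevant = sum(1 for d in kept if d["relevant"])
    return relevant - 0.1 * harness["top_k"]

def optimize_harness(examples, candidates, seed=0):
    """Pick the harness config with the best average task score.
    Random search stands in for the real end-to-end optimization."""
    rng = random.Random(seed)
    rng.shuffle(candidates)
    def avg_score(h):
        return sum(run_task(h, ex) for ex in examples) / len(examples)
    return max(candidates, key=avg_score)

examples = [{"docs": [{"relevant": True}, {"relevant": False}, {"relevant": True}]}]
candidates = [{"top_k": k} for k in (1, 2, 3)]
best = optimize_harness(examples, candidates)
```

The key design point the excerpt makes: the search operates on the retrieval/presentation code, not on the model weights.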

Cooperation by non-kin during birth underpins sperm whale social complexity

In an unprecedented observation, researchers in Science captured the birth of a sperm whale calf, documenting how 11 whales from two normally separate family groups coordinated closely to support the newborn for hours after its arrival.

These findings offer quantitative evidence of direct communal caregiving in cetaceans and suggest that short-term, highly coordinated cooperation during critical moments like birth may play a foundational role in maintaining the complex social structures seen in sperm whale societies.


Birth and neonatal care represent particularly revealing contexts for understanding the emergence of cooperation. Cetacean species produce a small number of offspring with long lifespans. Calves are born infrequently and represent a major maternal investment; calf survival depends heavily on immediate support after birth and early caregiving (9). Thus, births offer critical opportunities to study how individuals coordinate in high-stakes contexts. Direct quantitative observations of sperm whale births remain virtually absent (14), with only four sperm whale births being reported over the past 60 years, all of them either anecdotal or whaling related (15–18).

Within the matrilineal social units of sperm whales, individuals take turns socializing, foraging, and caring for calves across years (19–24). Through decades of observational work (19, 21, 22, 25–28), communal allocare for calves has been identified as the central mechanism driving selection for sociality in this species. Although it has been hypothesized that communal defense and shared parental care underpin the evolution of sperm whale sociality (19, 22, 23, 26), these hypotheses have lacked direct empirical grounding from observations of an actual birth. Newborns are assumed to be negatively buoyant (20, 29) and likely require immediate physical support to breathe, which potentially shapes the evolutionary importance of cooperative allocare within units (26, 30). Under this framework, the vulnerability of mothers and newborns around birth creates a high-risk context in which selection for cooperation is strongly imposed.

Here, we present a high-resolution, multiscale analysis of a sperm whale birth event through the integration of drone-based videography, machine learning, and longitudinal association and kinship data. We quantified how individuals across two distinct matrilines coordinated around the mother and newborn by analyzing and tracking physical support, proximity, orientation, and role distribution over time. Our results suggest that kin and non-kin engaged in sustained, cooperative, postnatal care, taking turns to support the newborn and maintain group cohesion, in contrast to historical kin-segregated foraging patterns (21). These findings provide rare quantitative evidence of direct allocare in cetaceans and can lend support to the hypothesis that transient, structured cooperation during birth is a key mechanism sustaining complex sociality in sperm whales.

Eyal Aharoni — Breaking the Moral Turing Test

Dr. Eyal Aharoni discusses one of the most provocative frontiers in technology: the automation of moral judgment. His talk focuses on the outcomes of a comparative moral Turing test (in which AI outperformed humans across a range of metrics), as well as AI-assisted medical triage.

Link in reply 🔗

Eyal Aharoni


Dr. Eyal Aharoni (Georgia State University) takes to the Future Day 2026 stage to discuss one of the most provocative frontiers in technology: the automation of moral judgment.

Breaking the Moral Turing Test: Studies of human attribution and deference to AI moral judgment and decision-making.

Aubrey de Grey — How close are we to robust mouse rejuvenation, and why does that matter?

Full talk at Future Day 2026 — link in reply 🔗


Polymath and trailblazer in bio-rejuvenation Aubrey de Grey gave a talk at Future Day 2026 on the next phase of robust mouse rejuvenation trials!

Synopsis: The “damage repair” approach to bringing aging under medical control has made huge strides since I first proposed it 25 years ago. However, since it is a divide-and-conquer strategy, we should not be surprised at the absence of progress in the “bottom line” of life extension, even in mice. Can we realistically expect that to change any time soon? I will present reasons to believe that we can, in the form of accelerating progress in proofs of efficacy of individual treatments, together with initial proof of concept that combining damage repair modalities will give additive benefits.

0:00 Intro.
0:29 Talk starts.
1:28 Age related vs infectious diseases.
3:26 Epidemic of the chronic conditions of late life — why?
4:42 Ways to be sick: popular view.
7:10 Aging in three words (metabolism, damage, pathology)
11:46 Ways to be sick: correct view.
15:29 What we do these days against aging — Geriatrics.
18:21 Gerontology: A more promising approach?
20:57 Metabolism is complex.
22:37 Maintenance: A common sense alternative.
24:39 Comparison: car maintenance.
26:00 7 deadly things.
29:17 Cell 153:1194 — too many citations to count.
30:22 The first round of the race to RMR (Robust Mouse Rejuvenation)
38:43 Females: yay, additivity!
40:09 Males: messier, but mostly the same story.
41:02 What health indices did we measure?
43:23 RMR2: ASAP! See levf.org/rmr2
46:30 AUBRAI
48:36 Learn more and help!
51:11 How has the longevity industry vibe changed over the last 7 years?
56:32 LEV Foundation: the only org working on this combination of damage repair regimes.
57:55 Has AI made progress in helping solve aging? In-silico medicine.
1:01:16 Changes to seven deadly things?
1:04:54 Hallmarks of aging — de facto taxonomy — difficulty translating to other taxonomies?
1:06:17 Has the damage repair methodology been attracting people over?
1:09:56 Straddling both academia and private industry — but what about the state?
1:13:36 Robust Mouse Rejuvenation timelines under ideal funding.
1:19:49 Infections.
1:25:46 Treatment cadence.

#rejuvenation #medicine #health #aging #ageing

14 JEPA Milestones as a Map of AI Progress

Thanks, Yann LeCun.

• JEPA / H-JEPA: avoids predicting every single pixel (too expensive) and instead predicts in latent space. H-JEPA adds hierarchy — short-term details vs. long-term planning, i.e., how humans actually learn.

• I-JEPA: built for efficient vision models. It masks image patches and predicts their semantics in latent space, bypassing the heavy compute of traditional autoencoders.

• MC-JEPA & V-JEPA: both built for video. MC-JEPA separates content (what an object is) from motion (how it moves). V-JEPA masks video features with no text labels, making it well suited for action recognition at scale.

• Audio-JEPA: filters out background noise by treating sounds like visuals.

• Point-JEPA & 3D-JEPA: used primarily in autonomous vehicles; they operate on LiDAR point clouds and volumetric grids.

• ACT-JEPA: filters out real world noise to learn manipulation tasks efficiently via imitation learning.
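The common idea across the list above can be sketched in a few lines: encode visible context, encode the masked targets, and compute the prediction loss between them *in latent space* rather than in pixel space. This is a minimal numpy toy, assuming random linear "encoders" and a mean-pooled context — the real JEPA models use deep networks, an EMA target encoder, and learned mask tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
D_in, D_lat = 16, 4                        # patch dim, latent dim

W_enc = rng.normal(size=(D_in, D_lat))     # shared encoder (in practice the
W_pred = rng.normal(size=(D_lat, D_lat))   # target side is an EMA copy)

patches = rng.normal(size=(8, D_in))       # 8 image/video patches
mask = np.zeros(8, dtype=bool)
mask[5:] = True                            # last 3 patches are "masked" targets

ctx_latent = patches[~mask] @ W_enc        # encode visible context
tgt_latent = patches[mask] @ W_enc         # encode targets — no pixels predicted

# Predict each target latent from the pooled context latent.
pred = np.tile(ctx_latent.mean(axis=0) @ W_pred, (mask.sum(), 1))

# The training signal lives entirely in the low-dimensional latent space.
latent_loss = np.mean((pred - tgt_latent) ** 2)
```

Note the payoff stated in the list: the loss is computed over `D_lat`-dimensional latents, not `D_in`-dimensional pixels, which is what makes the approach cheap relative to pixel-reconstructing autoencoders.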

Human brain operates near, but not at, the critical point

A recent study published in Physical Review Letters reveals that many widely used signatures of criticality in brain data may be statistical artifacts. The authors propose a more robust framework that, when applied to whole-brain fMRI data, confirms the brain operates near, but not exactly at, a critical point.

Neuroscientists have long been fascinated by the idea that the brain operates near a “critical point,” a phase transition between stable and chaotic dynamics. Theory suggests this sweet spot enhances computational flexibility, dynamic range, and sensitivity to inputs. Evidence has mounted over the years from neural recordings showing approximate scale invariance and power-law behavior across spatiotemporal scales.

The concept has even influenced AI, particularly reservoir computing, where networks near the “edge of chaos” tend to perform best. However, the field faces a persistent concern: are these criticality signatures intrinsic to the brain’s recurrent dynamics, or do external inputs and data limitations shape them?
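What a "critical point" means here can be illustrated with a toy branching process, a standard simplified model of neural avalanches (this sketch is my own illustration, not the paper's fMRI analysis). Each active unit triggers two others, each with probability `sigma/2`, so `sigma` is the branching ratio: below 1 activity dies out quickly, at exactly 1 avalanche sizes become heavy-tailed (approximately power-law), which is the signature the field looks for in recordings.

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Total activity triggered by one initial event. Each active unit
    activates 2 downstream units, each with probability sigma/2, giving
    an average branching ratio of sigma. Capped to keep runtime bounded."""
    active, total = 1, 1
    while active and total < cap:
        children = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
        total += children
        active = children
    return total

rng = random.Random(42)
sub = [avalanche_size(0.5, rng) for _ in range(2000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]  # critical

mean_sub = sum(sub) / len(sub)     # theory: 1 / (1 - 0.5) = 2
mean_crit = sum(crit) / len(crit)  # heavy-tailed, far larger on average
```

The study's point is that apparent signatures like these heavy tails can also be produced by external drive and finite data, which is why a more careful statistical framework is needed before concluding the brain sits at criticality.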
