
In the late 1960s, physicists like Charles Misner proposed that the regions surrounding singularities—points of infinite density at the centers of black holes—might exhibit chaotic behavior, with space and time undergoing erratic contractions and expansions. This concept, termed the “Mixmaster universe,” suggested that an astronaut venturing into such a black hole would experience a tumultuous mixing of their body parts, akin to the action of a kitchen mixer.

Einstein's general theory of relativity, which describes the gravitational dynamics of black holes, employs complex mathematical formulations that intertwine multiple equations. Historically, researchers like Misner introduced simplifying assumptions to make these equations more tractable. However, even with these assumptions, the computational tools of the time were insufficient to fully explore the chaotic nature of these regions, leading to a decline in related research.

Recently, advancements in mathematical techniques and computational power have reignited interest in studying the chaotic environments near singularities. Physicists aim to validate the earlier approximations made by Misner and others, ensuring they accurately reflect the predictions of Einsteinian gravity. Moreover, by delving deeper into the extreme conditions near singularities, researchers hope to bridge the gap between general relativity and quantum mechanics, potentially leading to a unified theory of quantum gravity.

Understanding the intricate and chaotic space-time near black hole singularities not only challenges our current physical theories but also promises to shed light on the fundamental nature of space and time themselves.


Physicists hope that understanding the churning region near singularities might help them reconcile gravity and quantum mechanics.

SAN FRANCISCO – BAE Systems won a $230.6 million NASA contract to deliver spacecraft for the National Oceanic and Atmospheric Administration’s Lagrange 1 Series space weather project.

Under the firm-fixed-price award, announced Feb. 21, BAE Systems Space & Mission Systems, formerly Ball Aerospace, will develop Lagrange 1 Series spacecraft, integrate instruments, and support flight and mission operations. Contract-related work, scheduled to begin this month, will be performed in Boulder, Colorado, through January 2034.

The Lagrange 1 Series, part of NOAA’s Space Weather Next program, is designed to provide continuity of coronal imagery and upstream solar wind measurements, with spacecraft expected to launch in 2029 and 2032. BAE Systems is also building the Space Weather Follow On Lagrange 1 mission, set to fly no earlier than September alongside NASA’s Interstellar Mapping and Acceleration Probe.

Neural technologies are adopting bio-inspired designs to enhance biointegration and functionality. This review maps the growing field of bio-inspired electronics and discusses recent developments in tissue-like bioelectronics, from soft interfaces to ‘biohybrid’ and ‘all-living’ platforms.

A new breakthrough in cosmic mapping has unveiled the structure of a colossal filament, part of the vast cosmic web that connects galaxies.

Dark matter and gas shape these filaments, but their faint glow makes them hard to detect. By using advanced telescope technology and hundreds of hours of observation, astronomers have captured the most detailed image yet, bringing us closer to decoding the evolution of galaxies and the hidden forces shaping the universe.

The hidden order of the universe.

Glaciers separate from the continental ice sheets in Greenland and Antarctica covered a global area of approximately 706,000 km² around the year 2000 (ref. 19), with an estimated total volume of 158,170 ± 41,030 km³, equivalent to a potential sea-level rise of 324 ± 84 mm (ref. 20). Glaciers are integral components of Earth’s climate and hydrologic system (ref. 1). Hence, glacier monitoring is essential for understanding and assessing ongoing changes (refs. 21,22), providing a basis for impact (refs. 2–10) and modelling (refs. 11–13) studies, and helping to track progress on limiting climate change (ref. 23). The four main observation methods to derive glacier mass changes include glaciological measurements, digital elevation model (DEM) differencing, altimetry and gravimetry. Additional concepts include hybrid approaches that combine different observation methods. In situ glaciological measurements have been carried out at about 500 unevenly distributed glaciers (ref. 24), representing less than 1% of Earth’s glaciers (ref. 19). Glaciological time series provide seasonal-to-annual variability of glacier mass changes (ref. 25). Although these are generally well correlated regionally, long-term trends of individual glaciers might not always be representative of a given region. Spaceborne observations complement in situ measurements, allowing for glacier monitoring at global scale over recent decades. Several optical and radar sensors allow the derivation of DEMs, which reflect the glacier surface topography. Repeat mapping and calculation of DEM differences provide multi-annual trends in elevation and volume changes (ref. 26) for all glaciers in the world (ref. 27). Similarly, laser and radar altimetry determine elevation changes along linear tracks, which can be extrapolated to calculate regional estimates of glacier elevation and volume change (ref. 28). Unlike DEM differencing, altimetry provides spatially sparse observations but has a high (that is, monthly to annual) temporal resolution (ref. 26). DEM differencing and altimetry require converting glacier volume to mass changes using density assumptions (ref. 29). Satellite gravimetry estimates regional glacier mass changes at monthly resolution by measuring changes in Earth’s gravitational field after correcting for solid Earth and hydrological effects (refs. 30,31). Although satellite gravimetry provides high temporal resolution and direct estimates of mass, it has a spatial resolution of a few hundred kilometres, which is several orders of magnitude lower than that of DEM differencing or altimetry (ref. 26).
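To make the volume-to-mass conversion concrete, here is a minimal sketch in Python of the DEM-differencing arithmetic described above. The grids, pixel size, and the commonly used 850 kg m⁻³ density assumption are illustrative choices, not the actual processing chain of any study cited here.

```python
# Minimal sketch: glacier mass change from two co-registered DEMs.
# Inputs are synthetic; the density assumption (850 kg/m^3) is a
# widely used convention for converting volume to mass change.
import numpy as np

def mass_change_from_dems(dem_t0, dem_t1, pixel_area_m2, density_kg_m3=850.0):
    """Glacier-wide mass change (Gt) from two elevation grids (m)."""
    dh = dem_t1 - dem_t0                 # elevation change per pixel (m)
    dV = np.nansum(dh) * pixel_area_m2   # volume change (m^3)
    dM_kg = dV * density_kg_m3           # mass change (kg)
    return dM_kg / 1e12                  # kg -> Gt

# Hypothetical glacier: 100 x 100 pixels at 30 m, thinning ~0.5 m on average
rng = np.random.default_rng(0)
dem_2000 = 3000 + rng.normal(0, 50, (100, 100))
dem_2020 = dem_2000 - 0.5 + rng.normal(0, 0.1, (100, 100))
dM = mass_change_from_dems(dem_2000, dem_2020, pixel_area_m2=30 * 30)

# Sea-level equivalent: ~362.5 Gt of ice raises global mean sea level ~1 mm
print(f"mass change: {dM:.4f} Gt, SLE: {-dM / 362.5:.6f} mm")
```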

The heterogeneity of these observation methods in terms of spatial, temporal and observational characteristics, the diversity of approaches within a given method, and the lack of homogenization challenged past assessments of glacier mass changes. In the Intergovernmental Panel on Climate Change (IPCC)’s Sixth Assessment Report (AR6) (ref. 16), for example, glacier mass changes for the period from 2000 to 2019 relied on DEM differencing from a limited number of global (ref. 27) and regional (ref. 16) studies. Results from a combination of glaciological and DEM differencing (ref. 25) as well as from gravimetry (ref. 30) were used for comparison only. The report calculated regional estimates over a specific baseline period (2000–2019) and as mean mass-change rates based on selected studies per region, which only partly considered the strengths and limitations of the different observation methods.

The spread of reported results—many outside uncertainty margins—and recent updates from different observation methods afford an opportunity to assess regional and global glacier mass loss with a community-led effort. Within the Glacier Mass Balance Intercomparison Exercise (GlaMBIE; https://glambie.org), we collected, homogenized and combined regional results from the observation methods described above to yield a global assessment towards the upcoming IPCC reports of the seventh assessment cycle. At the same time, GlaMBIE provides insights into regional trends and interannual variabilities, quantifies the differences among observation methods, tracks observations within the range of projections, and delivers a refined observational baseline for future impact and modelling studies.
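As a rough illustration of what “combining regional results” can mean in practice, the sketch below computes an inverse-variance weighted mean of hypothetical mass-change rates from three methods. GlaMBIE’s actual homogenization and combination procedure is considerably more involved; this shows only the basic statistical idea.

```python
# Minimal sketch: combine heterogeneous estimates of a regional
# mass-change rate via an inverse-variance weighted mean.
# All numbers are hypothetical.
import numpy as np

def combine(estimates_gt_per_yr, sigmas):
    """Inverse-variance weighted mean and its standard error."""
    est = np.asarray(estimates_gt_per_yr, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * est) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

# Hypothetical rates (Gt/yr) from, say, DEM differencing, altimetry, gravimetry
mean, sigma = combine([-25.0, -21.0, -28.0], [3.0, 5.0, 8.0])
print(f"combined: {mean:.1f} ± {sigma:.1f} Gt/yr")
```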


My name is Artem. I’m a graduate student at the NYU Center for Neural Science and a researcher at the Flatiron Institute.

In this video we are exploring a fascinating paper which revealed the role of biological constraints in determining which patterns of neural dynamics the brain can and cannot learn.

Augmented reality (AR) has become a hot topic in the entertainment, fashion, and makeup industries. Though a few different technologies exist in these fields, dynamic facial projection mapping (DFPM) is among the most sophisticated and visually stunning ones. Briefly put, DFPM consists of projecting dynamic visuals onto a person’s face in real-time, using advanced facial tracking to ensure projections adapt seamlessly to movements and expressions.

While imagination should ideally be the only thing limiting what’s possible with DFPM in AR, this approach is held back by technical challenges. Projecting visuals onto a moving face requires the DFPM system to detect the user’s facial features, such as the eyes, nose, and mouth, in under a millisecond.

Even slight delays in processing or minuscule misalignments between the camera’s and projector’s image coordinates can result in projection errors—or “misalignment artifacts”—that viewers can notice, ruining the immersion.
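A quick back-of-the-envelope calculation shows why sub-millisecond detection matters. The head speed and latency values below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope: for a head moving at speed v, an end-to-end
# latency t shifts the projected image by roughly v * t on the face.
def misalignment_mm(head_speed_m_s, latency_ms):
    return head_speed_m_s * (latency_ms / 1000.0) * 1000.0  # metres -> mm

for latency in (1, 5, 16):   # ms: sub-ms target vs. typical camera frame times
    err = misalignment_mm(head_speed_m_s=0.5, latency_ms=latency)
    print(f"{latency:>2} ms latency -> ~{err:.1f} mm projection error")
```

Even at a modest 0.5 m/s head motion, a single video frame of delay (16 ms) produces a misalignment of several millimetres, which is easily visible on facial features like the eyes and mouth.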

The achievement, notable for its accuracy and scale, brings scientists closer to understanding how neurons connect and communicate.

Mapping Thousands of Synaptic Connections

Harvard researchers have successfully mapped and cataloged over 70,000 synaptic connections from approximately 2,000 rat neurons. They achieved this using a silicon chip capable of detecting small but significant synaptic signals from a large number of neurons simultaneously.

Summary: Researchers have developed a geometric deep learning approach to uncover shared brain activity patterns across individuals. The method, called MARBLE, learns dynamic motifs from neural recordings and identifies common strategies used by different brains to solve the same task.

Tested on macaques and rats, MARBLE accurately decoded neural activity linked to movement and navigation, outperforming other machine learning methods. The system works by mapping neural data into high-dimensional geometric spaces, enabling pattern recognition across individuals and conditions.
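As a toy illustration of the general idea, namely that recordings from different brains can share latent dynamics recoverable in a common low-dimensional space, the sketch below embeds synthetic data from two “animals” and tests whether a decoder transfers between them. This is not MARBLE’s actual pipeline, just the underlying intuition expressed with plain PCA and least squares.

```python
# Toy illustration (not MARBLE): two "animals" record different neurons
# driven by the same latent task dynamics; embed each recording, align
# the embeddings, and check that a decoder transfers across animals.
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 3))                       # shared task dynamics
A1, A2 = rng.normal(size=(3, 40)), rng.normal(size=(3, 40))
neural1 = latent @ A1 + 0.1 * rng.normal(size=(500, 40))  # animal 1 neurons
neural2 = latent @ A2 + 0.1 * rng.normal(size=(500, 40))  # animal 2 neurons
behaviour = latent[:, 0]                                  # variable to decode

def pca(X, k=3):
    """Project mean-centred data onto its top-k principal components."""
    X = X - X.mean(0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

Z1, Z2 = pca(neural1), pca(neural2)
# Align animal 2's embedding to animal 1's with least squares
R, *_ = np.linalg.lstsq(Z2, Z1, rcond=None)
w, *_ = np.linalg.lstsq(Z1, behaviour, rcond=None)  # decoder fit on animal 1
r = np.corrcoef(Z2 @ R @ w, behaviour)[0, 1]        # transfer to animal 2
print(f"cross-animal decoding correlation: {r:.2f}")
```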

MIT researchers developed a new approach for assessing predictions with a spatial dimension, like forecasting weather or mapping air pollution.

Imagine relying on a weather app to predict next week’s temperature. How do you know you can trust its forecast? Scientists use statistical and physical models to make predictions about everything from weather to air pollution. But checking whether these models are truly reliable is trickier than it seems, especially when the locations where we have validation data don’t match the locations where predictions are needed. Traditional validation methods struggle with this problem, failing to provide consistent accuracy in real-world scenarios.

In this work, researchers introduce a new validation approach designed to improve trust in spatial predictions. They define a key requirement: as more validation data becomes available, the accuracy of the validation method should improve indefinitely. They show that existing methods don’t always meet this standard. Instead, they propose an approach inspired by previous work on handling differences in data distributions (known as “covariate shift”) but adapted for spatial prediction. Their method not only meets their strict validation requirement but also outperforms existing techniques in both simulations and real-world data.
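The covariate-shift idea can be illustrated with a small sketch: reweight held-out errors by the ratio of the target-location density to the validation-location density, so the validation score reflects the places we actually care about. The distributions and the deliberately biased model below are illustrative assumptions; the paper’s method is adapted specifically to spatial prediction.

```python
# Minimal sketch of importance-weighted validation under covariate shift.
# Validation sites are uniform on [0, 1]; target sites cluster at high x,
# exactly where the (synthetic) model is biased.
import numpy as np
from scipy.stats import beta, uniform

rng = np.random.default_rng(2)
x_val = rng.uniform(0, 1, 200)                    # locations with labels
y_val = np.sin(4 * x_val) + 0.1 * rng.normal(size=200)
pred = np.sin(4 * x_val) + 0.3 * (x_val > 0.7)    # model biased at high x

# Density ratio p_target(x) / p_validation(x); known in closed form here
# because we chose the distributions, but estimated from data in practice.
w = beta(5, 2).pdf(x_val) / uniform(0, 1).pdf(x_val)

naive_mse = np.mean((pred - y_val) ** 2)
weighted_mse = np.sum(w * (pred - y_val) ** 2) / np.sum(w)
print(f"naive MSE: {naive_mse:.4f}, shift-aware MSE: {weighted_mse:.4f}")
```

The naive score understates the error because most validation points sit where the model is fine; the weighted score surfaces the bias in the region the target distribution emphasizes.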

By refining how we validate predictive models, this work helps ensure that critical forecasts—like air pollution levels or extreme weather events—can be trusted with greater confidence.


A new evaluation method assesses the accuracy of spatial prediction techniques, outperforming traditional methods. This could help scientists make better predictions in areas like weather forecasting, climate research, public health, and ecological management.