
Scientists have uncovered “Quipu,” the largest known galactic structure, stretching 1.4 billion light-years. This discovery reshapes cosmic mapping and affects key measurements of the universe’s expansion.

A team of scientists has identified the largest cosmic superstructure ever reliably measured. The discovery was made while mapping the nearby universe using galaxy clusters detected in the ROSAT X-ray satellite’s sky survey. Spanning approximately 1.4 billion light-years, this structure — primarily composed of dark matter — is the largest known formation in the universe to date. The research was led by scientists from the Max Planck Institute for Extraterrestrial Physics and the Max Planck Institute for Physics, in collaboration with colleagues from Spain and South Africa.

A Vastly Structured Universe

How can artificial intelligence (AI) help city planners provide more green space? A recent study published in the ACM Journal on Computing and Sustainable Societies addresses this question: a team of researchers proposed a novel AI-based approach for both monitoring and improving urban green spaces. These natural public spaces, such as parks and gardens, provide myriad benefits, including better physical and mental health, climate-change mitigation, wildlife habitat, and increased social interaction.

For the study, the researchers developed a method they call “green augmentation,” which uses an AI algorithm to analyze Google Earth satellite images. The approach improves on current AI methods by more accurately identifying green vegetation, such as grass and trees, across varying weather and seasonal conditions. Current AI methods identify green vegetation with an accuracy of 63.3 percent and a reliability of 64 percent; the new method achieves 89.4 percent accuracy and 90.6 percent reliability.

“Previous methods relied on simple light wavelength measurements,” said Dr. Rumi Chunara, who is an associate professor of biostatistics at New York University and a co-author on the study. “Our system learns to recognize more subtle patterns that distinguish trees from grass, even in challenging urban environments. This type of data is necessary for urban planners to identify neighborhoods that lack vegetation so they can develop new green spaces that will deliver the most benefits possible. Without accurate mapping, cities cannot address disparities effectively.”
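The study’s code isn’t reproduced here, but the accuracy and reliability figures above can be made concrete with a toy evaluation: a classic spectral (NDVI) baseline of the kind the new method improves on, scored by overall pixel accuracy and by precision (one common reading of “reliability”). Everything below, from the band arrays to the 0.3 threshold, is an illustrative assumption rather than the paper’s pipeline.

import numpy as np

def ndvi_baseline(red, nir, threshold=0.3):
    """Classic spectral baseline: NDVI = (NIR - red) / (NIR + red).
    Pixels above the threshold are labeled vegetation (True)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi > threshold

def accuracy_and_reliability(pred, truth):
    """Accuracy: fraction of pixels labeled correctly.
    'Reliability' read here as precision: of the pixels the model
    calls vegetation, the fraction that truly are."""
    accuracy = np.mean(pred == truth)
    predicted_positive = pred.sum()
    precision = (pred & truth).sum() / max(predicted_positive, 1)
    return accuracy, precision

# Toy example with synthetic 'imagery' and a stand-in ground-truth mask
rng = np.random.default_rng(0)
red = rng.random((256, 256))
nir = rng.random((256, 256))
truth = (nir - red) > 0.2

pred = ndvi_baseline(red, nir)
acc, rel = accuracy_and_reliability(pred, truth)
print(f"accuracy={acc:.3f}, reliability(precision)={rel:.3f}")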

Swimming robots are essential for mapping pollution, studying aquatic ecosystems, and monitoring water quality in sensitive areas such as coral reefs and lake shores. However, many existing models rely on noisy propellers that can disturb or even harm wildlife. Additionally, navigating these environments is challenging due to natural obstacles like plants, animals, and debris.

To address these issues, researchers from the Soft Transducers Lab and the Unsteady Flow Diagnostics Laboratory at EPFL’s School of Engineering, in collaboration with the Max Planck Institute for Intelligent Systems, have developed a compact, highly maneuverable swimming robot. Smaller than a credit card and weighing just six grams, this agile robot can navigate tight spaces and carry payloads significantly heavier than itself. Its design makes it particularly suited for confined environments such as rice fields or for inspecting waterborne machinery. The study has been published in Science Robotics.

“In 2020, our team demonstrated autonomous insect-scale crawling robots, but making untethered ultra-thin robots for aquatic environments is a whole new challenge,” says EPFL Soft Transducers Lab head Herbert Shea. “We had to start from scratch, developing more powerful soft actuators, new undulating locomotion strategies, and compact high-voltage electronics.”
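The actuation details aren’t given in this excerpt, but “undulating locomotion” generically means driving the body or fin with a travelling wave that moves from head to tail. Below is a minimal sketch of such a drive signal; the amplitude, frequency, wavelength, and segment count are made-up values for illustration, not the robot’s specifications.

import numpy as np

def travelling_wave(s, t, amplitude=1.0, freq=5.0, wavelength=0.04):
    """Deflection command along the body: a wave travelling head-to-tail.
    s: positions along the body (m); t: time (s)."""
    return amplitude * np.sin(2 * np.pi * (freq * t - s / wavelength))

s = np.linspace(0.0, 0.03, 8)   # 8 actuator segments over a 3 cm body
print(travelling_wave(s, t=0.01))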

In the late 1960s, physicists like Charles Misner proposed that the regions surrounding singularities—points of infinite density at the centers of black holes—might exhibit chaotic behavior, with space and time undergoing erratic contractions and expansions. This concept, termed the “Mixmaster universe,” suggested that an astronaut venturing into such a black hole would experience a tumultuous mixing of their body parts, akin to the action of a kitchen mixer.

Einstein’s general theory of relativity, which describes the gravitational dynamics of black holes, employs complex mathematical formulations that intertwine multiple equations. Historically, researchers like Misner introduced simplifying assumptions to make these equations more tractable. However, even with these assumptions, the computational tools of the time were insufficient to fully explore the chaotic nature of these regions, leading to a decline in related research.

Recently, advancements in mathematical techniques and computational power have reignited interest in studying the chaotic environments near singularities. Physicists aim to validate the earlier approximations made by Misner and others, ensuring they accurately reflect the predictions of Einsteinian gravity. Moreover, by delving deeper into the extreme conditions near singularities, researchers hope to bridge the gap between general relativity and quantum mechanics, potentially leading to a unified theory of quantum gravity.
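For readers who want the standard formalism behind this chaos (textbook Belinskii–Khalatnikov–Lifshitz material, not taken from the article itself): near the singularity, the geometry passes through a sequence of Kasner epochs,

\[
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad p_1 + p_2 + p_3 = p_1^2 + p_2^2 + p_3^2 = 1,
\]

and each “bounce” between epochs replaces the exponents according to the BKL map

\[
u \mapsto u - 1 \quad (u \ge 2), \qquad u \mapsto \frac{1}{u - 1} \quad (1 < u < 2).
\]

Iterating this map amounts to a continued-fraction expansion of the initial parameter u, and its extreme sensitivity to initial conditions is the source of the “Mixmaster” chaos.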

Understanding the intricate and chaotic space-time near black hole singularities not only challenges our current physical theories but also promises to shed light on the fundamental nature of space and time themselves.


Physicists hope that understanding the churning region near singularities might help them reconcile gravity and quantum mechanics.

SAN FRANCISCO – BAE Systems won a $230.6 million NASA contract to deliver spacecraft for the National Oceanic and Atmospheric Administration’s Lagrange 1 Series space weather project.

Under the firm-fixed-price award, announced Feb. 21, BAE Systems Space & Mission Systems, formerly Ball Aerospace, will develop Lagrange 1 Series spacecraft, integrate instruments, and support flight and mission operations. Contract-related work, scheduled to begin this month, will be performed in Boulder, Colorado, through January 2034.

The Lagrange 1 Series, part of NOAA’s Space Weather Next program, is designed to provide continuity of coronal imagery and upstream solar wind measurements, with spacecraft expected to launch in 2029 and 2032. BAE Systems is also building the Space Weather Follow On Lagrange 1 mission, set to fly no earlier than September alongside NASA’s Interstellar Mapping and Acceleration Probe.

Neural technologies are adopting bio-inspired designs to enhance biointegration and functionality. This review maps the growing field of bio-inspired electronics and discusses recent developments in tissue-like bioelectronics, from soft interfaces to ‘biohybrid’ and ‘all-living’ platforms.

A new breakthrough in cosmic mapping has unveiled the structure of a colossal filament, part of the vast cosmic web that connects galaxies.

Dark matter and gas shape these filaments, but their faint glow makes them hard to detect. By using advanced telescope technology and hundreds of hours of observation, astronomers have captured the most detailed image yet, bringing us closer to decoding the evolution of galaxies and the hidden forces shaping the universe.

The hidden order of the universe.

Glaciers separate from the continental ice sheets in Greenland and Antarctica covered a global area of approximately 706,000 km² around the year 2000 [19], with an estimated total volume of 158,170 ± 41,030 km³, equivalent to a potential sea-level rise of 324 ± 84 mm [20]. Glaciers are integral components of Earth’s climate and hydrologic system [1]. Hence, glacier monitoring is essential for understanding and assessing ongoing changes [21,22], providing a basis for impact [2–10] and modelling [11–13] studies, and helping to track progress on limiting climate change [23].

The four main observation methods to derive glacier mass changes include glaciological measurements, digital elevation model (DEM) differencing, altimetry and gravimetry. Additional concepts include hybrid approaches that combine different observation methods. In situ glaciological measurements have been carried out at about 500 unevenly distributed glaciers [24], representing less than 1% of Earth’s glaciers [19]. Glaciological time series provide seasonal-to-annual variability of glacier mass changes [25]. Although these are generally well correlated regionally, long-term trends of individual glaciers might not always be representative of a given region. Spaceborne observations complement in situ measurements, allowing for glacier monitoring at global scale over recent decades. Several optical and radar sensors allow the derivation of DEMs, which reflect the glacier surface topography. Repeat mapping and calculation of DEM differences provide multi-annual trends in elevation and volume changes [26] for all glaciers in the world [27]. Similarly, laser and radar altimetry determine elevation changes along linear tracks, which can be extrapolated to calculate regional estimates of glacier elevation and volume change [28]. Unlike DEM differencing, altimetry provides spatially sparse observations but has a high (that is, monthly to annual) temporal resolution [26]. DEM differencing and altimetry require converting glacier volume to mass changes using density assumptions [29]. Satellite gravimetry estimates regional glacier mass changes at monthly resolution by measuring changes in Earth’s gravitational field after correcting for solid Earth and hydrological effects [30,31]. Although satellite gravimetry provides high temporal resolution and direct estimates of mass, it has a spatial resolution of a few hundred kilometres, which is several orders of magnitude lower than DEM differencing or altimetry [26].
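As a concrete illustration of the DEM-differencing and density-conversion steps just described, here is a minimal sketch of a geodetic mass-balance calculation. The function and array names are invented for illustration, and the 850 kg m⁻³ volume-to-mass factor is a value commonly assumed in the glaciological literature, not one taken from this paper.

import numpy as np

RHO_CONVERSION = 850.0  # kg m^-3, a commonly assumed volume-to-mass factor
RHO_WATER = 1000.0      # kg m^-3, to express results in metres water equivalent

def geodetic_mass_change(dem_t0, dem_t1, glacier_mask, pixel_area):
    """Mass change from two co-registered DEMs over a glacier mask."""
    dh = (dem_t1 - dem_t0)[glacier_mask]          # elevation change per pixel (m)
    volume_change = dh.sum() * pixel_area          # m^3
    mass_change = volume_change * RHO_CONVERSION   # kg
    area = glacier_mask.sum() * pixel_area         # m^2
    b_we = mass_change / (RHO_WATER * area)        # specific balance (m w.e.)
    return volume_change, mass_change, b_we

# Toy example: a uniform 2 m thinning over a synthetic 100x100 grid
rng = np.random.default_rng(1)
dem_t0 = rng.normal(3000.0, 50.0, (100, 100))
dem_t1 = dem_t0 - 2.0
mask = np.ones((100, 100), dtype=bool)
print(geodetic_mass_change(dem_t0, dem_t1, mask, pixel_area=30.0**2))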

The heterogeneity of these observation methods in terms of spatial, temporal and observational characteristics, the diversity of approaches within a given method, and the lack of homogenization challenged past assessments of glacier mass changes. In the Intergovernmental Panel on Climate Change (IPCC)’s Sixth Assessment Report (AR6) [16], for example, glacier mass changes for the period from 2000 to 2019 relied on DEM differencing from a limited number of global [27] and regional [16] studies. Results from a combination of glaciological and DEM differencing [25] as well as from gravimetry [30] were used for comparison only. The report calculated regional estimates over a specific baseline period (2000–2019) and as mean mass-change rates based on selected studies per region, which only partly considered the strengths and limitations of the different observation methods.

The spread of reported results—many outside uncertainty margins—and recent updates from different observation methods afford an opportunity to assess regional and global glacier mass loss with a community-led effort. Within the Glacier Mass Balance Intercomparison Exercise (GlaMBIE; https://glambie.org), we collected, homogenized and combined regional results from the observation methods described above to yield a global assessment towards the upcoming IPCC reports of the seventh assessment cycle. At the same time, GlaMBIE provides insights into regional trends and interannual variabilities, quantifies the differences among observation methods, tracks observations within the range of projections, and delivers a refined observational baseline for future impact and modelling studies.
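GlaMBIE’s published homogenization and combination scheme is more involved than any single formula, but the core idea of merging independent regional estimates with their uncertainties can be sketched as an inverse-variance weighted mean. This is a simplification for illustration, not the project’s algorithm, and the numbers below are hypothetical.

import numpy as np

def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean of independent estimates.
    values: mass-change rates (e.g., Gt/yr) from different methods;
    sigmas: their 1-sigma uncertainties."""
    w = 1.0 / np.square(sigmas)
    mean = np.sum(w * values) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return mean, sigma

# Hypothetical regional rates from DEM differencing, altimetry, gravimetry
rates = np.array([-12.0, -10.5, -14.0])   # Gt/yr (illustrative numbers)
errs  = np.array([1.5, 2.0, 3.0])
print(combine_estimates(rates, errs))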

My name is Artem; I’m a graduate student at the NYU Center for Neural Science and a researcher at the Flatiron Institute.

In this video, we explore a fascinating paper that revealed how biological constraints shape which patterns of neural dynamics the brain can and cannot learn.

Augmented reality (AR) has become a hot topic in the entertainment, fashion, and makeup industries. Though a few different technologies exist in these fields, dynamic facial projection mapping (DFPM) is among the most sophisticated and visually stunning ones. Briefly put, DFPM consists of projecting dynamic visuals onto a person’s face in real-time, using advanced facial tracking to ensure projections adapt seamlessly to movements and expressions.

While imagination should ideally be the only thing limiting what’s possible with DFPM in AR, the approach is held back by technical challenges. Projecting visuals onto a moving face requires the DFPM system to detect the user’s facial features, such as the eyes, nose, and mouth, in under a millisecond.

Even slight delays in processing or minuscule misalignments between the camera’s and projector’s image coordinates can result in projection errors—or “misalignment artifacts”—that viewers can notice, ruining the immersion.
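To make the misalignment point concrete: once the projector is calibrated against the camera, every tracked landmark must be remapped from camera pixels to projector pixels before the frame is drawn, and any error in that mapping shows up directly on the face. Below is a minimal sketch of that remapping via a planar homography; real DFPM systems use full 3D calibration and latency compensation, and the matrix here is invented for illustration.

import numpy as np

# Hypothetical 3x3 camera-to-projector homography from calibration
H = np.array([[1.02, 0.01, -14.0],
              [0.00, 1.03,   9.5],
              [0.00, 0.00,   1.0]])

def camera_to_projector(points_cam, H):
    """Map Nx2 camera-pixel landmarks to projector pixels
    using homogeneous coordinates."""
    pts = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]

landmarks = np.array([[320.0, 240.0], [350.0, 260.0]])  # e.g., eye corners
print(camera_to_projector(landmarks, H))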