
Abstract: Decoding neurodegeneration one cell at a time

https://doi.org/10.1172/JCI199841 As part of the JCI’s Review Series on Neurodegeneration, Olivia Gautier, Thao P. Nguyen & Aaron D. Gitler explore the molecular basis for selective neuronal vulnerability and degeneration and summarize recent advances and applications of single-cell genomic approaches.


How do we decide whether we should choose single-cell or single-nucleus sequencing? This depends on sample types and biological applications. Single-cell sequencing is typically applied to fresh, readily dissociable tissues or cultured cells to study intact cell populations. Because it captures both cytoplasmic and nuclear transcripts, scRNA-seq provides a comprehensive view of cellular gene expression. However, tissue dissociation can induce stress-related transcriptional artifacts and introduce substantial cell-type bias. Large or fragile neurons are often lost during dissociation, whereas smaller cell types, such as astrocytes and oligodendrocytes, tend to be overrepresented. In contrast, single-nucleus sequencing is commonly used for frozen samples or for tissues that are difficult to dissociate, including the brain and spinal cord. Although fresh or fresh-frozen samples are typically used, snRNA-seq is compatible with formalin-fixed, paraffin-embedded (FFPE) samples, enabling the analysis of archived human specimens. A key limitation is that snRNA-seq does not capture cytoplasmic transcripts and is therefore biased toward nuclear, often premature, mRNA species.

Spatial transcriptomics does not require tissue dissociation and enables examination of cellular transcriptomes within their native tissue niches. Some spatial transcriptomic technologies are now compatible with FFPE samples, allowing analyses of preserved clinical specimens along with fixed-frozen and fresh-frozen samples. These technologies can be broadly classified into two main categories: imaging-based and sequencing-based (Figure 2B). Imaging-based approaches, like multiplexed error-robust fluorescence in situ hybridization (MERFISH), spatially resolved transcript amplicon readout mapping (STARmap), and 10x Genomics Xenium, rely on probe hybridization and multiplexed imaging to detect and visualize transcripts at high spatial resolution, often achieving single-cell or even subcellular resolution (17, 18). Although whole-transcriptome measurements are possible, MERFISH typically targets predefined gene panels due to the constraints of iterative hybridization and imaging. In contrast, sequencing-based approaches, including NanoString GeoMx and 10x Genomics Visium, capture RNA on spatially barcoded tissue slides or nanobeads followed by next-generation sequencing. These methods generally recover a broader range of transcripts than imaging-based approaches but, in most cases, do not yet achieve true single-cell resolution. Instead, they measure gene expression within spatial “spots” that encompass multiple cells and therefore rely on computational deconvolution to infer cell-type composition. Newer spatial transcriptomic methods, like spatial enhanced resolution omics sequencing (Stereo-seq) and reverse-padlock amplicon-encoding fluorescence in situ hybridization (RAEFISH), are approaching single-cell and single-molecule resolution (19–21).
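The deconvolution step mentioned above can be sketched in a few lines: given a reference matrix of cell-type expression signatures, a spot's mixed expression profile is decomposed into non-negative cell-type weights. This is a minimal illustration only; the gene values, cell types, and proportions below are invented for demonstration, and real pipelines use dedicated tools with far richer statistical models.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical reference signatures: mean expression of 5 genes
# across 3 cell types (neurons, astrocytes, oligodendrocytes).
signatures = np.array([
    [9.0, 0.5, 0.2],   # marker gene high in neurons
    [0.3, 8.0, 0.4],   # marker gene high in astrocytes
    [0.2, 0.6, 7.5],   # marker gene high in oligodendrocytes
    [4.0, 4.0, 0.5],
    [1.0, 0.3, 5.0],
])

# Simulate one spatial "spot" that mixes 60% neurons,
# 30% astrocytes, and 10% oligodendrocytes, plus noise.
true_props = np.array([0.6, 0.3, 0.1])
spot = signatures @ true_props + rng.normal(0, 0.05, size=5)

# Non-negative least squares recovers mixing weights;
# normalizing them yields estimated cell-type proportions.
weights, _ = nnls(signatures, spot)
props = weights / weights.sum()
print(np.round(props, 2))
```

Here NNLS stands in for the likelihood-based or regression-based models that published deconvolution methods actually use, but it captures the core idea: expressing each spot as a non-negative combination of reference cell-type profiles.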

In this Review, we summarize recent advances and applications of single-cell genomics approaches to study neurodegenerative disorders, including Alzheimer disease (AD), Parkinson disease (PD), amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington disease (HD). We focus on how these approaches provide insight into the unique vulnerabilities of specific neuronal populations, define novel disease-associated cellular states, and reveal contributions of non-neuronal cells to disease pathogenesis. We then look to the future, envisioning how these technologies will empower genetic screens to uncover modifiers of neurodegeneration and new therapeutic targets.

Physicists observe rare nuclear isomer in ytterbium-150 for first time

Nuclear isomers are crucial probes for studying the structure of nuclei. Unlike chemical isomers—which have the same chemical formula but different arrangements of atoms—nuclear isomers are nuclei that exist in a long-lived and relatively stable excited state.

Normally, an atomic nucleus resides in its lowest-energy state, known as the ground state. Under external perturbations, such as nucleus-nucleus collisions, however, a nucleus can be excited to a higher-energy state.

While most excited nuclear states are extremely short-lived and rapidly decay back to the ground state, some nuclei remain “trapped” in an excited state for a remarkably long time. Such isomeric states help reveal the structure of the nucleus due to their high sensitivity to the underlying shell structure as well as to changes in single-particle levels.

World’s most advanced supercomputers decode nuclear reactor turbulence

At Argonne National Laboratory, researchers are trading in old-school approximations for raw supercomputing power, proving that the secret to a safer carbon-free future lies in mastering the math of chaos.

Researchers are advancing nuclear safety by using high-performance computing to model turbulent flow — the chaotic movement of fluids and gases that governs heat transfer and gas mixing within a reactor.

Fluid simulation at unprecedented scale provides toolkit for fundamental physics and applied fluid engineering

What governs the speed at which raindrops fall, sediment settles in river estuaries, and matter is ejected during a supernova? These questions hinge on one deceptively simple factor: the rate at which a fluid filled with particles mixes with a particle-free one. Raindrops travel from one layer of air to another, sediment falls from river water into seawater, and ejecta travels from the exploding star through the surrounding dust cloud. The same principle governs particle mixing in rising smoke, dust storms, nuclear explosions, hydrocarbon refining, metal smelting, wastewater treatment, and more.

New simulations have now provided researchers and engineers with unprecedented access to these fundamental fluid mechanics. While plainly visible in everyday life, the phenomenon has eluded scientific scrutiny due to its complexity. For the first time, researchers have derived a general formulation of how layers of heavy particles mix and described the common characteristics of the phenomena.

Simone Tandurella, study first author and Ph.D. student in the Complex Fluids and Flows Unit at OIST, explains, “Both the simulations and the model we obtain enable exciting research into a wide range of fundamental physics phenomena, as well as applied research in fluid engineering. They provide the basic puzzle pieces that can help us understand fluid-particle instabilities at large scales.”

Compact vacuum ultraviolet laser may improve nanotechnology and power nuclear clocks

Physicists at the University of Colorado Boulder have demonstrated a new kind of vacuum ultraviolet laser that is 100 to 1,000 times more efficient than existing technologies of its kind. The researchers say the device could one day allow scientists to observe phenomena currently out of reach for even the most powerful microscopes—such as following fuel molecules in real time as they undergo combustion, spotting incredibly small defects in nanoelectronics and more.

The new laser might also allow for practical, ultraprecise nuclear clocks that rely on an energy transition in the nuclei of thorium atoms. These long sought-after devices could, theoretically, allow researchers to robustly track time with unprecedented precision.

The group is led by physicists Henry Kapteyn and Margaret Murnane, fellows of JILA, a joint research institute between CU Boulder and the U.S. National Institute of Standards and Technology (NIST). Jeremy Thurston, who earned his doctorate in physics from CU Boulder in 2024, spearheaded work on the new laser.

National report supports measurement innovation to aid commercial fusion energy and enable new plasma technologies

To operate fusion systems safely and reliably, scientists need to monitor plasma fuel conditions and measure properties like temperature and density that can affect fusion reactions. Making these measurements requires specialized sensors known as diagnostics.

A new report sponsored by the U.S. Department of Energy (DOE) recommends increased investment in America’s fusion diagnostic capabilities, critical technologies that could provide DOE and Congress with information to speed up the delivery of commercial fusion power plants.

The report was produced as part of the DOE’s 2024 Basic Research Needs Workshop on Measurement Innovation, sponsored by the DOE’s Office of Science’s Fusion Energy Sciences (FES) program. It was chaired by Luis Delgado-Aparicio, head of advanced projects at the DOE’s Princeton Plasma Physics Laboratory (PPPL), and co-chaired by Sean Regan, a distinguished scientist and the director of the Experimental Division at the University of Rochester’s Laboratory for Laser Energetics.

Russia forges nuclear steel to brave 1112°F for next-gen reactors

It could solve the corrosion and thermal challenges of lead-cooled reactors.


The development of this steel was conducted under the “Breakthrough” (Proryv) project, which focuses on the implementation of a closed nuclear fuel cycle using fast neutron reactors.

The new steel provides corrosion resistance and thermal stability at temperatures up to 600°C (1,112°F).

According to Sergei Logashov, Director of the Institute of Materials Science at CNIITMASH, the material was designed using computer modeling and data from heavy liquid metal coolant systems.

Tackling industry’s burdensome bubble problem

In industrial plants around the world, tiny bubbles cause big problems. Bubbles clog filters, disrupt chemical reactions, reduce throughput during biomanufacturing, and can even cause overheating in electronics and nuclear power plants. MIT Professor Kripa Varanasi has long studied methods to reduce bubble disruption.

In a new study, Varanasi, along with Ph.D. candidate Bert Vandereydt and former postdoc Saurabh Nath, have uncovered the physics behind a promising type of debubbling membrane material that is “aerophilic”—Greek for “air-loving.” The material can be used in systems of all types, allowing anyone to optimize their machine’s performance by breaking free from bubble-borne disruptions.

“We have figured out the structure of these bubble-attracting membrane materials to allow gas to evacuate in the fastest possible manner,” says Varanasi, the senior author of the study.

Spinning Plasma Solves a Long-Standing Fusion Reactor Mystery

A persistent asymmetry in fusion exhaust has challenged researchers for years. New simulations show that plasma core rotation, working together with cross-field drifts, determines where particles land inside a tokamak. Tokamaks are often described as giant magnetic “doughnuts,” built to keep an ultra-hot plasma confined away from the vessel walls.

Tin isotopes reveal clues to nuclear stability

Separated by an ocean and more than a decade, innovative experiments with 31 tin isotopes having either a surplus or shortage of neutrons show how neutrons influence nuclear stability and element formation. The experiments, conducted between 2002 and 2012 at Oak Ridge National Laboratory and more recently at CERN, provide knowledge that impacts nuclear energy and national security applications.

The earlier, influential ORNL measurements contributed to the American Physical Society naming ORNL’s Holifield Radioactive Ion Beam Facility a historic physics site in 2016. Several resulting publications by ORNL scientists and collaborators examined nuclear energy transitions of isotopes of tin and its neighbors and established the “doubly magic” nature of tin-132: stability resulting from full outer shells of both protons and neutrons.

Recent laser spectroscopy measurements at CERN’s ISOLDE facility by a team of scientists, including Alfredo Galindo-Uribarri of ORNL, combined with ORNL’s earlier Holifield results, have helped physicists understand how nuclear properties change across isotopes. The results, which help theoretical physicists improve models, are published in the journal Physical Review Letters.
