
Advancing regulatory variant effect prediction with AlphaGenome

What makes it special is its versatility. Where older models might only predict how a mutation affects gene activity, AlphaGenome forecasts thousands of biological outcomes simultaneously—whether a variant will alter how DNA folds, change how proteins dock onto genes, disrupt the splicing machinery that edits genetic messages, or modify histone “spools” that package DNA. It’s essentially a universal translator for genetic regulatory language.


AlphaGenome is a deep learning model designed to learn the sequence basis of diverse molecular phenotypes from human and mouse DNA (Fig. 1a). It simultaneously predicts 5,930 human or 1,128 mouse genome tracks across 11 modalities covering gene expression (RNA-seq, CAGE and PRO-cap), detailed splicing patterns (splice sites, splice site usage and splice junctions), chromatin state (DNase, ATAC-seq, histone modifications and transcription factor binding) and chromatin contact maps. These span a variety of biological contexts, such as different tissue types, cell types and cell lines (see Supplementary Table 1 for the summary and Supplementary Table 2 for the complete metadata). These predictions are made on the basis of 1 Mb of DNA sequence, a context length designed to encompass a substantial portion of the relevant distal regulatory landscape. For instance, 99% (465 of 471) of validated enhancer–gene pairs fall within 1 Mb (ref. 12).
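Sequence-to-function models like this one typically take the raw DNA window as a one-hot matrix, one row per base pair. As a minimal sketch of what a 1-Mb input might look like before it reaches the model (this is illustrative, not AlphaGenome's actual preprocessing code):

```python
import numpy as np

# Illustrative only: one-hot encode a DNA window into an (L, 4) matrix,
# the standard input representation for sequence-to-function models.
BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_encode(sequence: str) -> np.ndarray:
    """Map a DNA string to an (L, 4) one-hot matrix; 'N' (or any
    non-ACGT character) becomes an all-zero row."""
    encoded = np.zeros((len(sequence), 4), dtype=np.float32)
    for i, base in enumerate(sequence.upper()):
        j = BASE_INDEX.get(base)
        if j is not None:
            encoded[i, j] = 1.0
    return encoded

window = one_hot_encode("ACGTN" * 4)  # a toy 20-bp window
print(window.shape)                   # (20, 4)
print(window[4].sum())                # 0.0 -- the 'N' row is all zeros
```

At the paper's stated context length the same call would produce a (1,000,000, 4) matrix per input window.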

AlphaGenome uses a U-Net-inspired (refs. 2,13) backbone architecture (Fig. 1a and Extended Data Fig. 1a) to efficiently process input sequences into two types of sequence representations: one-dimensional embeddings (at 1-bp and 128-bp resolutions), which correspond to representations of the linear genome, and two-dimensional embeddings (2,048-bp resolution), which correspond to representations of spatial interactions between genomic segments. The one-dimensional embeddings serve as the basis for genomic track predictions, whereas the two-dimensional embeddings are the basis for predicting pairwise interactions (contact maps). Within the architecture, convolutional layers model local sequence patterns necessary for fine-grained predictions, whereas transformer blocks model coarser but longer-range dependencies in the sequence, such as enhancer–promoter interactions.
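The resolution hierarchy described above can be sketched at the shape level: a per-base embedding is pooled down to coarser resolutions, and a two-dimensional embedding is formed from pairs of coarse segments. This is a toy sketch of the tensor shapes involved, not AlphaGenome's actual layers (the real model uses learned convolutions and transformer blocks, and a much wider embedding than the toy `D = 8` here):

```python
import numpy as np

# Hypothetical shape-level sketch of the embedding hierarchy:
# 1-bp and 128-bp one-dimensional embeddings, plus a 2,048-bp-resolution
# pairwise embedding for contact maps. Not the real architecture.
SEQ_LEN = 2**20   # ~1 Mb input window
D = 8             # toy embedding width

rng = np.random.default_rng(0)
emb_1bp = rng.standard_normal((SEQ_LEN, D)).astype(np.float32)

def pool(x: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool a (length, channels) embedding along length by `factor`."""
    length, channels = x.shape
    return x.reshape(length // factor, factor, channels).mean(axis=1)

emb_128bp = pool(emb_1bp, 128)   # coarse 1-D embedding for 128-bp tracks
emb_2048 = pool(emb_1bp, 2048)   # per-segment summary at 2,048-bp resolution

# 2-D embedding: one vector per pair of 2,048-bp segments, built here by
# a simple elementwise sum of the two segment embeddings.
pair_emb = emb_2048[:, None, :] + emb_2048[None, :, :]

print(emb_1bp.shape)    # (1048576, 8)  -> basis for base-resolution tracks
print(emb_128bp.shape)  # (8192, 8)     -> basis for 128-bp tracks
print(pair_emb.shape)   # (512, 512, 8) -> basis for contact-map predictions
```

The point of the sketch is the bookkeeping: a 1-Mb window yields 8,192 positions at 128-bp resolution and a 512 × 512 grid of segment pairs at 2,048-bp resolution, which matches the contact-map output geometry the paper describes.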

Researchers discover hundreds of cosmic anomalies with help from AI

A team of astronomers has used a new AI-assisted method to search for rare astronomical objects in the Hubble Legacy Archive. The team sifted through nearly 100 million image cutouts in just two and a half days, uncovering nearly 1,400 anomalous objects, more than 800 of which had never been documented before.
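A quick back-of-envelope check on the figures above (the per-second rate and anomaly fraction are derived here, not reported by the team):

```python
# Throughput implied by the reported figures: ~100 million cutouts
# screened in two and a half days, ~1,400 flagged as anomalous.
cutouts = 100_000_000
seconds = 2.5 * 24 * 3600        # two and a half days in seconds
rate = cutouts / seconds
print(round(rate))               # ~463 cutouts screened per second

anomaly_fraction = 1400 / cutouts
print(f"{anomaly_fraction:.2e}")  # ~1.4e-05, i.e. ~14 hits per million
```

The derived rate of several hundred cutouts per second underlines why this kind of archive-scale search is only practical with automated screening.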

Initial access hackers switch to Tsundere Bot for ransomware attacks

A prolific initial access broker tracked as TA584 has been observed using Tsundere Bot alongside the XWorm remote access trojan to gain network access that could lead to ransomware attacks.

Proofpoint researchers have been tracking TA584’s activity since 2020 and say that the threat actor has significantly increased its operations recently, introducing a continuous attack chain that undermines static detection.

Tsundere Bot was first documented by Kaspersky last year and attributed to a Russian-speaking operator with links to the 123 Stealer malware.

New sandbox escape flaw exposes n8n instances to RCE attacks

Two vulnerabilities in the n8n workflow automation platform could allow attackers to fully compromise affected instances, access sensitive data, and execute arbitrary code on the underlying host.

Identified as CVE-2026-1470 and CVE-2026-0863, the vulnerabilities were discovered and reported by researchers at DevSecOps company JFrog.

Despite requiring authentication, CVE-2026-1470 received a critical severity score of 9.9 out of 10. JFrog explained that the critical rating was due to arbitrary code execution occurring in n8n’s main node, which allows complete control over the n8n instance.

Wave of Suicides Hits as India’s Economy Is Ravaged by AI

As Rest of World reports, rising anxiety over the influence of AI, on top of already-grueling 90-hour workweeks, has proven devastating for workers. While it’s hard to single out a definitive cause, a troubling wave of suicides among tech workers highlights these unsustainable conditions.

Complicating the picture is a lack of clear government data on the tragic deaths. While it’s impossible to tell whether they are more prevalent among IT workers, experts told Rest of World that the mental health situation in the tech industry is nonetheless “very alarming.”

The prospect of AI making their careers redundant is a major stressor, with tech workers facing a “huge uncertainty about their jobs,” as Indian Institute of Technology Kharagpur senior professor of computer science and engineering Jayanta Mukhopadhyay told Rest of World.

Deep-learning algorithms enhance mutation detection in cancer and RNA sequencing

Researchers from the Faculty of Engineering at The University of Hong Kong (HKU) have developed two innovative deep-learning algorithms, ClairS-TO and Clair3-RNA, that significantly advance genetic mutation detection in cancer diagnostics and RNA-based genomic studies.

The research team, led by Professor Ruibang Luo of the School of Computing and Data Science, Faculty of Engineering, developed the two algorithms to improve genetic analysis in both clinical and research settings.

Leveraging long-read sequencing technologies, these tools significantly improve the accuracy of detecting genetic mutations in complex samples, opening new horizons for precision medicine and genomic discovery. Both research articles have been published in Nature Communications.

Radiowaves enable energy-efficient AI on edge devices without heavy hardware

As drones survey forests, robots navigate warehouses and sensors monitor city streets, more of the world’s decision-making is occurring autonomously on the edge—on the small devices that gather information at the ends of much larger networks.

But making that shift to edge computing is harder than it seems. Although artificial intelligence (AI) models continue to grow larger and smarter, the hardware inside these devices remains tiny.

Engineers typically have two options, neither of which is ideal. Storing an entire AI model on the device requires significant memory, data movement and computing power that drains batteries. Offloading the model to the cloud avoids those hardware constraints, but the back-and-forth introduces lag, burns energy and presents security risks.
