
When AI Builds AI

Leading artificial intelligence companies have started to use their own systems to accelerate research and development, with each generation of AI systems contributing to building the next generation. This report distills points of consensus and disagreement from our July 2025 expert workshop on how far the automation of AI R&D could go, laying bare crucial underlying assumptions and identifying what new evidence could shed light on the trajectory going forward.

Stephen Wolfram: computation is the universe’s OS

Mathematica creator Stephen Wolfram has spent nearly 50 years arguing that simple computational rules underlie everything from animal patterns to the laws of physics. In his 2023 TED talk, he makes the case that computation isn’t just a useful way to model the world — it’s the fundamental operating system of reality itself.

Wolfram introduces “the ruliad,” an abstract concept encompassing all possible computational processes. Space and matter, he argues, consist of discrete elements governed by simple rules. Gravity and quantum mechanics emerge from the same computational framework. The laws of physics themselves are observer-dependent, arising from our limited perspective within an infinite computational structure.

On AI, Wolfram sees large language models as demonstrating deep connections between semantic grammar and computational thinking. The Wolfram Language, he claims, bridges human conceptualization and computational power, letting people operationalize ideas directly — what he calls a “superpower” for thinking and creation.

NVIDIA Offers “Vera” CPU as a Standalone Competitor to Intel’s Xeon and AMD’s EPYC Processors

NVIDIA’s AI hardware push now extends beyond GPUs to general-purpose Arm CPUs. The company is introducing its high-performance “Vera” CPU as a standalone product, marking its first entry as a direct competitor to Intel’s Xeon and AMD’s EPYC server-grade CPUs. NVIDIA CEO Jensen Huang confirmed the new venture in an interview with Bloomberg, stating, “For the very first time, we’re going to be offering Vera CPUs. Vera is such an incredible CPU. We’re going to offer Vera CPUs as a standalone part of the infrastructure. You can now run your computing stack not only on NVIDIA GPUs but also on NVIDIA CPUs. Vera is completely revolutionary… Coreweave will have to act quickly if they want to be the first to implement Vera CPUs. We haven’t announced any of our CPU design wins yet, but there will be many.”

The “Vera” CPU is equipped with 88 custom Armv9.2 “Olympus” cores that use Spatial Multithreading technology, allowing it to handle 176 threads through physical resource partitioning. These custom cores support native FP8 processing via a 6x128-bit SVE2 implementation, enabling some AI workloads to run directly on the CPU. The chip offers 1.2 TB/s of memory bandwidth and supports up to 1.5 TB of LPDDR5X memory, making it well suited to memory-intensive computing tasks. However, with the CPU now being offered as a standalone solution, it is unclear whether there will be any conventional memory options such as DDR5 RDIMMs, or whether the CPU will rely solely on SOCAMM LPDDR5X. A second-generation Scalable Coherency Fabric provides 3.4 TB/s of bisection bandwidth, connecting the cores across a unified monolithic die and avoiding the latency issues common in chiplet architectures. Additionally, NVIDIA has integrated second-generation NVLink Chip-to-Chip technology, delivering up to 1.

Advancing regulatory variant effect prediction with AlphaGenome

What makes it special is its versatility. Where older models might only predict how a mutation affects gene activity, AlphaGenome forecasts thousands of biological outcomes simultaneously—whether a variant will alter how DNA folds, change how proteins dock onto genes, disrupt the splicing machinery that edits genetic messages, or modify histone “spools” that package DNA. It’s essentially a universal translator for genetic regulatory language.


AlphaGenome is a deep learning model designed to learn the sequence basis of diverse molecular phenotypes from human and mouse DNA (Fig. 1a). It simultaneously predicts 5,930 human or 1,128 mouse genome tracks across 11 modalities covering gene expression (RNA-seq, CAGE and PRO-cap), detailed splicing patterns (splice sites, splice site usage and splice junctions), chromatin state (DNase, ATAC-seq, histone modifications and transcription factor binding) and chromatin contact maps. These span a variety of biological contexts, such as different tissue types, cell types and cell lines (see Supplementary Table 1 for the summary and Supplementary Table 2 for the complete metadata). These predictions are made on the basis of 1 Mb of DNA sequence, a context length designed to encompass a substantial portion of the relevant distal regulatory landscape. For instance, 99% (465 of 471) of validated enhancer–gene pairs fall within 1 Mb (ref. 12).
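To make the scale concrete, here is a minimal, purely illustrative Python sketch of the input and output shapes described above. The constants come from the text (1 Mb of context, 5,930 human tracks), but the function names and the random placeholder sequence are hypothetical and do not reflect the actual AlphaGenome interface.

import numpy as np

# Purely illustrative: hypothetical names and shapes matching the description
# above, not the real AlphaGenome interface.
SEQ_LEN = 1_000_000        # 1 Mb of input DNA context
N_HUMAN_TRACKS = 5_930     # human genome tracks across 11 modalities

def one_hot_encode(sequence):
    """Encode an ACGT string as a (length, 4) one-hot matrix; other bases map to zeros."""
    lookup = {"A": 0, "C": 1, "G": 2, "T": 3}
    encoded = np.zeros((len(sequence), 4), dtype=np.float32)
    for i, base in enumerate(sequence.upper()):
        if base in lookup:
            encoded[i, lookup[base]] = 1.0
    return encoded

# A 1 Mb window centred on a position of interest (random placeholder sequence here).
window = "".join(np.random.choice(list("ACGT"), SEQ_LEN))
x = one_hot_encode(window)   # shape: (1_000_000, 4)

# A trained model maps this single input to many outputs at once, e.g.
# per-base coverage tracks such as RNA-seq (up to N_HUMAN_TRACKS of them)
# and binned 2D chromatin contact maps at 2,048-bp resolution.
print(x.shape)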

AlphaGenome uses a U-Net-inspired (refs. 2,13) backbone architecture (Fig. 1a and Extended Data Fig. 1a) to efficiently process input sequences into two types of sequence representations: one-dimensional embeddings (at 1-bp and 128-bp resolutions), which correspond to representations of the linear genome, and two-dimensional embeddings (2,048-bp resolution), which correspond to representations of spatial interactions between genomic segments. The one-dimensional embeddings serve as the basis for genomic track predictions, whereas the two-dimensional embeddings are the basis for predicting pairwise interactions (contact maps). Within the architecture, convolutional layers model local sequence patterns necessary for fine-grained predictions, whereas transformer blocks model coarser but longer-range dependencies in the sequence, such as enhancer–promoter interactions.
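For readers who think in code, the following PyTorch sketch shows one way a backbone of this general shape could be wired up. The layer sizes, pooling factor and the 4,096-bp toy input are invented for brevity; this is not the AlphaGenome architecture, only an illustration of the pattern: local convolutions at base-pair resolution, pooling to coarser bins, long-range attention, and a derived 2D pairwise embedding.

import torch
import torch.nn as nn

class ToyGenomeBackbone(nn.Module):
    """Minimal sketch of a conv + transformer sequence backbone (hypothetical sizes)."""

    def __init__(self, channels=64):
        super().__init__()
        # Convolutions capture local sequence patterns at base-pair resolution.
        self.local_conv = nn.Conv1d(4, channels, kernel_size=15, padding=7)
        # Strided pooling to a coarser per-bin resolution (toy factor: 128).
        self.pool = nn.MaxPool1d(kernel_size=128, stride=128)
        # Transformer layers model long-range dependencies, such as
        # enhancer-promoter interactions, on the pooled sequence.
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        # x: (batch, seq_len, 4) one-hot DNA
        h = self.local_conv(x.transpose(1, 2))        # (batch, C, seq_len), fine-grained features
        h_bins = self.pool(h).transpose(1, 2)         # (batch, seq_len // 128, C), binned features
        h_bins = self.transformer(h_bins)             # add long-range context
        # 2D "pair" embedding: broadcast sum over bins, a stand-in for the
        # pairwise representation that would feed a contact-map head.
        pair = h_bins.unsqueeze(1) + h_bins.unsqueeze(2)  # (batch, bins, bins, C)
        return h.transpose(1, 2), h_bins, pair

model = ToyGenomeBackbone()
x = torch.zeros(1, 4096, 4)                           # short toy sequence, not 1 Mb
emb_1bp, emb_binned, emb_pair = model(x)
print(emb_1bp.shape, emb_binned.shape, emb_pair.shape)

In the real model, as described above, the fine-grained and binned one-dimensional embeddings feed the genomic-track heads, while the two-dimensional pairwise embedding feeds the contact-map predictions.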

Researchers discover hundreds of cosmic anomalies with help from AI

A team of astronomers has used a new AI-assisted method to search for rare astronomical objects in the Hubble Legacy Archive. The team sifted through nearly 100 million image cutouts in just two and a half days, uncovering nearly 1,400 anomalous objects, more than 800 of which had never been documented before.
