
Michael Freedman | The Poincaré Conjecture and Mathematical Discovery

Millennium Prize Problems Lecture 9/17/2025
Speaker: Michael Freedman, Harvard CMSA and Logical Intelligence.

Title: The Poincaré Conjecture and Mathematical Discovery.

Abstract: The AI age requires us to re-examine what mathematics is about. The Seven Millennium Problems provide an ideal lens for doing so. Five of the seven are core mathematical questions; two are meta-mathematical, asking about the scope of mathematics. The Poincaré conjecture represents one of the core subjects, manifold topology. I’ll explain what it is about, its broader context, and why people cared so much about finding a solution, which ultimately arrived through the work of R. Hamilton and G. Perelman. Although stated in manifold topology, the proof requires vast developments in the theory of parabolic partial differential equations, some of which I will sketch. Like most powerful techniques, the methods survive their original objectives and are now deployed widely in both three- and four-dimensional manifold topology.
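For readers who want the formula at the heart of the Hamilton–Perelman program mentioned above, the central object is the Ricci flow, a nonlinear parabolic equation that evolves a Riemannian metric by its Ricci curvature, smoothing geometry much as the heat equation smooths temperature. The display below is the standard statement of that equation, offered as background rather than as an excerpt from the lecture.

```latex
% Hamilton's Ricci flow: the metric g evolves by minus twice its Ricci curvature,
% a nonlinear parabolic PDE analogous to a heat equation for the geometry.
\[
  \frac{\partial}{\partial t}\, g_{ij}(t) \;=\; -2\, R_{ij}\bigl(g(t)\bigr)
\]
% Perelman's program runs this flow on a closed 3-manifold, performing surgery
% along necks as singularities form, until the geometric pieces are revealed.
```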

A Review of Artificial Intelligence-Based Down Syndrome Detection Techniques

In this section, the authors present the findings of this review. The findings are categorized by data modality, showcasing the effectiveness of AI models in terms of evaluation metrics. Figure 1 summarizes the extraction process, providing a clear representation of the progression from article identification to final selection of studies. The initial search yielded a total of 1,175 articles. The subsequent screening process excluded irrelevant articles based on the inclusion and exclusion criteria. After meticulous filtering of the literature, 25 studies were deemed suitable for inclusion in this review.

A chronology of research studies on the use of AI in DS diagnosis is shown in Figure 2. This timeline highlights considerable growth in academic interest over the years. A single study was published per year between 2013 and 2017; technical restrictions and limited dataset availability constrained these early attempts to integrate AI into DS diagnosis. Research activity has since grown steadily, driven by advances in machine learning and deep learning technologies, and reached a milestone in 2021. These developments signal increasing confidence in the ability of artificial intelligence to address challenging diagnostic problems. The year 2021 marked a high point with four studies, indicating a surge of innovation, likely the result of improved computing tools and a more extensive understanding of the usefulness of artificial intelligence in the medical field. However, the minor decline in 2022 and 2023, with three studies, may indicate difficulties in maintaining this rapid pace of research. These challenges may include restricted access to diverse datasets or limitations to clinical adoption.

In 2024, there was a significant increase in DS diagnostic approaches, reaching a total of seven studies. This increase reflects developments in AI algorithms, collaborations across diverse fields, and the growing role of AI in medical diagnosis. It demonstrates the increased academic and multidisciplinary interest in developing effective AI-powered DS detection models. In addition, the increasing trajectory highlights the importance of sustained research efforts to overcome the current challenges in implementing AI applications in the healthcare sector.

Novel AI tool opens 3D modeling to blind and low-vision programmers

Blind and low-vision programmers have long been locked out of three-dimensional modeling software, which depends on sighted users dragging, rotating and inspecting shapes on screen.

Now, a multiuniversity research team has developed A11yShape, a new tool designed to help blind and low-vision programmers independently create, inspect and refine three-dimensional models. The study is published on the arXiv preprint server.

The team consists of Anhong Guo, assistant professor of electrical engineering and computer science at the University of Michigan, and researchers from the University of Texas at Dallas, University of Washington, Purdue University and several partner institutions—including Gene S-H Kim of Stanford University, a member of the blind and low-vision community.

Algorithm reveals ‘magic sizes’ for assembling programmable icosahedral shells at minimal cost

Over the past decade, experts in nanotechnology and materials science have been trying to devise architectures composed of small structures that spontaneously arrange themselves into specific patterns. Some of these architectures are based on so-called icosahedral shells, structures with 20 different triangular faces that are symmetrically arranged.

2025 Nobel Prize in Physics Peer Review

Introduction.

Grounded in the scientific method, this review critically examines the work’s methodology, empirical validity, broader implications, and opportunities for advancement, aiming to foster deeper understanding and iterative progress in quantum technologies.

Executive Summary.

This work, based on experiments conducted in 1984–1985, addresses a fundamental question in quantum physics: the scale at which quantum effects persist in macroscopic systems.

By engineering a Josephson junction-based circuit where billions of Cooper pairs behave collectively as a single quantum entity, the laureates provided empirical evidence that quantum phenomena like tunneling through energy barriers and discrete energy levels can manifest in human-scale devices.

This breakthrough bridges microscopic quantum mechanics with macroscopic engineering, laying foundational groundwork for advancements in quantum technologies such as quantum computing, cryptography, and sensors.
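As background (not taken from the review itself), the standard description of such a circuit uses the two Josephson relations for the superconducting phase difference across the junction, together with the tilted "washboard" potential that a bias current creates for that phase. Macroscopic quantum tunneling of this single collective phase variable out of a well of the washboard, and the discrete energy levels within a well, are the effects the 1984–1985 experiments observed.

```latex
% The two Josephson relations: supercurrent and voltage in terms of the
% superconducting phase difference phi across the junction.
\[
  I = I_c \sin\varphi, \qquad \frac{d\varphi}{dt} = \frac{2eV}{\hbar}
\]
% A bias current I_b tilts the effective ("washboard") potential for phi:
\[
  U(\varphi) = -E_J\!\left(\cos\varphi + \frac{I_b}{I_c}\,\varphi\right),
  \qquad E_J = \frac{\hbar I_c}{2e}
\]
% Macroscopic quantum tunneling: the collective phase escapes through the
% barrier of U, and the quantized states in each well appear as discrete levels.
```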

Overall strengths include rigorous experimental validation and profound implications for quantum information science, though gaps exist in scalability to room-temperature applications and full mitigation of environmental decoherence.

Framed within the broader context, this award highlights the enduring evolution of quantum mechanics from theoretical curiosity to practical innovation, building on prior Nobel-recognized discoveries like the Josephson effect (1973) and superconductivity mechanisms (1972).

Topsicle: a method for estimating telomere length from whole genome long-read sequencing data

Long read sequencing technology (advanced by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (Nanopore)) is revolutionizing the genomics field [43], and it has major potential to be a powerful tool for investigating telomere length variation within populations and between species. Read lengths from long read sequencing platforms are orders of magnitude longer than those from short read platforms (tens of kilobase pairs versus 100–300 bp). These long reads have greatly aided in resolving the complex and highly repetitive regions of the genome [44], and near gapless genome assemblies (also known as telomere-to-telomere assemblies) have been generated for multiple organisms [45, 46]. Long reads can also be used for estimating telomere length, since whole genome sequencing on a long read platform yields reads that span the entire telomere and subtelomere region. Computational methods can then be developed to determine the telomere–subtelomere boundary and use it to estimate the telomere length. As an example, telomere-to-telomere assemblies have been used for estimating telomere length by analyzing the sequences at the start and end of the gapless chromosome assembly [47,48,49,50]. But generating gapless genome assemblies is resource intensive and impractical for estimating telomere lengths across many individuals. Alternatively, methods such as TLD [51], Telogator [52], and TeloNum [53] analyze raw long read sequences to estimate telomere lengths. These methods require a known telomere repeat sequence, but this can be determined through k-mer based analysis [54]. Specialized methods have also been developed to enrich for long reads originating from chromosome ends. These methods involve attaching sequencing adapters that are complementary to the single-stranded 3′ G-overhang of the telomere, which can subsequently be used for selectively amplifying the chromosome ends for long read sequencing [55,56,57,58]. While these methods can enrich telomeric long reads, they require protocol optimization (e.g., designing the adapter sequence to target the G-overhang) and are difficult to apply to organisms with naturally blunt-ended telomeres [59, 60].

An explosion of long read sequencing data has been generated for many organisms across the animal and plant kingdoms [61, 62]. A computational method that can use this abundant long read sequencing data to estimate telomere length with minimal requirements would be a powerful toolkit for investigating the biology of telomere length variation. But so far, such a method is not available, and implementing one requires addressing two major algorithmic considerations before it can be widely used across many different organisms. The first algorithmic consideration is the ability to analyze the diverse telomere sequence variation across the tree of life. All vertebrates share an identical telomere repeat motif, TTAGGG [63], and most previous long read sequencing based computational methods were largely designed for analyzing human genomic datasets, with algorithms optimized for the TTAGGG telomere motif. But the telomere repeat motif is highly diverse across the animal and plant kingdoms [64,65,66,67], and there are even species in fungi and plants that utilize a mix of repeat motifs, resulting in a telomere structure with complex sequence composition [64, 68, 69]. A new computational method would need to accommodate the diverse telomere repeat motifs, especially across the inherently noisy and error-prone long read sequencing data [70]. With recent improvements in sequencing chemistry and technology (HiFi sequencing for PacBio and the Q20+ chemistry kit for Nanopore), error rates have been substantially reduced to around 1% [71, 72]. But even with this low error rate, a telomeric region that is several kilobase pairs long can harbor substantial erroneous sequences across the read [73] and hinder the identification of the correct telomere–subtelomere boundary. In addition, long read sequencers are especially error-prone in repetitive homopolymer sequences [74,75,76], and the GT-rich microsatellite telomere sequences are predicted to be an especially erroneous region for long read sequencing. A second algorithmic consideration relates to identifying the telomere–subtelomere boundary. Prior long read sequencing based methods [51, 52] have used sliding windows to calculate summary statistics and a threshold to determine the boundary between the telomere and subtelomere. Sliding window and threshold based analyses are commonly used in genome analysis, but they place the burden on the user to determine the appropriate cutoff, which for telomere length estimation may differ depending on the sequenced organism. In addition, threshold based sliding window scans can inflate both false positive and false negative results [77,78,79,80,81,82] if the cutoff is improperly determined.

Here, we introduce Topsicle, a computational method that uses a novel strategy to estimate telomere lengths from raw long reads drawn from the entire whole genome sequencing library. Methodologically, Topsicle iterates through different substring sizes of the telomere repeat sequence (i.e., telomere k-mers) and different phases of each telomere k-mer to summarize the telomere repeat content of each sequencing read. The k-mer based summary statistics of telomere repeats are then used to select long reads originating from telomeric regions. Topsicle uses those putative telomeric reads to estimate the telomere length by determining the telomere–subtelomere boundary through a binary segmentation change point detection analysis [83]. We demonstrate the high accuracy of Topsicle through simulations and apply our new method to long read sequencing datasets from three evolutionarily diverse plant species (A. thaliana, maize, and Mimulus) and human cancer cell lines. We believe Topsicle will enable high-resolution exploration of telomere length in more species and a broader understanding of the genetics and evolution underlying telomere length variation.
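To make the flow of such a pipeline concrete, below is a minimal, hypothetical Python sketch in the same spirit; it is not Topsicle's code. Each read is scored by its telomeric k-mer fraction in fixed windows, and a single least-squares change point (the first split of a binary segmentation) approximates the telomere–subtelomere boundary. The TTAGGG motif, the k-mer size, and the window size used here are illustrative assumptions.

```python
# Hypothetical sketch of a Topsicle-like pipeline (illustrative only; not the
# authors' implementation). Score windows of a read by telomere k-mer content,
# then place one change point by least-squares binary segmentation.

import numpy as np

MOTIF = "TTAGGG"   # vertebrate telomere repeat, assumed for illustration
K = 6              # k-mer size equal to the motif length (assumption)
WINDOW = 100       # window size in bp (assumption)

def telomere_kmers(motif: str, k: int) -> set:
    """All k-mers of every rotation (phase) of the telomere motif."""
    doubled = motif * ((k // len(motif)) + 2)
    return {doubled[i:i + k] for i in range(len(motif))}

def window_scores(read: str, kmers: set) -> np.ndarray:
    """Fraction of telomeric k-mers in each non-overlapping window."""
    scores = []
    for start in range(0, len(read) - WINDOW + 1, WINDOW):
        win = read[start:start + WINDOW]
        hits = sum(win[i:i + K] in kmers for i in range(len(win) - K + 1))
        scores.append(hits / (len(win) - K + 1))
    return np.array(scores)

def single_changepoint(signal: np.ndarray) -> int:
    """Least-squares single change point (first split of binary segmentation)."""
    best_idx, best_cost = 1, float("inf")
    for i in range(1, len(signal)):
        left, right = signal[:i], signal[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

if __name__ == "__main__":
    # Toy read: ~1.2 kb of telomere repeats followed by random subtelomere sequence.
    rng = np.random.default_rng(0)
    telomere = (MOTIF * 200)[:1200]
    subtelomere = "".join(rng.choice(list("ACGT"), size=3000))
    read = telomere + subtelomere

    scores = window_scores(read, telomere_kmers(MOTIF, K))
    boundary_window = single_changepoint(scores)
    print("estimated telomere length ~", boundary_window * WINDOW, "bp")
```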

Researchers develop the first miniaturized ultraviolet spectrometer chip

Recently, the iGaN Laboratory led by Professor Haiding Sun at the School of Microelectronics, University of Science and Technology of China (USTC), together with the team of academician Sheng Liu from Wuhan University, has successfully developed the world’s first miniaturized ultraviolet (UV) spectrometer chip and realized on-chip spectral imaging.

Based on a novel gallium nitride (GaN) cascaded photodiode architecture and integrated with deep neural network (DNN) algorithms, the device achieves high-precision spectral detection and high-resolution multispectral imaging.

With a response speed on the nanosecond scale, it sets a new world record for the fastest reported miniaturized spectrometer. The work, titled “A miniaturized cascaded-diode-array spectral imager,” was published online in Nature Photonics on September 26, 2025.

Matter wave

Schrödinger applied Hamilton’s optico-mechanical analogy to develop his wave mechanics for subatomic particles. [67]: xi Consequently, wave solutions to the Schrödinger equation share many properties with results of light wave optics. In particular, Kirchhoff’s diffraction formula works well for electron optics [29]: 745 and for atomic optics. [68] The approximation works well as long as the electric fields change more slowly than the de Broglie wavelength. Macroscopic apparatus fulfill this condition; slow electrons moving in solids do not.
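For reference, the de Broglie wavelength that sets the length scale in this condition is given by the standard relation below (a reminder added here, not part of the excerpt above).

```latex
% de Broglie wavelength of a particle with momentum p; h is Planck's constant.
\[
  \lambda = \frac{h}{p} = \frac{h}{mv} \quad (\text{non-relativistic particle of mass } m)
\]
% In solids the microscopic electric fields vary over atomic distances, which is
% not slow compared with a conduction electron's de Broglie wavelength, so the
% diffraction-formula approximation breaks down there.
```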

Cracking a long-standing weakness in a classic algorithm for programming reconfigurable chips

Researchers from EPFL, AMD, and the University of Novi Sad have uncovered a long-standing inefficiency in the algorithm that programs millions of reconfigurable chips used worldwide, a discovery that could reshape how future generations of these chips are designed and programmed.

Many industries, including telecoms, automotive and aerospace, rely on a special breed of chip called the Field-Programmable Gate Array (FPGA). Unlike traditional chips, FPGAs can be reconfigured almost endlessly, making them invaluable in fast-moving fields where designing a custom chip would take years and cost a fortune. But this flexibility comes with a catch: FPGA efficiency depends heavily on the software used to program them.

Since the late 1990s, an algorithm known as PathFinder has been the backbone of FPGA routing. Its job: connecting thousands of tiny circuit components without creating overlaps.
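For readers unfamiliar with it, PathFinder works by negotiated congestion: every net is repeatedly ripped up and re-routed by a shortest-path search in which the cost of a routing node grows with both its present overuse and its history of past overuse, so nets gradually negotiate over contested wires. The sketch below is a simplified, hypothetical rendering of that idea; the graph representation, cost weights, and tiny demo fabric are assumptions for illustration, not code from any actual FPGA toolchain.

```python
# Simplified, hypothetical sketch of PathFinder-style negotiated-congestion
# routing (illustrative only; not the code of any real FPGA tool).

import heapq

def route_net(graph, cost, source, sink):
    """Dijkstra shortest path from source to sink using per-node costs."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == sink:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt in graph[node]:
            nd = d + cost(nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    # Reconstruct the sequence of routing nodes used by this net.
    path, node = [sink], sink
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def pathfinder(graph, nets, capacity, max_iters=50):
    """Rip up and re-route all nets each iteration, raising costs on overused
    nodes until no node exceeds its capacity (negotiated congestion)."""
    hist = {n: 0.0 for n in graph}   # accumulated historical congestion cost
    routes = {}
    for it in range(max_iters):
        usage = {n: 0 for n in graph}
        for name, (src, snk) in nets.items():
            pres = lambda n: max(0, usage[n] + 1 - capacity[n])  # overuse if we take n
            cost = lambda n: 1.0 * (1.0 + hist[n]) * (1.0 + 0.5 * pres(n))
            routes[name] = route_net(graph, cost, src, snk)
            for n in routes[name]:
                usage[n] += 1
        overused = [n for n in graph if usage[n] > capacity[n]]
        for n in overused:
            hist[n] += usage[n] - capacity[n]   # remember past congestion
        if not overused:
            return routes, it + 1
    return routes, max_iters

if __name__ == "__main__":
    # Tiny demo fabric: two nets must share intermediate nodes a and b,
    # each of which can carry only one net.
    graph = {"s1": ["a", "b"], "s2": ["a", "b"], "a": ["t1", "t2"],
             "b": ["t1", "t2"], "t1": [], "t2": []}
    capacity = {n: 1 for n in graph}
    nets = {"n1": ("s1", "t1"), "n2": ("s2", "t2")}
    routes, iters = pathfinder(graph, nets, capacity)
    print(iters, routes)
```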

Vision-Zero: Scalable VLM Self-Improvement via Strategic Gamified Self-Play, by Qinsi Wang and 8 other authors

Although reinforcement learning (RL) can effectively enhance the reasoning capabilities of vision-language models (VLMs), current methods remain heavily dependent on labor-intensive datasets that require extensive manual construction and verification, leading to extremely high training costs and constraining the practical deployment of VLMs. To address this challenge, we propose Vision-Zero, a domain-agnostic framework enabling VLM self-improvement through competitive visual games generated from arbitrary image pairs. Specifically, Vision-Zero has three main attributes. (1) Strategic self-play framework: Vision-Zero trains VLMs in “Who Is the Spy”-style games, where the models engage in strategic reasoning and actions across multiple roles; through interactive gameplay, models autonomously generate their training data without human annotation. (2) Gameplay from arbitrary images: unlike existing gamified frameworks, Vision-Zero can generate games from arbitrary images, thereby enhancing the model’s reasoning ability across diverse domains and showing strong generalization to different tasks; we demonstrate this versatility using three distinct types of image datasets: CLEVR-based synthetic scenes, charts, and real-world images. (3) Sustainable performance gain: we introduce Iterative Self-Play Policy Optimization (Iterative-SPO), a novel training algorithm that alternates between self-play and reinforcement learning with verifiable rewards (RLVR), mitigating the performance plateau often seen in self-play-only training and achieving sustained long-term improvements. Despite using label-free data, Vision-Zero achieves state-of-the-art performance on reasoning, chart question answering, and vision-centric understanding tasks, surpassing other annotation-based methods. Models and code have been released at https://github.com/wangqinsi1/Vision-Zero.
