
A New Ingredient for Quantum Error Correction

Entanglement and so-called magic states have long been viewed as the key resources for quantum error correction. Now contextuality, a hallmark of quantum theory, joins them as a complementary resource.

Machines make mistakes, and as they scale up, so too do the opportunities for error. Quantum computers are no exception; in fact, their errors are especially frequent and difficult to control. This fragility has long been a central obstacle to building large-scale devices capable of practical, universal quantum computation. Quantum error correction attempts to circumvent this obstacle, not by eliminating sources of error but by encoding quantum information in such a way that errors can be detected and corrected as they occur [1]. In doing so, the approach enables fault-tolerant quantum computation. Over the past few decades, researchers have learned that this robustness relies on intrinsically quantum resources, most notably, entanglement [2] and, more recently, so-called magic states [3].
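
To make the detect-and-correct idea concrete, here is a minimal sketch of the textbook three-qubit bit-flip repetition code, simulated with plain NumPy. It is an illustrative toy under my own assumptions, not the contextuality-based construction the article discusses, and all function names are hypothetical.

```python
# Toy simulation of the three-qubit bit-flip repetition code (illustrative only).
import numpy as np

def encode(alpha, beta):
    """Encode a|0> + b|1> into the logical state a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def flip(state, qubit):
    """Apply a bit-flip (Pauli-X) error to one of the three physical qubits."""
    out = np.zeros_like(state)
    for idx in range(8):
        out[idx ^ (1 << qubit)] = state[idx]
    return out

def syndrome(state):
    """Compute the parities Z0Z1 and Z1Z2 from the simulated statevector.
    In a real device these would be measured via ancilla qubits, without
    learning or disturbing the encoded amplitudes."""
    s01 = s12 = 0
    for idx in range(8):
        if abs(state[idx]) > 1e-12:
            b0, b1, b2 = idx & 1, (idx >> 1) & 1, (idx >> 2) & 1
            s01, s12 = b0 ^ b1, b1 ^ b2
    return s01, s12

def correct(state):
    """Use the syndrome to identify and undo a single bit-flip error."""
    lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    qubit = lookup[syndrome(state)]
    return state if qubit is None else flip(state, qubit)

# Usage: encode, corrupt one qubit, then recover the original logical state.
alpha, beta = 0.6, 0.8
logical = encode(alpha, beta)
corrupted = flip(logical, 1)   # error on the middle qubit
recovered = correct(corrupted)
assert np.allclose(recovered, logical)
```

The point of the sketch is that the parity checks reveal which qubit was flipped without revealing, or disturbing, the encoded amplitudes, which is the basic mechanism that error-correcting codes exploit.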

Quantum batteries could quadruple qubit capacity while reducing energy infrastructure requirements

Scientists have unveiled a new approach to powering quantum computers using quantum batteries—a breakthrough that could make future computers faster, more reliable, and more energy efficient.

Quantum computers rely on the rules of quantum physics to solve problems that could transform computing, medicine, energy, finance, communications, and many other fields in the years ahead.

But sustaining their delicate quantum states typically requires room-sized, energy-intensive cryogenic cooling systems, along with supporting room-temperature electronics.

Chip-sized optical amplifier can intensify light 100-fold with minimal energy

Light does a lot of work in the modern world, enabling all types of information technology, from TVs to satellites to fiber-optic cables that carry the internet across oceans. Stanford physicists recently found a way to make that light work even harder with an optical amplifier that requires very little energy and sacrifices no bandwidth, all on a device the size of a fingertip.

Similar to sound amplifiers, optical amplifiers take a light signal and intensify it. Current small-sized optical amplifiers need a lot of power to function. The new optical amplifier, detailed in the journal Nature, solves this problem by using a method that essentially recycles the energy used to power it.

“We’ve demonstrated, for the first time, a truly versatile, low-power optical amplifier, one that can operate across the optical spectrum and is efficient enough that it can be integrated on a chip,” said Amir Safavi-Naeini, the study’s senior author and associate professor of physics in Stanford’s School of Humanities and Sciences. “That means we can now build much more complex optical systems than were possible before.”

Exclusive: Nvidia to reportedly shift 2028 chip production to Intel, reshaping TSMC strategy

TSMC’s dominance in advanced process technologies and packaging has made it a prime target amid US manufacturing mandates. Chip customers now face mounting pressure to diversify supply chains due to cost and capacity constraints, accelerating the shift toward multi-sourcing strategies.

Recent supply chain reports reveal that Nvidia, alongside Apple, plans to collaborate with Intel on its 2028 Feynman architecture platform. Both companies are targeting “low volume, low-tier, non-core” production runs to align with Trump administration directives while preserving their core TSMC (2330.TW) relationships. This dual-foundry approach is designed to minimize mass production risks while satisfying political pressures.

Milky Way is embedded in a ‘large-scale sheet’ of dark matter, which explains motions of nearby galaxies

Computer simulations carried out by astronomers from the University of Groningen in collaboration with researchers from Germany, France and Sweden show that most of the (dark) matter beyond the Local Group of galaxies (which includes the Milky Way and the Andromeda galaxy) must be organized in an extended plane. Above and below this plane are large voids. The observed motions of nearby galaxies and the joint masses of the Milky Way and the Andromeda galaxy can only be properly explained with this “flat” mass distribution. The research, led by Ph.D. graduate Ewoud Wempe and Professor Amina Helmi, is published in Nature Astronomy.

Almost a century ago, astronomer Edwin Hubble discovered that virtually all galaxies are moving away from the Milky Way. This is important evidence for the expansion of the universe and for the Big Bang. But even in Hubble’s time, it was clear that there were exceptions. For example, our neighboring galaxy, Andromeda, is moving toward us at a speed of about 100 kilometers per second.

In fact, for half a century, astronomers have been wondering why most large nearby galaxies—with the exception of Andromeda—are moving away from us and do not seem to be affected by the mass and gravity of the so-called Local Group (the Milky Way, the Andromeda galaxy and dozens of smaller galaxies).

3D material mimics graphene’s electron flow for green computing

University of Liverpool researchers have discovered a way to host some of the most significant properties of graphene in a three-dimensional (3D) material, potentially removing the hurdles for these properties to be used at scale in green computing. The work is published in the journal Matter.

Graphene is famous for being incredibly strong, lightweight, and an excellent conductor of electricity; its applications range from electronics to aerospace and medical technologies. However, its two-dimensional (2D) structure makes it mechanically fragile and limits its use in demanding environments and large-scale applications.

Thinking on different wavelengths: New approach to circuit design introduces next-level quantum computing

Quantum computing represents a potential breakthrough technology that could far surpass the technical limitations of modern-day computing systems for some tasks. However, putting together practical, large-scale quantum computers remains challenging, particularly because of the complex and delicate techniques involved.

In some quantum computing systems, single ions (charged atoms such as strontium) are trapped and manipulated with electromagnetic fields, including laser light, to perform calculations. Such circuits require many different wavelengths of light to be delivered to different positions on the device, meaning that numerous laser beams have to be properly arranged and routed to their designated areas. In practice, delivering so many separate beams of light within a limited space becomes a real difficulty.

To address this, researchers from The University of Osaka investigated unique ways to deliver light in a limited space. Their work yielded a power-efficient nanophotonic circuit that uses optical fibers attached to waveguides to deliver six different laser beams to their destinations. The findings have been published in APL Quantum.

An example configuration of the proposed laser delivery photonic circuit chip. (Image: Reproduced from DOI:10.1063/5.0300216, CC BY)

Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

That sense of certainty and predictability has now gone, not because innovation has stopped but because the physical assumptions that once underpinned it no longer hold.

So what replaces the old model of automatic speed increases? The answer is not a single breakthrough, but several overlapping strategies.

One involves new materials and transistor designs. Engineers are refining how transistors are built to reduce wasted energy and unwanted electrical leakage. These changes deliver smaller, more incremental improvements than in the past, but they help keep power use under control.

Superconducting nanowire memory array achieves significantly lower error rate

Quantum computers, systems that process information by leveraging quantum mechanical effects, will require faster and more energy-efficient memory components to perform well on complex tasks. Superconducting memories are promising memory devices made from superconductors, materials that conduct electricity with zero resistance when cooled below a critical temperature.

These memory devices could be faster and consume significantly less energy than conventional memory technologies. Despite their potential, most existing superconducting memories are prone to errors and are difficult to scale up into larger systems containing many memory cells.

Researchers at the Massachusetts Institute of Technology (MIT) recently developed a new scalable superconducting memory based on nanowires, one-dimensional (1D) nanostructures with unique optoelectronic properties. The memory, presented in a paper in Nature Electronics, was found to be less prone to errors than many earlier superconducting nanowire-based memories.
