
The AI system is dubbed a “quantum-tunneling deep neural network” and combines neural networks with quantum tunneling. A deep neural network is a machine learning model inspired by the structure and function of the brain, with multiple layers of nodes between the input and output. It can model complex non-linear relationships and, unlike a conventional shallow neural network (which has a single hidden layer between input and output), it contains many hidden layers.
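For readers unfamiliar with the architecture, the sketch below builds a tiny deep network in plain Python/NumPy: several hidden layers of weighted sums followed by a non-linear activation. It is a generic illustration, not the researchers’ actual model, and every layer size is an arbitrary choice.

```python
import numpy as np

def relu(x):
    # Non-linear activation; without it, stacked layers collapse into one linear map
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weight, bias) layers."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:      # no activation on the output layer
            x = relu(x)
    return x

rng = np.random.default_rng(0)
sizes = [8, 32, 32, 32, 4]           # input, three hidden layers, output
layers = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, 8))          # one example with 8 input features
print(forward(x, layers).shape)      # -> (1, 4)
```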

Quantum tunneling, meanwhile, occurs when a subatomic particle, such as an electron or photon (a particle of light), effectively passes through a barrier that should be impenetrable. Because such a particle also behaves as a wave — when it is not directly observed, it is not confined to any fixed location — it has a small but finite probability of being found on the other side of the barrier. When enough particles are present, some of them will “tunnel” through.
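To get a feel for the numbers, here is a rough back-of-the-envelope estimate of the tunneling probability for an electron hitting a rectangular energy barrier, using the standard exponential-decay approximation. The energy and barrier width are illustrative values, not figures taken from the study.

```python
import numpy as np

# Rough transmission probability for an electron tunneling through a
# rectangular barrier, using the standard T ~ exp(-2*kappa*L) approximation.
hbar = 1.054571817e-34      # J*s
m_e  = 9.1093837015e-31     # kg, electron mass
eV   = 1.602176634e-19      # J per electron volt

E  = 1.0 * eV               # particle energy (illustrative)
V0 = 2.0 * eV               # barrier height (V0 > E: classically forbidden)
L  = 1.0e-9                 # barrier width: 1 nanometre (illustrative)

kappa = np.sqrt(2 * m_e * (V0 - E)) / hbar   # decay constant inside the barrier
T = np.exp(-2 * kappa * L)                   # small but nonzero probability
print(f"Transmission probability ~ {T:.2e}")
```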

After the data representing the optical illusion passes through the quantum tunneling stage, the slightly altered image is processed by a deep neural network.

“If you constantly use an AI to find the music, career or political candidate you like, you might eventually forget how to do this yourself.” Ethicist Muriel Leuenberger considers the personal impact of relying on AI.

A new proof shows that an upgraded version of the 70-year-old Dijkstra’s algorithm reigns supreme: It finds the most efficient pathways through any graph.

In an interview toward the end of his life, Dijkstra credited his algorithm’s enduring appeal in part to its unusual origin story. “Without pencil and paper you are almost forced to avoid all avoidable complexities,” he said.

Dijkstra’s algorithm doesn’t just tell you the fastest route to one destination. Instead, it gives you an ordered list of travel times from your current location to every other point that you might want to visit — a solution to what researchers call the single-source shortest-paths problem. The algorithm works in an abstracted road map called a graph: a network of interconnected points (called vertices) in which the links between vertices are labeled with numbers (called weights). These weights might represent the time required to traverse each road in a network, and they can change depending on traffic patterns. The larger a weight, the longer it takes to traverse that path.
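To make the single-source shortest-paths idea concrete, here is the classic textbook version of Dijkstra’s algorithm in Python; the upgraded variant analyzed in the new proof adds further machinery on top of this. The toy road map and its weights are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative weights.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of shortest travel times from `source` to every reachable vertex.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (distance so far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road map: weights are travel times in minutes (made-up numbers).
roads = {
    "home":   [("store", 4), ("school", 7)],
    "store":  [("school", 1), ("office", 5)],
    "school": [("office", 3)],
    "office": [],
}
print(dijkstra(roads, "home"))   # {'home': 0, 'store': 4, 'school': 5, 'office': 8}
```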

PRESS RELEASE — After more than a year of evaluation, NIST has selected 14 candidates for the second round of the Additional Digital Signatures for the NIST PQC Standardization Process. The advancing digital signature algorithms are:

NIST Internal Report (IR) 8528 describes the evaluation criteria and selection process. Questions may be directed to [email protected]. NIST thanks all of the candidate submission teams for their efforts in this standardization process as well as the cryptographic community at large, which helped analyze the signature schemes.

Moving forward, the second-round candidates have the option of submitting updated specifications and implementations (i.e., “tweaks”). NIST will provide more details to the submission teams in a separate message. This second phase of evaluation and review is estimated to last 12–18 months.

In 2022, a nuclear-fusion experiment yielded more energy than was delivered by the lasers that ignited the fusion reaction (see Viewpoint: Nuclear-Fusion Reaction Beats Breakeven). That demonstration was an example of indirect-drive inertial-confinement fusion, in which lasers collapse a fuel pellet by heating a gold can that surrounds it. This approach is less efficient than heating the pellet directly since the pellet absorbs less of the lasers’ energy. Nevertheless, it has been favored by researchers at the largest laser facilities because it is less sensitive to nonuniform laser illumination. Now Duncan Barlow at the University of Bordeaux, France, and his colleagues have devised an efficient way to improve illumination uniformity in direct-drive inertial-confinement fusion [1]. This advance helps overcome a remaining barrier to high-yield direct-drive fusion using existing facilities.

Triggering self-sustaining fusion by inertial confinement requires pressures and temperatures that are achievable only if the fuel pellet implodes with high uniformity. Such uniformity can be prevented by heterogeneities in the laser illumination and in the way the beams interact with the resulting plasma. Usually, researchers identify the laser configuration that minimizes these heterogeneities by iterating radiation-hydrodynamics simulations that are computationally expensive and labor intensive. Barlow and his colleagues developed an automatic, algorithmic approach that bypasses the need for such iterative simulations by approximating some of the beam–plasma interactions.

Compared with an experiment using a spherical, plastic target at the National Ignition Facility in California, the team’s optimization method should deliver an implosion that reaches 2 times the density and 3 times the pressure. But the approach can also be applied to other pellet geometries and at other facilities.

In a world powered by artificial intelligence applications, data is king, but it’s also the crown’s biggest burden.


As described in the article, quantum memory stores data in ways that classical memory systems cannot match. In quantum systems, information is stored in quantum states, using the principles of superposition and entanglement to represent data more efficiently. This ability allows quantum systems to process and store vastly more information, potentially impacting data-heavy industries like AI.
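A minimal sketch of why quantum states are so information-dense: writing down the state of n qubits classically takes 2^n complex amplitudes. The states below are standard textbook examples (a single-qubit superposition and a two-qubit Bell state), not the specific encodings discussed in the article.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Superposition of a single qubit: (|0> + |1>) / sqrt(2)
plus = (ket0 + ket1) / np.sqrt(2)

# Entangled two-qubit Bell state: (|00> + |11>) / sqrt(2)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(bell)                      # 4 amplitudes describe 2 qubits

# The classical description grows exponentially with the number of qubits.
for n in (10, 20, 30):
    print(f"{n} qubits -> {2**n:,} complex amplitudes to write down")
```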

In a 2021 study from the California Institute of Technology, researchers showed that quantum memory could dramatically reduce the number of steps needed to model complex systems. They proved that quantum algorithms equipped with quantum memory can require exponentially fewer steps, cutting down on both time and energy. However, this early work required vast amounts of quantum memory—an obstacle that could have limited its practical application.

Now, two independent teams have derived additional insights, demonstrating how these exponential advantages can be achieved with much less quantum memory. Sitan Chen from Harvard University, along with his team, found that just two quantum copies of a system were enough to provide the same computational efficiency previously thought to require many more.

Predicting the behavior of many interacting quantum particles is a complex task, but it’s essential for unlocking the potential of quantum computing in real-world applications. A team of researchers, led by EPFL, has developed a new method to compare quantum algorithms and identify the most challenging quantum problems to solve.

Quantum systems, from subatomic particles to complex molecules, hold the key to understanding the workings of the universe. However, modeling these systems quickly becomes overwhelming due to their immense complexity. It’s like trying to predict the behavior of a massive crowd where everyone constantly influences everyone else. When you replace the crowd with quantum particles, you encounter what’s known as the “quantum many-body problem.”

Quantum many-body problems involve predicting the behavior of numerous interacting quantum particles. Solving these problems could lead to major breakthroughs in fields like chemistry and materials science, and even accelerate the development of technologies like quantum computers.
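To see why the problem blows up so quickly, the sketch below builds the Hamiltonian of a small chain of interacting spins by brute force: with N spin-1/2 particles the matrix is 2^N by 2^N, which is why exact approaches stall after a few dozen particles. The transverse-field Ising chain used here is a generic example, not one of the specific systems studied by the EPFL-led team.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def op_on_site(op, site, n):
    """Embed a single-site operator `op` at position `site` in an n-spin chain."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(n, h=1.0):
    # Nearest-neighbour couplings plus a transverse field on every spin
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
    for i in range(n):
        H -= h * op_on_site(sx, i, n)
    return H

H = ising_hamiltonian(8)                         # already a 256 x 256 matrix
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```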

In 2005, the futurist Ray Kurzweil predicted that by 2045, machines would become smarter than humans. He called this inflection point the “singularity,” and it struck a chord. Kurzweil, who’s been tracking artificial intelligence since 1963, gained a fanatical following, especially in Silicon Valley.

Now comes The Singularity Is Nearer: When We Merge with AI, in which Kurzweil steps up the Singularity’s arrival timeline to 2029. “Algorithmic innovations and the emergence of big data have allowed AI to achieve startling breakthroughs sooner than expected,” reports Kurzweil. From winning at games like Jeopardy! and Go to driving automobiles, writing essays, passing bar exams, and diagnosing cancer, chunks of the Singularity are arriving daily, and there’s more good news just ahead.

Very soon, predicts Kurzweil, artificial general intelligence will be able to do anything a human can do, only better. Expect 3D printed clothing and houses by the end of this decade. Look for medical cures that will “add decades to human life spans” just ahead. “These are the most exciting and momentous years in all of history,” Kurzweil noted in an interview with Boston Globe science writer Brian Bergstein.

A team of Chinese researchers, led by Wang Chao from Shanghai University, has demonstrated that D-Wave’s quantum annealing computers can crack encryption methods that safeguard sensitive global data.

This breakthrough, published in the Chinese Journal of Computers, suggests that quantum machines are closer than expected to threatening widely used cryptographic systems, including RSA and the Advanced Encryption Standard (AES).

The research team’s experiments focused on leveraging D-Wave’s quantum technology to solve cryptographic problems. In their paper, titled “Quantum Annealing Public Key Cryptographic Attack Algorithm Based on D-Wave Advantage,” the researchers explained how quantum annealing could transform cryptographic attacks into combinatorial optimization problems, making them more manageable for quantum systems.
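As a toy illustration of the general idea (not the researchers’ actual algorithm), the snippet below casts the factoring of a tiny semiprime as a combinatorial optimization over binary variables, the kind of low-energy search a quantum annealer performs; here the search space is simply enumerated classically. The modulus, bit widths, and cost function are all illustrative choices.

```python
from itertools import product

N = 143                       # 11 * 13, trivially small compared with real RSA moduli
n_bits = 4                    # bits per candidate odd factor (assumed for this toy)

def factor_from_bits(bits):
    # Encode an odd factor as 2*k + 1 from its bit string
    k = sum(b << i for i, b in enumerate(bits))
    return 2 * k + 1

best = None
for p_bits, q_bits in product(product((0, 1), repeat=n_bits), repeat=2):
    p, q = factor_from_bits(p_bits), factor_from_bits(q_bits)
    cost = (N - p * q) ** 2          # "energy": zero exactly when p * q == N
    if best is None or cost < best[0]:
        best = (cost, p, q)

print("lowest-energy assignment:", best)   # (0, 13, 11) or (0, 11, 13)
```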

How do we assess quantum advantage when exact classical solutions are not available?

A quantum advantage is a demonstration that, for a given problem, a quantum computer can provide a measurable improvement over any classical method and classical resources in terms of accuracy, runtime…


Today, algorithms designed to solve this problem mostly rely on what are called variational methods: algorithms guaranteed to output an energy for a target system that cannot be lower than the exact solution — or the deepest valley — up to statistical uncertainties. An ideal quality metric for the ground-state problem would allow the user not only to benchmark different methods against the same problem, but also to compare different target problems when tackled by the same method.
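The guarantee mentioned above is the variational principle: the energy of any trial state is an upper bound on the true ground-state energy. The sketch below demonstrates it on a two-spin toy Hamiltonian with a one-parameter trial state; both choices are illustrative, not a specific published method.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

# A tiny transverse-field Ising pair: H = -sz(x)sz - sx(x)I - I(x)sx
H = -np.kron(sz, sz) - np.kron(sx, np.eye(2)) - np.kron(np.eye(2), sx)

exact = np.linalg.eigvalsh(H)[0]          # "the deepest valley"

def trial_energy(theta):
    # Product trial state: each spin rotated by the same angle theta
    single = np.array([np.cos(theta), np.sin(theta)])
    psi = np.kron(single, single)         # already normalized
    return psi @ H @ psi                  # <psi|H|psi>

thetas = np.linspace(0, np.pi / 2, 200)
best = min(trial_energy(t) for t in thetas)
print(f"variational estimate {best:.4f} >= exact ground state {exact:.4f}")
```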

So, how can such an absolute metric be defined? And what would be the consequences of finding this absolute accuracy metric?