Cross-institutional collaboration leads to new control over quantum dot qubits

Qubits are the building blocks of quantum computers, which have the potential to revolutionize many fields of research by solving problems that classical computers can’t.

But creating qubits of the quality necessary for quantum computing can be challenging.

Researchers at the University of Wisconsin–Madison, HRL Laboratories LLC, and University of New South Wales (UNSW) collaborated on a project to better control silicon quantum dot qubits, allowing for higher-quality fabrication and use in wider applications. All three institutions are affiliated with the Chicago Quantum Exchange. The work was published in Physical Review Letters, and the lead author, J. P. Dodson, has recently transitioned from UW–Madison to HRL.

‘Naturally insulating’ material emits pulses of superfluorescent light at room temperature

Researchers looking to synthesize a brighter and more stable nanoparticle for optical applications found that their creation instead exhibited a more surprising property: bursts of superfluorescence that occurred both at room temperature and at regular intervals. The work could lead to the development of faster microchips, neurosensors, or materials for use in quantum computing applications, as well as a number of biological studies.

Superfluorescence occurs when atoms within a material synchronize and simultaneously emit a short but intense burst of light. The property is valuable for quantum optical applications, but extremely difficult to achieve at room temperature and for intervals long enough to be useful.

The material in question, a lanthanide-doped upconversion nanoparticle, or UCNP, was synthesized by the research team in an effort to create a "brighter" optical material. They produced hexagonal ceramic crystals ranging from 50 nanometers (nm) to 500 nm in size and began testing their lasing properties, which resulted in several impressive breakthroughs.

Quantum AI Breakthrough: New Theorem Shrinks Need for Training Data

Aug. 24, 2022 — Training a quantum neural network requires only a small amount of data, according to a new proof that upends previous assumptions stemming from classical computing’s huge appetite for data in machine learning, or artificial intelligence. The theorem has several direct applications, including more efficient compiling for quantum computers and distinguishing phases of matter for materials discovery.

“Many people believe that quantum machine learning will require a lot of data. We have rigorously shown that for many relevant problems, this is not the case,” said Lukasz Cincio, a quantum theorist at Los Alamos National Laboratory and co-author of the paper containing the proof published in the journal Nature Communications. “This provides new hope for quantum machine learning. We’re closing the gap between what we have today and what’s needed for quantum advantage, when quantum computers outperform classical computers.”

“The need for large data sets could have been a roadblock to quantum AI, but our work removes this roadblock. While other issues for quantum AI could still exist, at least now we know that the size of the data set is not an issue,” said Patrick Coles, a quantum theorist at the Laboratory and co-author of the paper.

Entangled photons tailor-made

This will be very useful in advancing the fields of quantum computing and communication.

Researchers at the Max Planck Institute of Quantum Optics set a new record by entangling 14 photons, the largest number achieved to date, an institutional press release said.

Quantum entanglement, famously described by Albert Einstein as "spooky action at a distance," is a phenomenon in which particles become intertwined in such a way that they cease to exist individually: changing a specific property of one results in an instant change in its partner, even if it is far away.
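The perfectly correlated outcomes described here can be illustrated with a short NumPy sketch (an illustration only, not part of the reported work) using the simplest entangled state, the two-qubit Bell state (|00⟩ + |11⟩)/√2:

```python
import numpy as np

# Two-qubit Bell state |Phi+> = (|00> + |11>) / sqrt(2),
# written as a state vector over the basis |00>, |01>, |10>, |11>.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)

# Born rule: measurement probabilities are the squared amplitudes.
# Only |00> and |11> ever occur, so the two qubits' outcomes are
# perfectly correlated -- observing one fixes the other.
probs = bell ** 2
```

Sampling from `probs` only ever returns the all-zeros or all-ones outcome, which is the "instant change in its partner" correlation in its simplest form (the correlations cannot, however, be used to signal faster than light).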


Physicists at the Max Planck Institute of Quantum Optics have managed to entangle more than a dozen photons efficiently and in a defined way. They are thus creating a basis for a new type of quantum computer.

Research

In order to use a quantum computer effectively, a large number of specially prepared – in technical terms: entangled – basic building blocks are needed to carry out computational operations. A team of physicists at the Max Planck Institute of Quantum Optics in Garching has now demonstrated this task for the very first time with photons emitted by a single atom. Using a novel technique, the researchers generated up to 14 entangled photons in an optical resonator and prepared them in specific quantum physical states in a targeted and very efficient manner. The new method could facilitate the construction of powerful and robust quantum computers, and enable the secure transmission of data in the future.
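As an illustration only (the article does not give the state explicitly), multi-photon experiments of this kind typically target an N-photon GHZ state, (|00…0⟩ + |11…1⟩)/√2. A minimal NumPy sketch of that state for N = 14:

```python
import numpy as np

N = 14                    # number of photons, matching the experiment
ghz = np.zeros(2 ** N)    # state vector over all 2^14 basis states
ghz[0] = ghz[-1] = 1 / np.sqrt(2)  # amplitudes on |00...0> and |11...1>

# The state is normalized, and a computational-basis measurement
# yields either all zeros or all ones, each with probability 1/2:
# all 14 photons are found in the same state, every time.
probs = ghz ** 2
```

Note that the state vector already needs 2^14 = 16,384 amplitudes, a small taste of why classically simulating many-qubit systems becomes intractable.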

How does classical, Newtonian inertia emerge from quantum mechanics?

From my understanding, inertia is typically taken as an axiom rather than something that can be explained by some deeper phenomenon. However, it’s also my understanding that quantum mechanics must reduce to classical, Newtonian mechanics in the macroscopic limit.

By inertia, I mean the resistance to changes in velocity — the fact that more massive objects (or particles, let's say) accelerate more slowly under the same force.

What is the quantum mechanical mechanism that, in its limit, leads to Newtonian inertia? Is there some concept of axiomatic inertia that applies to the quantum mechanical equations and explains Newtonian inertia, even if it remains a fundamental assumption of quantum theory?
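For reference, the closest thing I've found to such a bridge is Ehrenfest's theorem, which says that expectation values obey Newton-like equations:

$$\frac{d\langle \hat{x} \rangle}{dt} = \frac{\langle \hat{p} \rangle}{m}, \qquad \frac{d\langle \hat{p} \rangle}{dt} = -\left\langle \frac{\partial V}{\partial x} \right\rangle$$

When the wave packet is narrow compared to the scale on which $V(x)$ varies, $\langle \partial V / \partial x \rangle \approx V'(\langle \hat{x} \rangle)$, so $\langle \hat{x} \rangle$ approximately follows the classical trajectory $m\ddot{x} = -V'(x)$, with the same mass $m$ playing the inertial role. But this seems to take $m$ (and hence inertia) as an input to the Schrödinger equation rather than explain it, which is really what my question is about.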
