
The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
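To make the role of data concrete, here is one simplified, textbook-style statement of the classical result (a sketch of the general idea, not the exact formulation used in the Physical Review Letters paper): if the target function is drawn uniformly at random from all maps between a finite input set X and a finite label set Y, then any learner trained on t distinct examples has average risk

\[ \mathbb{E}_f[R] \;\ge\; \left(1 - \frac{t}{|X|}\right)\left(1 - \frac{1}{|Y|}\right), \]

which only approaches zero as t approaches |X|. In the quantum setting the analogue of |X| grows exponentially with the number of qubits, and the new theorem shows that sufficiently entangled training states can offset that exponential growth.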

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

Interviewee: DeepMind co-founder and CEO, Demis Hassabis.

Credits.
Presenter: Hannah Fry.
Series Producer: Dan Hardoon.
Production support: Jill Achineku.
Sound design: Emma Barnaby.
Music composition: Eleni Shaw.
Sound Engineer: Nigel Appleton.
Editor: David Prest.
Commissioned by DeepMind.

Thank you to everyone who made this season possible!


Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation, namely random walk computations. These are computations that involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well-suited for implementing these computations and could thus reach beyond machine learning applications.

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”
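For readers unfamiliar with the term, the short sketch below shows what a random walk computation looks like in ordinary code. It is a toy Monte Carlo estimate, purely illustrative and unrelated to Sandia’s neuromorphic implementation: a one-dimensional walker takes unit steps up or down until it hits a boundary, and many independent walks are averaged to estimate a hitting probability.

```python
import numpy as np

# Toy random walk computation (illustrative only, not Sandia's neuromorphic method):
# estimate the probability that a 1-D walker starting at 0 reaches +10 before -10.
rng = np.random.default_rng(seed=0)

def walk_hits_upper(upper=10, lower=-10):
    position = 0
    while lower < position < upper:
        position += rng.choice((-1, 1))   # one unbiased random step
    return position == upper

n_walks = 2_000
estimate = sum(walk_hits_upper() for _ in range(n_walks)) / n_walks
print(f"P(reach +10 before -10) ≈ {estimate:.3f}")   # symmetric walk, so ≈ 0.5
```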

Now, an algorithm developed by Brown University bioengineers could be an important step toward such adaptive deep brain stimulation (DBS). The algorithm removes a key hurdle that makes it difficult for DBS systems to sense brain signals while simultaneously delivering stimulation.

“We know that there are signals in the brain associated with disease states, and we’d like to be able to record those signals and use them to adjust neuromodulation therapy automatically,” said David Borton, an assistant professor of biomedical engineering at Brown and corresponding author of a study describing the algorithm. “The problem is that stimulation creates electrical artifacts that corrupt the signals we’re trying to record. So we’ve developed a means of identifying and removing those artifacts, so all that’s left is the signal of interest from the brain.”
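To give a flavor of the signal-processing problem, the sketch below shows one common, generic idea — periodic template subtraction — and is not the Brown team’s published algorithm. It assumes the stimulation pulse times are known and that the artifact repeats with a similar shape from pulse to pulse, so an averaged artifact template can be estimated and subtracted from the recording.

```python
import numpy as np

# Generic stimulation-artifact removal by template subtraction (illustration only).
# Assumes: `signal` is a 1-D NumPy array, `pulse_starts` lists the sample index of
# each stimulation pulse, and each artifact lasts about `pulse_len` samples.
def subtract_stim_artifact(signal, pulse_starts, pulse_len):
    cleaned = signal.astype(float).copy()
    # Average the segment that follows each pulse to estimate the artifact shape
    segments = np.array([signal[s:s + pulse_len] for s in pulse_starts
                         if s + pulse_len <= len(signal)])
    template = segments.mean(axis=0)
    # Subtract the estimated artifact at every pulse, leaving the underlying brain signal
    for s in pulse_starts:
        if s + pulse_len <= len(cleaned):
            cleaned[s:s + pulse_len] -= template
    return cleaned
```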

From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.

“Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI—and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”

To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items—chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
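As a rough illustration of how predictions and confidence scores might be fused — a minimal, naive Bayes–style sketch, not the specific model developed in the PNAS study — each predictor’s confidence can be expressed as a probability distribution over the classes, and the two distributions multiplied and renormalized, so a confident predictor pulls the combined answer more strongly than an uncertain one.

```python
import numpy as np

# Combine a human's and a machine's class probabilities by multiplying and renormalizing
# (illustrative only; assumes the two predictors err independently given the true class).
def combine_predictions(p_human, p_machine):
    joint = np.asarray(p_human) * np.asarray(p_machine)
    return joint / joint.sum()

# Example with 3 classes: the human leans toward class 0 with medium confidence,
# the machine slightly prefers class 1; the combination keeps both opinions in play.
p_human = [0.6, 0.3, 0.1]       # e.g. a "medium confidence" rating turned into probabilities
p_machine = [0.35, 0.45, 0.2]   # the classifier's continuous scores, normalized
print(combine_predictions(p_human, p_machine))   # ≈ [0.58, 0.37, 0.05]
```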

Amazon and Virginia Tech today announced the establishment of the Amazon–Virginia Tech Initiative for Efficient and Robust Machine Learning.

The initiative will provide an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships, and it will support research efforts led by Virginia Tech faculty members. Under the initiative, Virginia Tech will host an annual public research symposium to share knowledge with the machine learning and related research communities. And in collaboration with Amazon, Virginia Tech will co-host two annual workshops, and training and recruiting events for Virginia Tech students.

“This initiative’s emphasis will be on efficient and robust machine learning, such as ensuring algorithms and models are resistant to errors and adversaries,” said Naren Ramakrishnan, the director of the Sanghani Center and the Thomas L. Phillips Professor of Engineering. “We’re pleased to continue our work with Amazon and expand machine learning research capabilities that could address worldwide industry-focused problems.”

Both quantum computing and machine learning have been touted as the next big computer revolution for a fair while now.

However, experts have pointed out that these techniques aren’t generalized tools – they will only deliver a great leap in computing power for very specialized algorithms, and even more rarely will the two be able to work together on the same problem.

One such example of where they might work together is modeling the answer to one of the thorniest problems in physics: How does General Relativity relate to the Standard Model?

In a paper published on February 23, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive, robust, and high-resolution.

The thumb-shaped sensor is made of a soft shell built around a lightweight stiff skeleton. This skeleton holds up the structure much like bones stabilize the soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque grayish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera, which records colorful images, illuminated by a ring of LEDs.

When any objects touch the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds a deep neural network with this data. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out where exactly the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what scientists call a force map: It provides a force vector for every point in the three-dimensional fingertip.
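As an illustration of the kind of pipeline described above — camera frames in, a dense force map out — the sketch below defines a small fully convolutional network in PyTorch. The layer sizes, names, and input shape are invented for the example and are not taken from the Nature Machine Intelligence paper.

```python
import torch
import torch.nn as nn

# Toy fully convolutional model: maps an RGB camera frame from inside the sensor
# to a per-pixel force vector (Fx, Fy, Fz), i.e. a "force map". Purely illustrative.
class ForceMapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=1),   # 3 output channels = force vector per pixel
        )

    def forward(self, image):
        # image: (batch, 3, H, W) frame from the internal fish-eye camera
        return self.net(image)                 # (batch, 3, H, W) force map

model = ForceMapNet()
frame = torch.rand(1, 3, 64, 64)               # stand-in for one camera frame
force_map = model(frame)
print(force_map.shape)                         # torch.Size([1, 3, 64, 64])
```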