BLOG

Archive for the ‘information science’ category: Page 112

Dec 10, 2021

DeepMind Says Its New AI Has Almost the Reading Comprehension of a High Schooler

Posted in categories: information science, robotics/AI

Alphabet’s AI research company DeepMind has released the next generation of its language model, and it says that it has close to the reading comprehension of a high schooler — a startling claim.

The company says the language model, called Gopher, significantly improved its reading comprehension by ingesting massive repositories of text online.

DeepMind boasts that its algorithm, an “ultra-large language model,” has 280 billion parameters, which are a measure of size and complexity. That means it falls somewhere between OpenAI’s GPT-3 (175 billion parameters) and Microsoft and NVIDIA’s Megatron, which features 530 billion parameters, The Verge points out.

Dec 10, 2021

Crucial leap in error mitigation for quantum computers

Posted in categories: computing, information science, quantum physics

Researchers at Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed (AQT) demonstrated that an experimental method known as randomized compiling (RC) can dramatically reduce error rates in quantum algorithms and lead to more accurate and stable quantum computations. RC is no longer just a theoretical concept for quantum computing: the multidisciplinary team’s experimental results are published in Physical Review X.

The experiments at AQT were performed on a four-qubit superconducting quantum processor. The researchers demonstrated that RC can suppress one of the most severe types of errors in quantum computers: coherent errors.

Akel Hashim, an AQT researcher involved in the experimental breakthrough and a graduate student at the University of California, Berkeley, explained: “We can perform quantum computations in this era of noisy intermediate-scale quantum (NISQ) computing, but these are very noisy, prone to errors from many different sources, and don’t last very long due to the decoherence—that is, information loss—of our qubits.”
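
The core idea behind randomized compiling is Pauli twirling: by randomizing gates over the Pauli group, coherent errors are averaged into stochastic Pauli noise, which is far easier to characterize and mitigate. The snippet below is a minimal single-qubit sketch of that twirling mechanism, not the AQT team’s implementation; the over-rotation angle and the plain-NumPy channel simulation are illustrative assumptions.

```python
import numpy as np

# Single-qubit Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

theta = 0.1  # illustrative coherent over-rotation angle (assumption)
U_err = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X  # coherent X over-rotation

def apply(U, rho):
    """Apply the unitary error channel rho -> U rho U^dagger."""
    return U @ rho @ U.conj().T

def pauli_twirl(U, rho):
    """Average the error channel over conjugation by the Pauli group,
    the randomization mechanism that randomized compiling exploits."""
    out = np.zeros_like(rho)
    for P in paulis:
        out += P.conj().T @ apply(U, P @ rho @ P.conj().T) @ P
    return out / len(paulis)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|

coherent = apply(U_err, rho)
twirled = pauli_twirl(U_err, rho)

# After twirling, the coherent over-rotation acts as a stochastic bit flip
# with probability sin^2(theta / 2).
p = np.sin(theta / 2) ** 2
stochastic = (1 - p) * rho + p * X @ rho @ X
print(np.allclose(twirled, stochastic))  # True: coherent error -> stochastic Pauli error
```

In this toy example the twirled channel is exactly a bit-flip channel, illustrating how randomization converts worst-case coherent errors into the kind of stochastic noise that error-mitigation techniques handle well.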

Dec 9, 2021

DeepMind’s new language model kicks GPT-3’s butt

Posted in categories: information science, robotics/AI

Bigger isn’t always better. DeepMind’s Gopher system uses smarter algorithms to make better choices. And it blows GPT-3 away.

Dec 8, 2021

Studying Quantum Walks on Near-Term Quantum Computers

Posted in categories: computing, information science, quantum physics

By Stina Andersson and Ellinor Wanzambi

Researchers have been working on quantum algorithms since physicists first proposed, decades ago, using principles of quantum physics to simulate nature. One important component in many quantum algorithms is the quantum walk, the quantum equivalent of the classical Markov chain, i.e., a random walk without memory. Quantum walks are used in algorithms in areas such as searching, node ranking in networks, and element distinctness.

Consider the graph in Figure 1 and imagine that we want to move randomly between nodes A, B, C, and D. We can only move between nodes that are connected by an edge, and each edge has an associated probability that determines how likely we are to move to the connected node. This is a random walk. In this article, we work only with Markov chains, also called memoryless random walks, meaning that the probabilities are independent of the previous steps. For example, the probabilities of where we move next from node A are the same whether we arrived there from node B or from node D.
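
As a concrete illustration, here is a minimal sketch of such a memoryless random walk in Python. The exact edge probabilities of Figure 1 are not reproduced here, so the transition matrix below assumes a cycle A-B-C-D with equal probability on each edge.

```python
import numpy as np

nodes = ["A", "B", "C", "D"]

# transition[i][j] = probability of stepping from node i to node j.
# Each row sums to 1; zeros mark node pairs with no connecting edge.
transition = np.array([
    [0.0, 0.5, 0.0, 0.5],   # from A
    [0.5, 0.0, 0.5, 0.0],   # from B
    [0.0, 0.5, 0.0, 0.5],   # from C
    [0.5, 0.0, 0.5, 0.0],   # from D
])

rng = np.random.default_rng(seed=0)

def random_walk(start, steps):
    """Take `steps` random steps; each step depends only on the current node."""
    current = nodes.index(start)
    path = [start]
    for _ in range(steps):
        current = rng.choice(len(nodes), p=transition[current])
        path.append(nodes[current])
    return path

print(random_walk("A", steps=10))
```

Because the next step is drawn only from the current node’s row of the transition matrix, the walk satisfies the Markov (memoryless) property described above.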

Dec 8, 2021

Algorithm to increase the efficiency of quantum computers

Posted in categories: information science, quantum physics, supercomputing

Quantum computers have the potential to solve important problems that are beyond reach even for the most powerful supercomputers, but they require an entirely new way of programming and creating algorithms.

Universities and major tech companies are spearheading research on how to develop these new algorithms. In a recent collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich, a team of researchers has developed a new method to speed up calculations on quantum computers. The results are published in PRX Quantum, a journal of the American Physical Society.

“Unlike classical computers, which use bits to store ones and zeros, information is stored in the qubits of a quantum processor in the form of a quantum state, or a wavefunction,” says postdoctoral researcher Guillermo García-Pérez from the Department of Physics at the University of Helsinki, first author of the paper.
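
To make the contrast concrete, here is a minimal sketch (not from the paper) of how a single-qubit state is represented: a vector of complex amplitudes rather than a definite 0 or 1, with measurement outcomes determined by the squared magnitudes of those amplitudes. The example amplitudes are arbitrary.

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, stored as a complex amplitude vector
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # illustrative amplitudes
state = np.array([alpha, beta])

# A valid quantum state is normalized: |alpha|^2 + |beta|^2 = 1
assert np.isclose(np.vdot(state, state).real, 1.0)

# Measuring in the computational basis yields 0 or 1 with these probabilities
probabilities = np.abs(state) ** 2
print(probabilities)   # [0.5, 0.5]
```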

Dec 8, 2021

Physical features boost the efficiency of quantum simulations

Posted in categories: computing, information science, quantum physics

Recent theoretical breakthroughs have settled two long-standing questions about the viability of simulating quantum systems on future quantum computers, overcoming challenges from complexity analyses to enable more advanced algorithms. Featured in two publications, the work by a quantum team at Los Alamos National Laboratory shows that physical properties of quantum systems allow for faster simulation techniques.

“Algorithms based on this work will be needed for the first full-scale demonstration of quantum simulations on quantum computers,” said Rolando Somma, a quantum theorist at Los Alamos and coauthor on the two papers.

Dec 8, 2021

Consciousness & Time | Part III of Consciousness: Evolution of the Mind (2021) Documentary

Posted in categories: computing, education, information science, neuroscience, quantum physics, singularity

Most physicists and philosophers now agree that time is emergent, while Digital Presentism holds that time emerges from complex qualia computing at the level of observer experiential reality. Time emerges from experiential data; it is an epiphenomenon of consciousness. From moment to moment, you are co-writing your own story, co-producing your own “participatory reality” — your stream of consciousness is not subject to some kind of deterministic “script.” You are entitled to degrees of freedom. If we are to create high-fidelity first-person simulated realities that may also be part of an intersubjectivity-based Metaverse, then the D-Theory of Time gives us a clear-cut guiding principle for doing just that.

Here’s Consciousness: Evolution of the Mind (2021) documentary, Part III: CONSCIOUSNESS & TIME #consciousness #evolution #mind #time #DTheoryofTime #DigitalPresentism #CyberneticTheoryofMind

Continue reading “Consciousness & Time | Part III of Consciousness: Evolution of the Mind (2021) Documentary” »

Dec 8, 2021

How AI Could Help Screen for Autism in Children

Posted in categories: information science, robotics/AI

Summary: A new machine-learning algorithm could help practitioners identify autism in children more effectively.

Source: USC

For children with autism spectrum disorder (ASD), receiving an early diagnosis can make a huge difference in improving behavior, skills and language development. But despite being one of the most common developmental disabilities, impacting 1 in 54 children in the U.S., it’s not that easy to diagnose.

Dec 8, 2021

Player of Games

Posted in categories: entertainment, information science, robotics/AI

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning.

Continue reading “Player of Games” »

Dec 8, 2021

UC Berkeley’s Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions

Posted in categories: information science, robotics/AI

The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.

Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.

In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.