BLOG

Archive for the ‘information science’ category: Page 102

Mar 15, 2022

When It Comes to AI, Can We Ditch the Datasets?

Posted in categories: information science, robotics/AI

Summary: A machine-learning model trained on synthetic data for image classification can rival one trained on traditional datasets.

Source: MIT

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model’s performance.
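The core idea is simple: if you can generate labeled samples from a model of the domain, you never have to collect them. As a toy illustration only (this is not MIT's method, and every distribution and parameter below is assumed), a classifier trained purely on synthetic samples can do well on fresh data drawn from the same process:

```python
import random

random.seed(0)

def synthetic_image(cls, n_pixels=16):
    """Draw a toy 'image' as a pixel vector from a class-dependent distribution."""
    base = 0.3 if cls == 0 else 0.7  # assumed class-specific mean intensity
    return [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(n_pixels)]

# A purely synthetic training set -- no hand-labeled photos needed.
train = [(synthetic_image(c), c) for c in (0, 1) for _ in range(200)]

def centroid(samples):
    """Mean pixel vector of a list of samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

# Nearest-centroid classifier fit on the synthetic data.
cent = {c: centroid([x for x, y in train if y == c]) for c in (0, 1)}

def classify(x):
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cent, key=lambda c: sqdist(x, cent[c]))

# Evaluate on a fresh synthetic test set.
test = [(synthetic_image(c), c) for c in (0, 1) for _ in range(100)]
accuracy = sum(classify(x) == y for x, y in test) / len(test)
```

The point of the sketch is the workflow, not the model: the labels come for free from the generator, which is what makes synthetic data attractive when real datasets are expensive or biased.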

Mar 15, 2022

Entanglement unlocks scaling for quantum machine learning

Posted in categories: information science, quantum physics, robotics/AI

The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
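For reference, the classical statement (in Wolpert's standard formulation) can be written compactly: for any two learning algorithms and any fixed training-set size, the error averaged uniformly over all possible target functions is identical.

```latex
% Classical No-Free-Lunch theorem (Wolpert's formulation): for any two
% learning algorithms A_1 and A_2 and any training-set size m, the
% off-training-set error averaged uniformly over all target functions f
% is the same:
\sum_{f} P\bigl(\text{error} \mid f, m, A_1\bigr)
  \;=\;
\sum_{f} P\bigl(\text{error} \mid f, m, A_2\bigr)
```

Increasing \(m\) is the only generic lever that improves average performance, which is the sense in which data is the "currency" of classical machine learning.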

Mar 15, 2022

The promise of AI with Demis Hassabis — DeepMind: The Podcast (Season 2, Episode 9)

Posted in categories: information science, media & arts, robotics/AI

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].


Mar 14, 2022

Study highlights the potential of neuromorphic architectures to perform random walk computations

Posted in categories: information science, mathematics, robotics/AI, space

Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation, namely random walk computations. These are computations that involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well-suited for implementing these computations and could thus reach beyond machine learning applications.

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”
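The computation itself is easy to sketch in plain Python. The Monte Carlo estimate below is illustrative only, not Sandia's neuromorphic implementation; the step counts and walker counts are assumed. It checks the textbook property that an unbiased walk's mean squared displacement grows linearly with the number of steps:

```python
import random

random.seed(1)

def random_walk_1d(n_steps):
    """One discrete random walk: each step is +1 or -1 with equal probability."""
    pos = 0
    for _ in range(n_steps):
        pos += random.choice((-1, 1))
    return pos

# Monte Carlo estimate of the mean squared displacement after n_steps.
# For an unbiased walk, E[x^2] = n_steps.
n_steps, n_walkers = 100, 5000
msd = sum(random_walk_1d(n_steps) ** 2 for _ in range(n_walkers)) / n_walkers
```

On a conventional machine each walker is simulated serially; the appeal of neuromorphic hardware is that many such stochastic walkers can, in principle, be driven in parallel by the chip's spiking dynamics.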

Mar 13, 2022

New algorithm could help enable next-generation deep brain stimulation devices

Posted in categories: bioengineering, biotech/medical, information science, neuroscience

Now, an algorithm developed by Brown University bioengineers could be an important step toward such adaptive DBS. The algorithm removes a key hurdle that makes it difficult for DBS systems to sense brain signals while simultaneously delivering stimulation.

“We know that there are signals in the brain associated with disease states, and we’d like to be able to record those signals and use them to adjust neuromodulation therapy automatically,” said David Borton, an assistant professor of biomedical engineering at Brown and corresponding author of a study describing the algorithm. “The problem is that stimulation creates electrical artifacts that corrupt the signals we’re trying to record. So we’ve developed a means of identifying and removing those artifacts, so all that’s left is the signal of interest from the brain.”
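The artifact-removal idea can be sketched with a simple epoch-averaging (template subtraction) toy. To be clear, this is not the Brown team's algorithm (their paper describes a more sophisticated method); the signals, artifact shape, and parameters below are all simulated assumptions:

```python
import math
import random

random.seed(2)

PERIOD = 50     # assumed samples between stimulation pulses
N_PULSES = 40   # number of stimulation epochs recorded

def artifact(i):
    """Assumed artifact shape: a large, sharp, decaying transient per pulse."""
    return 5.0 * math.exp(-i / 5.0)

# Simulated recording: a slow neural rhythm buried under a repeating artifact.
signal = []
for p in range(N_PULSES):
    for i in range(PERIOD):
        t = p * PERIOD + i
        neural = math.sin(2 * math.pi * t / 400)
        signal.append(neural + artifact(i) + random.gauss(0, 0.05))

# Average the time-locked stimulation epochs: the neural rhythm and noise
# average toward zero, leaving a template of the artifact alone.
template = [
    sum(signal[p * PERIOD + i] for p in range(N_PULSES)) / N_PULSES
    for i in range(PERIOD)
]

# Subtract the template from every epoch to recover the neural signal.
cleaned = [signal[t] - template[t % PERIOD] for t in range(len(signal))]

# How well did we do, versus the known ground truth?
truth = [math.sin(2 * math.pi * t / 400) for t in range(len(signal))]

def rms_error(xs):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, truth)) / len(xs))

raw_err = rms_error(signal)      # dominated by the artifact
clean_err = rms_error(cleaned)   # close to the noise floor
```

The sketch works only because the artifact here is perfectly periodic and stationary; handling real, drifting artifacts during continuous stimulation is exactly the hard part the Brown algorithm addresses.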

Mar 13, 2022

AI Overcomes Stumbling Block on Brain-Inspired Hardware

Posted in categories: information science, robotics/AI

Algorithms that use the brain’s communication signal can now work on analog neuromorphic chips, which closely mimic our energy-efficient brains.
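The "brain's communication signal" here means spikes. As a rough illustration of the kind of unit such chips implement (not the algorithm the article covers, and with all parameters assumed), here is a minimal leaky integrate-and-fire neuron:

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire neuron (assumed leak and threshold).

    The membrane potential decays toward rest each step, integrates the
    input, and emits a spike (then resets) when it crosses threshold.
    """
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x      # leaky integration of the input current
        if v >= threshold:
            spikes.append(1)  # fire
            v = 0.0           # reset after the spike
        else:
            spikes.append(0)
    return spikes

weak = lif_run([0.05] * 50)   # weak drive: potential saturates below threshold
strong = lif_run([0.5] * 50)  # strong drive: regular spiking
```

The all-or-nothing, discontinuous spike is precisely what makes gradient-based training awkward, which is the stumbling block the article's headline refers to.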

Mar 12, 2022

Researchers develop hybrid human-machine framework for building smarter AI

Posted in categories: biotech/medical, information science, mathematics, robotics/AI

From chatbots that answer tax questions to algorithms that drive autonomous vehicles and dish out medical diagnoses, artificial intelligence undergirds many aspects of daily life. Creating smarter, more accurate systems requires a hybrid human-machine approach, according to researchers at the University of California, Irvine. In a study published this month in Proceedings of the National Academy of Sciences, they present a new mathematical model that can improve performance by combining human and algorithmic predictions and confidence scores.

“Humans and machine algorithms have complementary strengths and weaknesses. Each uses different sources of information and strategies to make predictions and decisions,” said co-author Mark Steyvers, UCI professor of cognitive sciences. “We show through empirical demonstrations as well as theoretical analyses that humans can improve the predictions of AI even when human accuracy is somewhat below [that of] the AI—and vice versa. And this accuracy is higher than combining predictions from two individuals or two AI algorithms.”

To test the framework, researchers conducted an image classification experiment in which human participants and computer algorithms worked separately to correctly identify distorted pictures of animals and everyday items—chairs, bottles, bicycles, trucks. The human participants ranked their confidence in the accuracy of each image identification as low, medium or high, while the machine classifier generated a continuous score. The results showed large differences in confidence between humans and AI algorithms across images.
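The UCI paper presents a Bayesian model for this combination; as a loose illustration only, a much simpler confidence-weighted vote already captures the flavor. The weight table and `combine()` rule below are invented for this sketch, not taken from the paper:

```python
# Hypothetical mapping from a human's stated confidence to a vote weight.
HUMAN_WEIGHT = {"low": 0.1, "medium": 1.0, "high": 2.0}

def combine(human_label, human_conf, ai_probs):
    """Blend a human's discrete vote with an AI classifier's scores."""
    w = HUMAN_WEIGHT[human_conf]
    scores = {}
    for label, p in ai_probs.items():
        human_vote = 1.0 if label == human_label else 0.0
        scores[label] = w * human_vote + p
    return max(scores, key=scores.get)

# The AI leans toward "bottle"; the human says "chair".
ai_probs = {"chair": 0.30, "bottle": 0.45, "bicycle": 0.15, "truck": 0.10}
pick_high = combine("chair", "high", ai_probs)  # confident human prevails
pick_low = combine("chair", "low", ai_probs)    # uncertain human defers to AI
```

Even this crude rule shows why calibrated confidence matters: the combination only beats either party alone if the weights track who is actually more likely to be right on a given image.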

Mar 11, 2022

Amazon and Virginia Tech launch AI and ML research initiative

Posted in categories: information science, robotics/AI

Amazon and Virginia Tech today announced the establishment of the Amazon – Virginia Tech Initiative for Efficient and Robust Machine Learning.

The initiative will provide an opportunity for doctoral students in the College of Engineering who are conducting AI and ML research to apply for Amazon fellowships, and it will support research efforts led by Virginia Tech faculty members. Under the initiative, Virginia Tech will host an annual public research symposium to share knowledge with the machine learning and related research communities. And in collaboration with Amazon, Virginia Tech will co-host two annual workshops, and training and recruiting events for Virginia Tech students.

“This initiative’s emphasis will be on efficient and robust machine learning, such as ensuring algorithms and models are resistant to errors and adversaries,” said Naren Ramakrishnan, the director of the Sanghani Center and the Thomas L. Phillips Professor of Engineering. “We’re pleased to continue our work with Amazon and expand machine learning research capabilities that could address worldwide industry-focused problems.”

Mar 11, 2022

Will Transformers Take Over Artificial Intelligence?

Posted in categories: information science, robotics/AI

A simple algorithm that revolutionizes how neural networks approach language is now taking on image classification as well. It may not stop there.

Mar 4, 2022

What’s Inside a Black Hole? Quantum Computers May Be Able to Simulate It

Posted in categories: cosmology, information science, quantum physics, robotics/AI

Both quantum computing and machine learning have been touted as the next big computer revolution for a fair while now.

However, experts have pointed out that these techniques aren’t generalized tools – they will deliver a great leap in computing power only for very specialized algorithms, and only rarely will both be applicable to the same problem.
