BLOG

Archive for the ‘information science’ category

Mar 18, 2022

Future evolution: from looks to brains and personality, how will humans change in the next 10,000 years?

Posted in categories: biotech/medical, computing, food, genetics, information science, mobile phones, neuroscience

And going forward, we’ll do this with far more knowledge of what we’re doing, and more control over the genes of our progeny. We can already screen ourselves and embryos for genetic diseases. We could potentially choose embryos for desirable genes, as we do with crops. Direct editing of the DNA of a human embryo has been proven to be possible — but seems morally abhorrent, effectively turning children into subjects of medical experimentation. And yet, if such technologies were proven safe, I could imagine a future where you’d be a bad parent not to give your children the best genes possible.

Computers also provide an entirely new selective pressure. As more and more matches are made on smartphones, we are delegating decisions about what the next generation looks like to the computer algorithms that recommend our potential matches. Digital code now helps choose which genetic code is passed on to future generations, just as it shapes what you stream or buy online. This might sound like dark science fiction, but it’s already happening. Our genes are being curated by computer, just like our playlists. It’s hard to know where this leads, but I wonder whether it’s entirely wise to turn over the future of our species to iPhones, the internet and the companies behind them.

Discussions of human evolution are usually backward looking, as if the greatest triumphs and challenges were in the distant past. But as technology and culture enter a period of accelerating change, our genes will too. Arguably, the most interesting parts of evolution aren’t life’s origins, dinosaurs, or Neanderthals, but what’s happening right now, our present – and our future.

Mar 18, 2022

Artificial intelligence paves the way to discovering new rare-earth compounds

Posted in categories: chemistry, information science, robotics/AI

Artificial intelligence is advancing how scientists explore materials. Researchers from Ames Laboratory and Texas A&M University trained a machine-learning (ML) model to assess the stability of rare-earth compounds. This work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities.

Ames Lab has been a leader in rare-earths research since the middle of the 20th century. Rare earth elements have a wide range of uses including clean energy technologies, energy storage, and permanent magnets. Discovery of new rare-earth compounds is part of a larger effort by scientists to expand access to these materials.

The present approach is based on machine learning, a form of artificial intelligence (AI) driven by computer algorithms that improve through experience and the use of data. The researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.
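To make this concrete, here is a minimal, hypothetical sketch of how such a stability classifier might look in Python. The features, labels, and model choice are illustrative stand-ins, not the authors’ actual RIC 2.0/DFT pipeline.

```python
# Hypothetical sketch: training a model to flag stable rare-earth compounds.
# Features, data, and model choice are illustrative only; the real pipeline
# draws on the RIC 2.0 database and high-throughput DFT calculations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for DFT-derived descriptors, e.g. formation energy per atom,
# mean atomic radius, electronegativity difference (all synthetic here).
X = rng.normal(size=(500, 3))
# Stand-in stability labels (1 = stable, 0 = unstable).
y = (X[:, 0] + 0.5 * X[:, 1] < 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```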

Mar 18, 2022

Artificial neurons help decode cortical signals

Posted in categories: information science, robotics/AI

Russian scientists have proposed a new algorithm for automatic decoding of neural signals and for interpreting the decoder weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic chair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most frequently, brain-computer interfaces use the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed in order to translate neuronal signals into commands. Traditional methods of signal processing require painstaking work on identifying informative features: signal characteristics that, from a researcher’s point of view, are most important for the decoding task.
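As a toy illustration of what such a decoder does (a generic linear decoder on synthetic features, not the authors’ algorithm), the sketch below fits a logistic-regression decoder and prints its weights. Correctly interpreting those weights is exactly the problem the new method targets.

```python
# Minimal sketch of a linear decoder for neural signals, assuming
# precomputed per-channel features; not the algorithm from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 16

# Synthetic stand-in for MEG/EEG features (trials x channels).
X = rng.normal(size=(n_trials, n_channels))
# Synthetic command labels; channel 3 carries the informative signal here.
y = (X[:, 3] + 0.3 * rng.normal(size=n_trials) > 0).astype(int)

decoder = LogisticRegression().fit(X, y)

# Caveat: a large raw weight can reflect noise suppression rather than
# signal, which is why dedicated weight-interpretation methods are needed.
print("decoder weights per channel:", np.round(decoder.coef_[0], 2))
```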

Mar 17, 2022

Mathematical paradoxes demonstrate the limits of AI

Posted in categories: information science, mathematics, robotics/AI

Humans are usually pretty good at recognizing when they get things wrong, but artificial intelligence systems are not. According to a new study, AI generally suffers from inherent limitations due to a century-old mathematical paradox.

Like some people, AI systems often have a degree of confidence that far exceeds their actual abilities. And like an overconfident person, many AI systems don’t know when they’re making mistakes. Sometimes it’s even more difficult for an AI system to realize when it’s making a mistake than to produce a correct result.

Researchers from the University of Cambridge and the University of Oslo say that instability is the Achilles’ heel of modern AI and that a mathematical paradox shows AI’s limitations. Neural networks, the state-of-the-art tool in AI, roughly mimic the links between neurons in the brain. The researchers show that there are problems where stable and accurate neural networks exist, yet no algorithm can produce such a network. Only in specific cases can algorithms compute stable and accurate neural networks.

Mar 17, 2022

Wormholes May Be Lurking in the Universe — Here Are Proposed Ways of Finding Them

Posted in categories: cosmology, information science, physics

Albert Einstein’s theory of general relativity profoundly changed our thinking about fundamental concepts in physics, such as space and time. But it also left us with some deep mysteries. One was black holes, which were only unequivocally detected over the past few years. Another was “wormholes” – bridges connecting different points in spacetime, in theory providing shortcuts for space travellers.

Wormholes are still in the realm of the imagination. But some scientists think we will soon be able to find them, too. Over the past few months, several new studies have suggested intriguing ways forward.

Black holes and wormholes are special types of solutions to Einstein’s equations, arising when the structure of spacetime is strongly bent by gravity. For example, when matter is extremely dense, the fabric of spacetime can become so curved that not even light can escape. This is a black hole.
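For readers who want the formulas, here are the standard textbook equations involved, not taken from the article: the Einstein field equations, and the Schwarzschild line element describing the simplest black hole.

```latex
% Einstein field equations (standard form, with cosmological constant):
\[
  G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
\]
% Schwarzschild metric: the simplest black-hole solution (units G = c = 1).
% Spacetime is so curved at r = 2M (the event horizon) that not even light
% can escape from inside it.
\[
  ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2\, d\Omega^2
\]
```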

Mar 15, 2022

Machine Learning Reimagines the Building Blocks of Computing

Posted in categories: information science, robotics/AI

Traditional algorithms power complicated computational tools like machine learning. A new approach, called “algorithms with predictions,” turns this relationship around, using machine-learning predictions to improve the algorithms themselves.
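A canonical example of this idea is searching a sorted array starting from an ML-predicted position: a good prediction makes the lookup nearly instant, while a bad one degrades gracefully toward an ordinary binary search. The sketch below is a generic illustration of the technique, not code from the research.

```python
# "Algorithms with predictions" sketch: search a sorted list starting from
# a predicted index. With a perfect prediction the search is O(1); with a
# prediction off by eta it costs O(log eta), never much worse than plain
# binary search.
import bisect

def predicted_search(arr, target, predicted_idx):
    """Find target in sorted arr, using predicted_idx as a starting hint."""
    n = len(arr)
    i = max(0, min(predicted_idx, n - 1))
    step = 1
    # Exponential (doubling) search outward to bracket the target...
    if arr[i] <= target:
        lo = i
        while lo + step < n and arr[lo + step] <= target:
            step *= 2
        hi = min(lo + step, n - 1)
    else:
        hi = i
        while hi - step >= 0 and arr[hi - step] > target:
            step *= 2
        lo = max(hi - step, 0)
    # ...then finish with binary search inside the bracket.
    j = bisect.bisect_left(arr, target, lo, hi + 1)
    return j if j <= hi and arr[j] == target else -1

data = list(range(0, 1000, 3))
print(predicted_search(data, 300, predicted_idx=95))  # hint near true index 100
```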

Mar 15, 2022

When It Comes to AI, Can We Ditch the Datasets?

Posted in categories: information science, robotics/AI

Summary: A machine-learning model trained on synthetic data for image classification can rival one trained on traditional datasets.

Source: MIT

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model’s performance.
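The approach can be illustrated with a toy sketch: procedurally generate labeled images, train on them, and evaluate on freshly generated samples. Everything below is an illustrative stand-in, not MIT’s actual generative-model pipeline.

```python
# Toy sketch of training on synthetic data instead of a collected dataset.
# The generator below is a deliberate stand-in for a learned generative model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def synth_image(label, size=16):
    """Generate a noisy toy image: class 1 has a bright horizontal bar."""
    img = rng.normal(0, 1, size=(size, size))
    if label == 1:
        img[size // 2] += 3.0  # the synthetic "damage" signature
    return img.ravel()

labels = rng.integers(0, 2, size=400)
X = np.stack([synth_image(l) for l in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Evaluate on a freshly generated batch standing in for held-out test data.
test_labels = rng.integers(0, 2, size=100)
X_test = np.stack([synth_image(l) for l in test_labels])
print(f"held-out accuracy: {clf.score(X_test, test_labels):.2f}")
```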

Mar 15, 2022

Entanglement unlocks scaling for quantum machine learning

Posted in categories: information science, quantum physics, robotics/AI

The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem, showcasing the power of data in classical machine learning, is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
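In schematic form (a standard statement of the classical theorem, not the paper’s quantum generalization):

```latex
% Classical No-Free-Lunch theorem, schematic form: for any two learning
% algorithms A_1 and A_2, error averaged uniformly over all possible
% target functions f is identical, for any fixed training-set size n:
\[
  \sum_{f} P\big(\text{error} \mid f, n, A_1\big)
  \;=\;
  \sum_{f} P\big(\text{error} \mid f, n, A_2\big)
\]
% For any fixed algorithm, more data shrinks the set of functions
% consistent with the sample, which is why average performance improves
% with n even though no algorithm beats another across all functions.
```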

Mar 15, 2022

The promise of AI with Demis Hassabis — DeepMind: The Podcast (Season 2, Episode 9)

Posted in categories: information science, media & arts, robotics/AI

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].


Mar 14, 2022

Study highlights the potential of neuromorphic architectures to perform random walk computations

Posted in categories: information science, mathematics, robotics/AI, space

Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation: random walk computations, which involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well-suited for implementing these computations and could thus reach beyond machine learning applications.
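For concreteness, here is a plain-Python Monte Carlo version of such a computation. On neuromorphic hardware, each walker’s random steps would instead be generated by spiking circuitry; the stand-in below only illustrates the math.

```python
# Illustrative Monte Carlo random walk: each walker takes a succession of
# random +/-1 steps, and many independent walkers together estimate a
# distribution (here, the spread of a 1-D diffusion process).
import random

def random_walk(n_steps):
    """Return the final position of a 1-D walk of n_steps unit steps."""
    position = 0
    for _ in range(n_steps):
        position += random.choice((-1, 1))
    return position

walks = [random_walk(100) for _ in range(10_000)]
mean = sum(walks) / len(walks)
mean_sq = sum(w * w for w in walks) / len(walks)
print(f"mean displacement ~ {mean:.2f} (expect 0)")
print(f"mean squared displacement ~ {mean_sq:.1f} (expect ~100 steps)")
```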

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”