BLOG

Archive for the ‘information science’ category: Page 174

Apr 14, 2020

Researchers design intelligent microsystem for faster, more sustainable industrial chemistry

Posted in categories: chemistry, engineering, information science, robotics/AI, sustainability

The synthesis of plastic precursors, such as polymers, involves specialized catalysts. However, the traditional batch-based method of finding and screening the right ones for a given result consumes liters of solvent, generates large quantities of chemical waste, and is an expensive, time-consuming process involving multiple trials.

Ryan Hartman, professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering, and his laboratory developed a lab-based “intelligent microsystem” employing machine learning for reaction modeling, which shows promise for eliminating this costly process and minimizing environmental harm.

In their research, “Combining automated microfluidic experimentation with machine learning for efficient polymerization design,” published in Nature Machine Intelligence, the collaborators, including doctoral student Benjamin Rizkin, employed a custom-designed, rapidly prototyped microreactor in conjunction with automation and in situ infrared thermography to study exothermic (heat-generating) polymerization—reactions that are notoriously difficult to control when limited experimental kinetic data are available. By pairing efficient microfluidic technology with machine learning algorithms to obtain high-fidelity datasets based on minimal iterations, they were able to reduce chemical waste by two orders of magnitude and cut catalytic discovery times from weeks to hours.
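The closed-loop idea can be shown in miniature. The sketch below is a hypothetical active-learning loop, not the authors' actual system: a toy `run_experiment` function stands in for one automated microreactor run, and a crude nearest-neighbour surrogate stands in for the paper's machine learning model, choosing each next condition based on the results so far instead of screening the whole batch.

```python
# Hypothetical active-learning loop for catalyst/condition screening.
# run_experiment is a toy response surface standing in for one
# automated microreactor run measured by in situ IR thermography.
def run_experiment(temp_c, catalyst_conc):
    # Illustrative yield surface with a single optimum near (60, 0.5).
    return -((temp_c - 60) ** 2) / 400 - ((catalyst_conc - 0.5) ** 2) * 8 + 1.0

def predicted_yield(x, observed):
    # 1-nearest-neighbour surrogate: predict the yield of the closest
    # already-tested condition (a stand-in for a real learned model).
    nearest = min(observed, key=lambda o: (o[0][0] - x[0]) ** 2 + (o[0][1] - x[1]) ** 2)
    return nearest[1]

candidates = [(t, c) for t in range(20, 101, 5) for c in (0.1, 0.3, 0.5, 0.7, 0.9)]
observed = [((40, 0.1), run_experiment(40, 0.1))]  # one seed experiment

for _ in range(10):  # small budget: 10 guided runs instead of a full batch screen
    best = max(candidates, key=lambda x: predicted_yield(x, observed))
    observed.append((best, run_experiment(*best)))
    candidates.remove(best)

best_cond, best_yield = max(observed, key=lambda o: o[1])
print(best_cond, round(best_yield, 3))
```

The point of the sketch is the loop structure, not the surrogate: each experiment's result immediately informs which condition is tried next, which is how a handful of microfluidic runs can replace a large batch screen.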

Apr 13, 2020

Using artificial intelligence to search for new exotic particles

Posted in categories: entertainment, information science, mathematics, particle physics, robotics/AI, transportation

Nowadays, artificial neural networks have an impact on many areas of our day-to-day lives. They are used for a wide variety of complex tasks, such as driving cars, performing speech recognition (for example, Siri, Cortana, Alexa), suggesting shopping items and trends, or improving visual effects in movies (e.g., animated characters such as Thanos in Marvel's Avengers: Infinity War).

Traditionally, algorithms are handcrafted to solve complex tasks. This requires experts to spend a significant amount of time to identify the optimal strategies for various situations. Artificial neural networks — inspired by interconnected neurons in the brain — can automatically learn from data a close-to-optimal solution for the given objective. Often, the automated learning or “training” required to obtain these solutions is “supervised” through the use of supplementary information provided by an expert. Other approaches are “unsupervised” and can identify patterns in the data. The mathematical theory behind artificial neural networks has evolved over several decades, yet only recently have we developed our understanding of how to train them efficiently. The required calculations are very similar to those performed by standard video graphics cards (that contain a graphics processing unit or GPU) when rendering three-dimensional scenes in video games.
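The "supervised" case can be shown at toy scale. Below, a single artificial neuron learns the rule y = 2x + 1 from five expert-labelled pairs by gradient descent — a minimal sketch of the repetitive arithmetic that, performed billions of times over, is exactly the kind of workload GPUs accelerate.

```python
import random

# Minimal supervised training: one neuron (weight w, bias b) fits
# y = 2x + 1 from labelled examples via stochastic gradient descent.
random.seed(1)
w, b, lr = random.random(), random.random(), 0.05
data = [(x, 2 * x + 1) for x in (-2, -1, 0, 1, 2)]  # "expert-labelled" pairs

for _ in range(500):                 # training epochs
    for x, y in data:
        err = (w * x + b) - y        # prediction error on one example
        w -= lr * err * x            # gradient step for the weight
        b -= lr * err                # gradient step for the bias

print(round(w, 2), round(b, 2))      # converges close to (2.0, 1.0)
```

Real networks stack millions of such units and train on far larger labelled datasets, but every step is the same multiply-and-accumulate pattern shown here.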

Apr 12, 2020

For First Time in History, AI Learns to Translate Silent Human Brain Activity into Text for Locked-In Syndrome Patients

Posted in categories: biotech/medical, information science, robotics/AI

Neuroscientists have just created an artificially intelligent algorithm that detects human brain activity and translates it into English sentences—and they said it was the first time such translations could be done at a speed comparable to natural human speech.

Apr 12, 2020

Coronavirus Is Changing How We Live, Work, and Use Tech—Permanently

Posted in categories: biotech/medical, information science, mathematics

Within a week, many world leaders went from downplaying the seriousness of coronavirus to declaring a state of emergency. Even the most capable of nations seem to be simultaneously confused and exasperated, with delayed responses revealing incompetence and inefficiency the world over.

This raises the question: why is it so difficult for us to comprehend the scale of what an unmitigated global pandemic could do? The answer likely relates to how we process abstract concepts like exponential growth. Part of the reason we've struggled so much to apply basic math to our practical environment is that humans think linearly. But like much of technology, biological systems such as viruses can grow exponentially.
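The gap between linear intuition and exponential reality is easy to quantify. The toy comparison below (illustrative numbers, not epidemiological data) pits a fixed 100 new cases per day against 30% growth per day, both starting from 100 cases:

```python
# Linear intuition vs exponential reality over 30 days.
linear = exponential = 100.0
for day in range(30):
    linear += 100          # linear: a fixed daily increment
    exponential *= 1.3     # exponential: a fixed daily *ratio*
print(int(linear), int(exponential))  # the ratio process ends vastly larger
```

After a month the linear process has a few thousand cases while the exponential one has hundreds of thousands — the same mismatch that makes early, small outbreak numbers so deceptively reassuring.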

As we scramble to contain and fight the pandemic, we’ve turned to technology as our saving grace. In doing so, we’ve effectively hit a “fast-forward” button on many tech trends that were already in place. From remote work and virtual events to virus-monitoring big data, technologies that were perhaps only familiar to a fringe tech community are now entering center stage—and as tends to be the case with wartime responses, these changes are likely here to stay.

Apr 9, 2020

Computers Evolve a New Path Toward Human Intelligence

Posted in categories: information science, robotics/AI

By ignoring their goals, evolutionary algorithms have solved longstanding challenges in artificial intelligence.

Apr 9, 2020

DARPA snags Intel to lead its machine learning security tech

Posted in categories: cybercrime/malcode, information science, military, robotics/AI

Chip maker Intel has been chosen to lead a new initiative run by DARPA, the U.S. military's research wing, aimed at improving cyber-defenses against deception attacks on machine learning models.

Machine learning is a kind of artificial intelligence that allows systems to improve over time with new data and experiences. One of its most common use cases today is object recognition, such as taking a photo and describing what’s in it. That can help those with impaired vision to know what’s in a photo if they can’t see it, for example, but it also can be used by other computers, such as autonomous vehicles, to identify what’s on the road.

But deception attacks, although rare, can meddle with machine learning algorithms. Subtle changes to real-world objects can, in the case of a self-driving vehicle, have disastrous consequences.

Apr 6, 2020

AI reveals that mice’s faces express a range of emotions — just like humans

Posted in categories: information science, robotics/AI

AI has revealed that mice have a range of facial expressions that show how they feel — offering fresh clues about how emotional responses arise in human brains.

Scientists at the Max Planck Institute of Neurobiology in Germany made the discovery by recording the faces of lab mice when they were exposed to different stimuli, such as sweet flavors and electric shocks. The researchers then used machine learning algorithms to analyze how the rodents’ faces changed when they experienced different feelings.
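The classification step can be sketched abstractly. In this hypothetical miniature (the feature names, numbers, and nearest-centroid method are all invented for illustration, not taken from the study), each video frame becomes a small feature vector and is assigned to the closest learned "expression prototype" for a stimulus:

```python
# Hypothetical expression classifier: nearest-centroid over toy
# facial-feature vectors (ear angle, nose bulge, eye squint).
prototypes = {
    "pleasure": [0.8, 0.2, 0.1],   # invented centroids, not real data
    "pain":     [0.1, 0.9, 0.7],
    "disgust":  [0.3, 0.6, 0.9],
}

def classify(frame):
    # Assign the frame to the prototype with smallest squared distance.
    return min(prototypes, key=lambda label: sum(
        (p - f) ** 2 for p, f in zip(prototypes[label], frame)))

print(classify([0.7, 0.3, 0.2]))  # lands nearest the "pleasure" centroid
```

The real pipeline extracts features from video automatically rather than by hand, but the underlying question is the same: do frames recorded under one stimulus cluster apart from frames recorded under another?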

Apr 6, 2020

A Robot Stand-Up Comedian Learns The Nuts And Bolts Of Comedy

Posted in categories: information science, robotics/AI

Social roboticist Heather Knight sees robots and entertainment as a research-rich coupling. So she programmed a charming humanoid robot named DATA with jokes, and equipped it with sensors and algorithmic capabilities to help with timing and with gauging a crowd. Then Knight and DATA hit the road on an international robot stand-up comedy tour. Their act landed stage time at a TED conference, and Knight was profiled in Forbes 30 Under 30. Watching DATA perform is much like watching an amateur stand-up comedian cutting their chops at an open mic night: light comedy with a sweet but wooden delivery.

Knight’s goal is specific:

Apr 6, 2020

AI techniques used to improve battery health and safety

Posted in categories: health, information science, mobile phones, robotics/AI, transportation

Researchers have designed a machine learning method that can predict battery health with 10x higher accuracy than the current industry standard, which could aid in the development of safer and more reliable batteries for electric vehicles and consumer electronics.

The researchers, from Cambridge and Newcastle Universities, have designed a new way to monitor batteries by sending electrical pulses into them and measuring the response. The measurements are then processed by a machine learning algorithm to predict the battery's health and useful lifespan. Their method is non-invasive and is a simple add-on to any existing battery system. The results are reported in the journal Nature Communications.

Predicting the state of health and the remaining useful lifespan of lithium-ion batteries is one of the big problems limiting widespread adoption of electric vehicles: it's also a familiar annoyance to mobile phone users. Over time, battery performance degrades via a complex network of subtle chemical processes. Individually, each of these processes doesn't have much of an effect on battery performance, but collectively they can severely shorten a battery's performance and lifespan.
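The pulse-and-measure idea reduces, at its simplest, to regression: summarize the voltage response to a test pulse with features and fit a model mapping those features to state of health. The sketch below uses one invented feature (internal resistance), made-up numbers, and plain least squares — the paper's actual model is far more sophisticated, but the pipeline shape is the same.

```python
# Toy battery-health regression: fit health (%) against a single
# pulse-response feature, internal resistance (ohms). All numbers
# are invented for illustration.
cells = [(0.05, 98), (0.08, 90), (0.12, 78), (0.20, 55)]  # (ohm, health %)

xs = [r for r, h in cells]
ys = [h for r, h in cells]
n = len(cells)

# Ordinary least-squares slope and intercept.
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
intercept = (sum(ys) - slope * sum(xs)) / n

predicted = slope * 0.10 + intercept   # health estimate for a 0.10-ohm cell
print(round(predicted, 1))
```

Because the electrical pulse is tiny and externally applied, a model like this can be bolted onto an existing battery management system without opening the cell — the "non-invasive add-on" property the researchers highlight.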

Apr 6, 2020

Alphabet’s DeepMind masters Atari games

Posted in categories: entertainment, information science, robotics/AI

In order to better solve complex challenges at the dawn of the third decade of the 21st century, Alphabet Inc. has tapped into relics dating to the 1980s: video games.

The parent company of Google reported this week that its DeepMind Technologies artificial intelligence unit has successfully learned how to play 57 Atari video games. And the AI plays better than any human.
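The core mechanism behind game-playing agents — learning from reward alone, with no one explaining the rules — can be shown on a problem vastly smaller than Atari. The sketch below is generic tabular Q-learning on an invented 5-cell walk-to-the-goal game, not DeepMind's method: the agent discovers by trial and error that moving right earns the reward.

```python
import random

# Tabular Q-learning on a toy game: a 5-cell strip where the only
# reward is at cell 4. q maps (state, action) to an estimated value.
random.seed(0)
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}

for _ in range(200):                               # training episodes
    s = 0
    while s != 4:
        if random.random() < 0.2:                  # explore sometimes
            a = random.choice((-1, 1))
        else:                                      # otherwise act greedily
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = min(4, max(0, s + a))                 # environment step
        reward = 1.0 if s2 == 4 else 0.0
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        q[(s, a)] += 0.5 * (reward + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2

policy = [max((-1, 1), key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)  # greedy action per cell; 1 means "move right"
```

Scaling this idea up — replacing the lookup table with a deep network reading raw pixels, and the 5-cell strip with an Atari emulator — is, in broad strokes, the lineage that leads to agents mastering all 57 games.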

Continue reading “Alphabet’s DeepMind masters Atari games” »