BLOG

Archive for the ‘robotics/AI’ category: Page 1781

Jul 18, 2019

Tiny vibration-powered robots the size of the world’s smallest ant

Posted in categories: 3D printing, biotech/medical, robotics/AI

Researchers have created a new type of tiny 3D-printed robot that moves by harnessing vibration from piezoelectric actuators, ultrasound sources or even tiny speakers. Swarms of these “micro-bristle-bots” might work together to sense environmental changes, move materials—or perhaps one day repair injuries inside the human body.

The micro-bristle-bots respond to different frequencies depending on their configurations, allowing researchers to control individual bots by adjusting the vibration. Approximately two millimeters long—about the size of the world’s smallest ant—the bots can cover four times their own length in a second despite the physical limitations of their small size.
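To make the frequency-selective control concrete, here is a minimal toy model in Python (not from the paper; the resonant frequencies, bandwidth, and speeds are invented). Each bot advances only when the shared vibration source is driven near its own resonant frequency, so sweeping the drive frequency steers different members of the swarm.

```python
# Toy model of frequency-selective control of "micro-bristle-bots".
# Each bot responds (moves) only when the drive frequency is close to its
# own resonant frequency, so a shared vibration source can steer individual
# bots or sub-swarms. All numbers here are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class BristleBot:
    name: str
    resonant_hz: float        # frequency the bot's bristle geometry responds to
    speed_mm_s: float = 8.0   # ~4 body lengths/s for a 2 mm bot
    position_mm: float = 0.0

    def step(self, drive_hz: float, dt_s: float, bandwidth_hz: float = 500.0):
        """Advance the bot if the drive frequency falls inside its response band."""
        if abs(drive_hz - self.resonant_hz) <= bandwidth_hz:
            self.position_mm += self.speed_mm_s * dt_s

swarm = [BristleBot("A", resonant_hz=10_000),
         BristleBot("B", resonant_hz=15_000),
         BristleBot("C", resonant_hz=20_000)]

# Sweep the drive frequency: only the bot tuned to each band advances.
for drive_hz in (10_000, 20_000):
    for _ in range(100):                 # 100 steps of 10 ms = 1 s of actuation
        for bot in swarm:
            bot.step(drive_hz, dt_s=0.01)

for bot in swarm:
    print(f"bot {bot.name}: {bot.position_mm:.1f} mm travelled")
```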


Jul 18, 2019

First programmable memristor computer aims to bring AI processing down from the cloud

Posted in categories: mobile phones, robotics/AI

The first programmable memristor computer—not just a memristor array operated through an external computer—has been developed at the University of Michigan.

It could lead to the processing of artificial intelligence directly on small, energy-constrained devices such as smartphones and sensors. A smartphone AI processor would mean that voice commands would no longer have to be sent to the cloud for interpretation, speeding up response time.

“Everyone wants to put an AI processor on smartphones, but you don’t want your cell phone battery to drain very quickly,” said Wei Lu, U-M professor of electrical and computer engineering and senior author of the study in Nature Electronics.
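The appeal of memristors for on-device AI is that a crossbar of programmable conductances computes a neural-network layer’s multiply-accumulate operations in the memory itself. The sketch below is an idealized illustration of that principle, not the Michigan team’s hardware: conductances stand in for weights, applied voltages for activations, and the summed column currents for the layer’s outputs.

```python
# Minimal sketch of why memristor arrays suit AI workloads: a crossbar of
# programmable conductances performs a matrix-vector multiply "in memory"
# (currents sum along each column per Ohm's and Kirchhoff's laws), so the
# weights never have to be shuttled to a separate processor. Idealized model;
# real devices add noise, wire resistance, and limited conductance levels.

import numpy as np

rng = np.random.default_rng(0)

# Weight matrix of a tiny neural-network layer, mapped to conductances (siemens).
weights = rng.uniform(0.0, 1.0, size=(4, 3))            # 4 inputs -> 3 outputs
g_max = 100e-6                                           # max device conductance
conductances = weights * g_max                           # programmed memristor states

input_voltages = np.array([0.2, 0.1, 0.0, 0.3])          # input activations as volts

# Each output column current is the dot product of voltages and conductances.
column_currents = input_voltages @ conductances          # analog MAC in one step

# Reading the currents back out recovers the layer's weighted sums.
print("column currents (A):", column_currents)
print("equivalent weighted sums:", column_currents / g_max)
```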

Jul 18, 2019

Electronic chip mimics the brain to make memories in a flash

Posted in categories: biotech/medical, cyborgs, genetics, robotics/AI, transhumanism

Researchers from RMIT University have drawn inspiration from optogenetics, an emerging tool in biotechnology, to develop a device that replicates the way the brain stores and loses information. Optogenetics allows scientists to delve into the body’s electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way neurons work to store and delete information in the brain. Research team leader Dr. Sumeet Walia said the technology has applications in artificial intelligence (AI) technology that can harness the brain’s full sophisticated functionality.

“Our optogenetically-inspired chip imitates the fundamental biology of nature’s best computer—the human brain,” Walia said. “Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently. We’re able to simulate the brain’s neural approach simply by shining different colors onto our chip. This technology takes us further on the path towards fast, efficient and secure light-based computing. It also brings us an important step closer to the realization of a bionic brain—a brain-on-a-chip that can learn from its environment just like humans do.”
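As a rough illustration of the store/delete behaviour described above, the toy model below treats a single light-programmed cell as an analog memory whose resistance is nudged down by one assumed wavelength band and back up by another. The wavelengths, resistance values, and step sizes are placeholders, not RMIT’s device parameters.

```python
# Toy model (not RMIT's actual device physics) of a light-programmed memory
# cell: one assumed wavelength gradually lowers the cell's resistance
# ("write"), another restores it ("erase"), so a single element can store,
# update, and delete an analog state much like a synapse.

class PhotoMemoryCell:
    def __init__(self, r_high=1e6, r_low=1e4):
        self.r_high = r_high          # fully erased resistance (ohms)
        self.r_low = r_low            # fully written resistance (ohms)
        self.state = 0.0              # 0.0 = erased, 1.0 = fully written

    def expose(self, wavelength_nm, pulses=1):
        """Shift the stored state depending on the colour of light applied."""
        for _ in range(pulses):
            if wavelength_nm < 500:               # assumed "write" band (e.g. blue)
                self.state = min(1.0, self.state + 0.1)
            else:                                 # assumed "erase" band (e.g. red)
                self.state = max(0.0, self.state - 0.1)

    @property
    def resistance(self):
        # Interpolate between the erased and written resistance levels.
        return self.r_high + (self.r_low - self.r_high) * self.state


cell = PhotoMemoryCell()
cell.expose(wavelength_nm=450, pulses=5)   # "shine blue" to store information
print(f"after writing: {cell.resistance:,.0f} ohms")
cell.expose(wavelength_nm=650, pulses=5)   # "shine red" to delete it again
print(f"after erasing: {cell.resistance:,.0f} ohms")
```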

Jul 18, 2019

Elon Musk wants to connect computers to your brain so we can keep up with robots

Posted in categories: Elon Musk, robotics/AI

It’s called “neural lace.”

Jul 17, 2019

This Chatbot has Over 660 Million Users—and It Wants to Be Their Best Friend

Posted in category: robotics/AI

Unlike XiaoIce, most of your human friends don’t possess infinite reserves of patience to comfort you if you’re sad or talk about your favorite band.

Jul 17, 2019

Hajime Robot Restaurant

Posted in categories: food, robotics/AI

This restaurant hired robot waiters to serve your food.

Jul 17, 2019

Australian Researchers Have Just Released The World’s First AI-Developed Vaccine

Posted in categories: biotech/medical, information science, robotics/AI, space

A team at Flinders University in South Australia has developed a new vaccine believed to be the first human drug in the world to be completely designed by artificial intelligence (AI).

While drugs have been designed using computers before, this vaccine went one step further, having been independently created by an AI program called SAM (Search Algorithm for Ligands).

Flinders University Professor Nikolai Petrovsky, who led the development, told Business Insider Australia that its name is derived from what it was tasked to do: search the universe of all conceivable compounds to find a good human drug (also called a ligand).
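As a loose sketch of the workflow described here (the actual SAM program is not public), the snippet below enumerates a space of candidate compounds, scores each with a placeholder surrogate model, and keeps the top-ranked candidates for laboratory synthesis and testing. All names and numbers are hypothetical.

```python
# Rough sketch of the idea described above: an automated search over a large
# candidate-compound space, keeping the candidates a scoring model predicts
# will work best as human drugs (ligands). The features and scoring function
# are placeholders, not the actual SAM program.

import numpy as np

rng = np.random.default_rng(7)

def predicted_adjuvant_score(descriptor):
    """Placeholder surrogate model standing in for a trained predictor."""
    target = np.linspace(-1.0, 1.0, descriptor.size)   # invented "ideal" profile
    return -np.mean((descriptor - target) ** 2)

# Enumerate (here: randomly sample) a large space of candidate compounds.
candidates = {f"compound_{i:05d}": rng.uniform(-2, 2, size=8) for i in range(50_000)}

ranked = sorted(candidates.items(),
                key=lambda item: predicted_adjuvant_score(item[1]),
                reverse=True)

print("top candidates to synthesise and test in the lab:")
for name, descriptor in ranked[:5]:
    print(f"  {name}  score={predicted_adjuvant_score(descriptor):.3f}")
```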

Jul 17, 2019

Boston Dynamics’ robots are preparing to leave the lab — is the world ready?

Posted in category: robotics/AI

Boston Dynamics’ robots get ready to walk the walk.

Jul 17, 2019

Machine Learning Identifies Potential Anti-Cancer Molecules in Food

Posted in categories: biotech/medical, food, internet, robotics/AI

The internet is rife with myths and dubious claims about certain foods and their anti-cancer properties; we have all seen articles of questionable scientific merit on social media suggesting that such-and-such foods can cure cancer. A new study offers a different kind of insight into how effective foods might actually be in fighting cancer [1].

Investigating molecules in food with machine learning

There is no doubt that many foods contain a myriad of active molecules, so perhaps some of these food myths hold a grain of truth. A team of researchers decided to do some real myth-busting and put a variety of bioactive molecules found in foods to the test to see whether they might help combat cancer.
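The general pattern of such studies can be sketched as follows, with synthetic data standing in for real molecular descriptors: train a classifier on compounds with known anti-cancer activity, then score molecules found in foods. This is an illustration of the approach, not the study’s actual pipeline.

```python
# Illustrative sketch only -- not the study's actual method. Train a classifier
# on molecules with known anti-cancer activity, then score food-derived
# molecules. The descriptors and data below are synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic "molecular descriptor" vectors for known actives and inactives.
n_features = 16
actives = rng.normal(loc=1.0, size=(200, n_features))
inactives = rng.normal(loc=0.0, size=(200, n_features))
X = np.vstack([actives, inactives])
y = np.array([1] * 200 + [0] * 200)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Hypothetical food-derived molecules, described with the same features.
food_molecules = {
    "molecule_from_tea": rng.normal(loc=0.9, size=n_features),
    "molecule_from_carrot": rng.normal(loc=0.1, size=n_features),
}

for name, descriptor in food_molecules.items():
    prob = model.predict_proba(descriptor.reshape(1, -1))[0, 1]
    print(f"{name}: predicted probability of anti-cancer activity = {prob:.2f}")
```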

Jul 17, 2019

Towards reconstructing intelligible speech from the human auditory cortex

Posted in categories: biotech/medical, cyborgs, information science, robotics/AI

Auditory stimulus reconstruction is a technique that finds the best approximation of the acoustic stimulus from the population of evoked neural activity. Reconstructing speech from the human auditory cortex creates the possibility of a speech neuroprosthetic to establish direct communication with the brain, and it has been shown to be possible in both overt and covert conditions. However, the low quality of the reconstructed speech has severely limited the utility of this method for brain-computer interface (BCI) applications. To advance the state of the art in speech neuroprosthesis, we combined recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex. We investigated the dependence of reconstruction accuracy on linear and nonlinear (deep neural network) regression methods and on the acoustic representation used as the target of reconstruction, including the auditory spectrogram and speech synthesis parameters. In addition, we compared the reconstruction accuracy from low and high neural frequency ranges. Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving intelligibility by 65% over the baseline method, which used linear regression to reconstruct the auditory spectrogram. These results demonstrate the efficacy of deep learning and speech synthesis algorithms for designing the next generation of speech BCI systems, which not only can restore communication for paralyzed patients but also have the potential to transform human-computer interaction technologies.
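The core comparison in the abstract, linear versus nonlinear (deep neural network) regression from neural activity to an acoustic representation, can be sketched on synthetic data as below. The electrode counts, spectrogram bins, and mapping are invented for illustration; the paper’s recordings and speech-synthesis targets are far richer.

```python
# Schematic sketch of the abstract's comparison, on synthetic data: reconstruct
# spectrogram-like targets from "neural" features with (a) linear regression
# (the baseline) and (b) a small nonlinear deep network. All shapes and data
# are invented for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_frames, n_electrodes, n_spec_bins = 2000, 64, 32

# Synthetic "neural activity" and a nonlinear mapping to spectrogram bins.
neural = rng.normal(size=(n_frames, n_electrodes))
mixing = 0.2 * rng.normal(size=(n_electrodes, n_spec_bins))
spectrogram = np.tanh(neural @ mixing) + 0.1 * rng.normal(size=(n_frames, n_spec_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.25, random_state=0)

linear = LinearRegression().fit(X_train, y_train)           # baseline method
dnn = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                   random_state=0).fit(X_train, y_train)    # nonlinear method

print("linear regression R^2:", round(linear.score(X_test, y_test), 3))
print("deep network R^2:     ", round(dnn.score(X_test, y_test), 3))
```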