BLOG

Archive for the ‘robotics/AI’ category: Page 564

Apr 7, 2017

Amazon’s delivery drone tests reportedly involve a ‘simulated dog’

Posted by in categories: drones, robotics/AI

Amazon is using a “simulated dog” to test its delivery drones, according to IBTimes.

The e-commerce giant wants to use drones to deliver parcels to customers in less than 30 minutes but it clearly has some concerns about how dogs might interfere.

At least one simulated dog is being used to “help Amazon see how UAVs [unmanned aerial vehicles] would respond to a canine trying to protect its territory,” according to IBTimes.

Continue reading “Amazon’s delivery drone tests reportedly involve a ‘simulated dog’” »

Apr 7, 2017

OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s

Posted by in categories: biological, Elon Musk, information science, robotics/AI

OpenAI vs. DeepMind playing River Raid on Atari.

AI research has a long history of repurposing old ideas that have gone out of style. Now researchers at Elon Musk’s open source AI project have revisited “neuroevolution,” a field that has been around since the 1980s, and achieved state-of-the-art results.

The group, led by OpenAI’s research director Ilya Sutskever, has been exploring the use of a subset of algorithms from this field, called “evolution strategies,” which are aimed at solving optimization problems.
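
Evolution strategies are simple enough to sketch in a few lines: perturb the policy parameters with Gaussian noise, score each perturbed copy, and nudge the parameters toward the better-scoring noise directions, with no backpropagation anywhere. The sketch below is a toy illustration, not OpenAI's actual code; the quadratic objective merely stands in for an Atari episode score.

```python
import random

def evolution_strategies(fitness, theta, iterations=300, pop=50,
                         sigma=0.1, alpha=0.02, seed=0):
    """Basic evolution-strategies loop: sample Gaussian perturbations of
    the parameters, score each one, and step toward the fitness-weighted
    average of the noise directions."""
    rng = random.Random(seed)
    n = len(theta)
    for _ in range(iterations):
        noises, scores = [], []
        for _ in range(pop):
            eps = [rng.gauss(0, 1) for _ in range(n)]
            noises.append(eps)
            scores.append(fitness([t + sigma * e for t, e in zip(theta, eps)]))
        mean = sum(scores) / pop
        std = (sum((s - mean) ** 2 for s in scores) / pop) ** 0.5 or 1.0
        shaped = [(s - mean) / std for s in scores]  # normalize the scores
        theta = [t + alpha / (pop * sigma) * sum(shaped[i] * noises[i][j]
                                                 for i in range(pop))
                 for j, t in enumerate(theta)]
    return theta

# Toy quadratic objective standing in for an Atari episode return:
target = [0.5, -0.3, 0.8]
fit = lambda x: -sum((a - b) ** 2 for a, b in zip(x, target))
best = evolution_strategies(fit, [0.0, 0.0, 0.0])
```

Because every perturbation is evaluated independently, the loop parallelizes across workers almost perfectly, which is a large part of what made the approach competitive at scale.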

Continue reading “OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s” »

Apr 6, 2017

AI Learns to Read Sentiment Without Being Trained to Do So

Posted by in categories: Elon Musk, information science, robotics/AI

OpenAI researchers were surprised to discover that a neural network trained to predict the next character in texts from Amazon reviews taught itself to analyze sentiment. This unsupervised learning is the dream of machine learning researchers.

Much of today’s artificial intelligence (AI) relies on machine learning, in which machines learn from a particular data set and then respond or react autonomously. Machine learning algorithms, in a sense, predict outcomes using previously established values. Researchers from OpenAI discovered that a machine learning system they created to predict the next character in the text of Amazon reviews developed into an unsupervised system that could learn representations of sentiment.

“We were very surprised that our model learned an interpretable feature, and that simply predicting the next character in Amazon reviews resulted in discovering the concept of sentiment,” OpenAI, a non-profit AI research company whose investors include Elon Musk, Peter Thiel, and Sam Altman, explained on their blog. OpenAI’s neural network was able to train itself to analyze sentiment by classifying reviews as either positive or negative, and was able to generate text with a desired sentiment.
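
The result amounts to a linear "probe": one unit of the learned representation tracks sentiment so well that a simple threshold on it classifies reviews. Here is a toy sketch of that probing step; the feature vectors and the sentiment-carrying dimension below are invented stand-ins, not real network activations.

```python
def best_unit_threshold(features, labels):
    """Pick the single feature dimension and threshold that classify the
    labeled examples best: a crude linear probe on a learned representation."""
    n_dims = len(features[0])
    best = (0, 0.0, -1)  # (dimension, threshold, accuracy)
    for d in range(n_dims):
        vals = sorted(set(f[d] for f in features))
        for i in range(len(vals) - 1):
            thr = (vals[i] + vals[i + 1]) / 2
            acc = sum((f[d] > thr) == (y == 1)
                      for f, y in zip(features, labels)) / len(labels)
            if acc > best[2]:
                best = (d, thr, acc)
    return best

# Stand-in activations: only dimension 2 tracks sentiment cleanly.
feats = [[0.1, -0.4, 0.9], [-0.2, 0.2, 0.8], [0.3, 0.1, -0.7], [0.0, -0.1, -0.9]]
labels = [1, 1, 0, 0]  # 1 = positive review, 0 = negative
dim, thr, acc = best_unit_threshold(feats, labels)
```

In OpenAI's setting the representation itself came from next-character prediction alone; the labeled reviews are needed only to locate and threshold the unit, which is why the learning counts as unsupervised.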

Continue reading “AI Learns to Read Sentiment Without Being Trained to Do So” »

Apr 6, 2017

Human-Level AI Are Probably A Lot Closer Than You Think

Posted by in categories: business, robotics/AI, singularity

Although some thinkers use the term “singularity” to refer to any dramatic paradigm shift in the way we think and perceive our reality, in most conversations The Singularity refers to the point at which AI surpasses human intelligence. What that point looks like, though, is subject to debate, as is the date when it will happen.

In a recent interview with Inverse, Stanford University business and energy and earth sciences graduate student Damien Scott provided his definition of singularity: the moment when humans can no longer predict the motives of AI. Many people envision singularity as some apocalyptic moment of truth with a clear point of epiphany. Scott doesn’t see it that way.

“We’ll start to see narrow artificial intelligence domains that keep getting better than the best human,” Scott told Inverse. Calculators already outperform us, and there’s evidence that within two to three years, AI will outperform the best radiologists in the world. In other words, the singularity is already happening across each specialty and industry touched by AI — which, soon enough, will be all of them. If you’re of the mind that the singularity means catastrophe for humans, this makes the process akin to the proverbial frog in a pot of water slowly coming to a boil: the change happens so gradually that we don’t notice it has already begun.

Continue reading “Human-Level AI Are Probably A Lot Closer Than You Think” »

Apr 6, 2017

Robots exchange ‘genetic material’ in mating experiment to evolve

Posted by in categories: genetics, robotics/AI

Researchers from Vassar College expanded on efforts in evolutionary robotics to include developmental factors for the first time. The robots were able to ‘reproduce’, yielding 10 generations of offspring.

Read more

Apr 6, 2017

Towards an Artificial Brain

Posted by in categories: biological, ethics, information science, neuroscience, robotics/AI

The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.

http://www.weforum.org/

Continue reading “Towards an Artificial Brain” »

Apr 6, 2017

Electronic synapses that can learn: towards an artificial brain?

Posted by in categories: biological, particle physics, robotics/AI

© Sören Boyn / CNRS/Thales physics joint research unit.

Artist’s impression of the electronic synapse: the particles represent electrons circulating through oxide, by analogy with neurotransmitters in biological synapses. The flow of electrons depends on the oxide’s ferroelectric domain structure, which is controlled by electric voltage pulses.
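
As a rough caricature of how such a device could act as a learning synapse (a toy model of ours, not the CNRS/Thales device physics): an internal state, standing in for the fraction of switched ferroelectric domains, moves up or down under sufficiently large voltage pulses, and the conductance follows it.

```python
class ToySynapse:
    """Caricature of a memristive synapse: large voltage pulses shift an
    internal state in [0, 1], and conductance tracks that state."""

    def __init__(self, g_min=1.0, g_max=10.0, step=0.2):
        self.state = 0.0  # stand-in for the fraction of switched domains
        self.g_min, self.g_max, self.step = g_min, g_max, step

    def pulse(self, volts, threshold=1.0):
        """Pulses above +threshold grow the switched fraction; pulses
        below -threshold shrink it; small pulses only read the device."""
        if volts >= threshold:
            self.state = min(1.0, self.state + self.step)
        elif volts <= -threshold:
            self.state = max(0.0, self.state - self.step)
        return self.conductance

    @property
    def conductance(self):
        return self.g_min + self.state * (self.g_max - self.g_min)

syn = ToySynapse()
for _ in range(3):
    syn.pulse(2.0)  # three potentiating pulses raise the conductance
```

The learning rule this enables is local: each synapse adjusts itself from the pulses it sees, with no global supervisor, which is the property that makes such devices interesting for brain-like hardware.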

Download the press release: PR Synapses

Continue reading “Electronic synapses that can learn: towards an artificial brain?” »

Apr 6, 2017

If an AI Doesn’t Take Your Job, It Will Design Your Office

Posted by in categories: food, information science, physics, robotics/AI, space

Arranging employees in an office is like creating a 13-dimensional matrix that triangulates human wants, corporate needs, and the cold hard laws of physics: Joe needs to be near Jane but Jane needs natural light, and Jim is sensitive to smells and can’t be near the kitchen but also needs to work with the product ideation and customer happiness team—oh, and Jane hates fans. Enter Autodesk’s Project Discover. Not only does the software apply the principles of generative design to a workspace, using algorithms to determine all possible paths to your #officegoals, but it was also the architect (so to speak) behind the firm’s newly opened space in Toronto.

That project, overseen by design firm The Living, first surveyed the 300 employees who would be moving in. What departments would you like to sit near? Are you a head-down worker or an interactive one? Project Discover generated 10,000 designs, exploring different combinations of high- and low-traffic areas, communal and private zones, and natural-light levels. Then it matched as many of the 300 workers as possible with their specific preferences, all while taking into account the constraints of the space itself. “Typically this kind of fine-resolution evaluation doesn’t make it into the design of an office space,” says Living founder David Benjamin. OK, humans—you got what you wanted. Now don’t screw it up.
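
The matching step can be caricatured as a small search over seat assignments. The sketch below is purely illustrative: the worker names, seat traits, preference weights, and scoring function are all invented, and real generative design explores far richer geometry than a flat seat list.

```python
import random

def optimize_seating(workers, seats, score, iterations=2000, seed=1):
    """Greedy swap search: start from a random assignment and keep any
    pairwise seat swap that raises the total preference score."""
    rng = random.Random(seed)
    assignment = list(seats)
    rng.shuffle(assignment)
    total = sum(score(w, s) for w, s in zip(workers, assignment))
    for _ in range(iterations):
        i, j = rng.randrange(len(workers)), rng.randrange(len(workers))
        if i == j:
            continue
        delta = (score(workers[i], assignment[j]) + score(workers[j], assignment[i])
                 - score(workers[i], assignment[i]) - score(workers[j], assignment[j]))
        if delta > 0:
            assignment[i], assignment[j] = assignment[j], assignment[i]
            total += delta
    return assignment, total

# Hypothetical preferences: each seat is (has_window, distance_from_kitchen),
# and each worker weights those two traits differently.
workers = ["jane", "joe", "jim", "pat"]
seats = [(1, 3), (0, 1), (1, 0), (0, 2)]
prefs = {"jane": (2.0, 0.0), "joe": (0.0, 1.0), "jim": (0.0, 2.0), "pat": (1.0, 0.5)}

def score(worker, seat):
    w_window, w_dist = prefs[worker]
    return w_window * seat[0] + w_dist * seat[1]

layout, total = optimize_seating(workers, seats, score)
```

Generating many candidate layouts and scoring them against surveyed preferences, as Project Discover did at much larger scale, is the same loop with a far more elaborate score function.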

Read more

Apr 5, 2017

First In-Depth Look at Google’s TPU Architecture

Posted by in categories: mobile phones, robotics/AI

Four years ago, Google started to see the real potential for deploying neural networks to support a large number of new services. During that time it was also clear that, given the existing hardware, if people did voice searches for three minutes per day or dictated to their phone for short periods, Google would have to double the number of datacenters just to run machine learning models.

The need for a new architectural approach was clear, Google distinguished hardware engineer Norman Jouppi tells The Next Platform, but it required some radical thinking. As it turns out, that’s exactly what he is known for. One of the chief architects of the MIPS processor, Jouppi has pioneered new technologies in memory systems and is one of the most recognized names in microprocessor design. When he joined Google over three years ago, there were several options on the table for an inference chip to churn out services from models trained on Google’s CPU and GPU hybrid machines for deep learning, but ultimately Jouppi says he never expected to return to what is essentially a CISC device.

We are, of course, talking about Google’s Tensor Processing Unit (TPU), which had not been described in much detail or benchmarked thoroughly until this week. Today, Google released an exhaustive comparison of the TPU’s performance and efficiency against Haswell CPUs and Nvidia Tesla K80 GPUs. We will cover that in more detail in a separate article so we can devote time here to an in-depth exploration of just what’s inside the Google TPU to give it such a leg up on other hardware for deep learning inference. You can take a look at the full paper, which was just released, and read on for what we were able to glean from Jouppi that the paper doesn’t reveal.
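
One of the TPU's central bets is doing inference in narrow integer arithmetic with wide accumulators. Here is a generic sketch of that idea, 8-bit quantized matrix multiplication; it illustrates the technique in miniature and is not Google's implementation.

```python
def quantize(mat, bits=8):
    """Map floats onto signed integers of the given width, returning the
    integer matrix and the scale needed to recover approximate floats."""
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    scale = max(abs(v) for row in mat for v in row) / qmax or 1.0
    return [[round(v / scale) for v in row] for row in mat], scale

def int_matmul(a, b):
    """Plain integer matrix multiply: the multiply-accumulate step a
    systolic array performs, with wide (e.g. 32-bit) accumulators."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[0.6, -1.0], [0.25, 0.75]]
b = [[1.0, 0.4], [-0.6, 0.25]]
qa, sa = quantize(a)
qb, sb = quantize(b)
# Rescale the integer result back to approximate floats:
approx = [[v * sa * sb for v in row] for row in int_matmul(qa, qb)]
```

Each 8-bit multiply needs far less silicon and energy than a floating-point one, which is how a chip can pack thousands of them into a single matrix unit; the cost is a small, usually tolerable quantization error in the result.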

Continue reading “First In-Depth Look at Google’s TPU Architecture” »

Apr 5, 2017

Self-driving shuttle in London

Posted by in categories: robotics/AI, transportation

London is testing out self-driving shuttles.

Read more