New machine learning training approach could help under-resourced academic labs catch up with big tech.
TOKYO (Reuters) — In August, a robot vaguely resembling a kangaroo will begin stacking sandwiches, drinks and ready meals on shelves at a Japanese convenience store in a test its maker, Telexistence, hopes will help trigger a wave of retail automation.
Following that trial, store operator FamilyMart says it plans to use robot workers at 20 stores around Tokyo by 2022. At first, people will operate them remotely — until the machines’ artificial intelligence (AI) can learn to mimic human movements. Rival convenience store chain Lawson is deploying its first robot in September, according to Telexistence.
“It advances the scope and scale of human existence,” the robot maker’s chief executive, Jin Tomioka, said as he explained how its technology lets people sense and experience places other than where they are.
Dr. Ben Goertzel, CEO and founder of the SingularityNET Foundation, is particularly visible and vocal about Artificial Intelligence, AGI, and where research and industry stand with regard to AGI. Dr. Goertzel, who spoke at the (virtual) OpenCogCon event this week, is one of the world’s foremost experts in Artificial General Intelligence, with decades of experience applying AI to practical problems in areas ranging from natural language processing and data mining to robotics, video gaming, national security, and bioinformatics.
Are we at a turning point in AGI?
Dr. Goertzel believes that we are now at a turning point in the history of AI. Over the next few years, he believes, the balance of activity in AI research will shift from highly specialized narrow AIs toward AGI. Deep neural nets have achieved amazing things, but that paradigm is going to run out of steam fairly soon; rather than causing another “AI winter” or a shift in focus to some other kind of narrow AI, he thinks it will trigger the AGI revolution.
Can AI give human surgeons superpowers that reduce medical errors?
In February of last year, the San Francisco–based research lab OpenAI announced that its AI system could now write convincing passages of English. Feed the beginning of a sentence or paragraph into GPT-2, as it was called, and it could continue the thought for as long as an essay with almost human-like coherence.
Now, the lab is exploring what would happen if the same algorithm were instead fed part of an image. The results, which were given an honorable mention for best paper at this week’s International Conference on Machine Learning, open up a new avenue for image generation, ripe with opportunity and consequences.
Android apps targeted by this new trojan include banking, dating, social media, and instant messaging apps.
How do you beat Tesla, Google, Uber, and the entire multi-trillion-dollar automotive industry, with massive brands like Toyota, General Motors, and Volkswagen, to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.
It’s called Deep Teaching.
Perhaps not surprisingly, it works by taking human effort out of the equation.