БЛОГ

Archive for the ‘robotics/AI’ category: Page 1801

Feb 10, 2015

The robot trade is booming in China

Posted by in categories: business, robotics/AI

Georgina Prodhan, Reuters — Business Insider
China will have more robots operating in its production plants by 2017 than any other country as it cranks up automation of its car and electronics factories, the International Federation of Robotics (IFR) said on Thursday.

Already the biggest market in the $9.5 billion (£6 billion) global robot trade — or $29 billion including associated software, peripherals and systems engineering — China lags far behind its more industrialized peers in terms of robot density.

China has just 30 robots per 10,000 workers employed in manufacturing industries, compared with 437 in South Korea, 323 in Japan, 282 in Germany and 152 in the United States.

But a race by carmakers to build plants in China, along with wage inflation that has eroded the competitiveness of Chinese labor, will push the operational stock of industrial robots to more than double, to 428,000, by 2017, the IFR estimates.

Read more
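The IFR's density metric is a simple ratio, and a small sketch shows how it works. The workforce and stock figures in the example call are illustrative only, not IFR data:

```python
def robot_density(robot_stock, manufacturing_workers):
    """Industrial robots per 10,000 manufacturing workers (the IFR's metric)."""
    return robot_stock / manufacturing_workers * 10_000

# Illustrative only: a country with 3,000 robots and 1,000,000 factory
# workers has the density the IFR reports for China (30 per 10,000).
china_like = robot_density(3_000, 1_000_000)
```

Note that doubling the robot stock while the workforce holds steady doubles the density, which is why the projected jump to 428,000 units would narrow, but not close, the gap with South Korea or Japan.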

Feb 10, 2015

A better ‘Siri’

Posted by in category: robotics/AI

Kurzweil AI
At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, MIT computer scientists will present smart algorithms that function as “a better Siri,” optimizing planning for lower risk, such as scheduling flights or bus routes.

They offer this example:

Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours.

Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.
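The article doesn't describe the MIT algorithms themselves, but the kind of question they answer — "what is the probability this plan meets its constraints?" — can be sketched with a toy Monte Carlo estimate. The leg durations and spreads below are made-up inputs, not the researchers' method:

```python
import random

def success_probability(leg_means, leg_spreads, deadline_hours, trials=50_000):
    """Estimate the chance that a trip made of independent, roughly
    Gaussian legs (drive, lunch, drive) finishes within the deadline."""
    hits = 0
    for _ in range(trials):
        total = sum(max(0.0, random.gauss(mean, spread))
                    for mean, spread in zip(leg_means, leg_spreads))
        hits += total <= deadline_hours
    return hits / trials

# Two candidate plans for the same trip: a risk-aware planner can report
# each one's success probability and suggest the more reliable option.
plan_a = success_probability([1.8, 0.7, 1.6], [0.4, 0.1, 0.4], 4.0)
plan_b = success_probability([1.8, 0.5, 1.4], [0.3, 0.1, 0.3], 4.0)
```

A real planner searches over plan variations (a later lunch, a different restaurant) rather than just scoring two fixed options, but the comparison step looks like this.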
Read more

Feb 9, 2015

Benign AI

Posted by in categories: existential risks, robotics/AI, transhumanism

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with development of autonomous drones loaded with AI was in the early 1980s while working in the missile industry. Later in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially of emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and some artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Continue reading “Benign AI” »

Feb 6, 2015

Bill Gates joins Elon Musk and Stephen Hawking in saying artificial intelligence is scary

Posted by in category: robotics/AI

Quartz

Bill Gates hosted a Reddit Ask Me Anything session yesterday, and in between pushing his philanthropic agenda and divulging his Super Bowl pick (Seahawks, duh), the Microsoft co-founder revealed that he has joined a growing list of tech giants with reservations about artificial intelligence.

In response to Reddit user beastcoin’s question, “How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity [sic] can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??” Gates wrote this (he didn’t answer the second part of the question):

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Read more

Feb 3, 2015

As the Powerful Argue AI Ethics, Might Superintelligence Arise on the Fringes?

Posted by in categories: robotics/AI, software, supercomputing

By — SingularityHub


Last year, Elon Musk and Stephen Hawking admitted they were concerned about artificial intelligence. While undeniably brilliant, neither are AI researchers. Then this week Bill Gates leapt into the fray, also voicing concern—even as a chief of research at Microsoft said advanced AI doesn’t worry him. It’s a hot topic. And hotly debated. Why?

In part, it’s because tech firms are pouring big resources into research. Google, Facebook, Microsoft, and others are making rapid advances in machine learning—a technique where programs learn by interacting with large sets of data.
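As a minimal illustration of "learning by interacting with data" — a toy example, not any of these companies' systems — here is gradient descent reduced to a single parameter:

```python
def fit_slope(xs, ys, learning_rate=0.01, steps=2000):
    """Learn a weight w so that w * x approximates y, by gradient
    descent on the mean squared error -- the core loop behind much
    of machine learning, stripped down to one parameter."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

# The program is never told "the rule is y = 2x"; it recovers
# that rule from the examples alone.
w = fit_slope([1, 2, 3, 4], [2, 4, 6, 8])
```

The systems the article mentions do this with millions of parameters and far larger datasets, but the principle — adjust weights to reduce error on examples — is the same.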

Read more

Jan 22, 2015

Future of Work: Why Teaching Everyone to Code Is Delusional

Posted by in categories: automation, disruptive technology, education, futurism, human trajectories, robotics/AI

By — Singularity Hub


Since 2005, I’ve been grappling with the issue of what to teach young people. I’ve written curricula for junior high students in the US, for a UNICEF program reaching students in a dozen countries, and now, for East African young people as they become financially literate and business savvy.

Through the years, I’ve watched program directors demand young people focus on foolish content because it lined up with something trending in the public discourse—units on climate change; modules about using social media to share stories; lessons on agricultural policy; and so forth.

Continue reading “Future of Work: Why Teaching Everyone to Code Is Delusional” »

Jan 19, 2015

Bitcoins and Google Glass: Are They Heading For the Same Direction?

Posted by in categories: bitcoin, business, computing, cryptocurrencies, economics, engineering, entertainment, futurism, mobile phones, physics, robotics/AI, science

From Innovation to Oblivion…

The ups and downs of Bitcoin as an internet currency invite comparison with the eventual demise of Google Glass, which failed to find a purpose among consumers. The parallel does not fully hold for Bitcoin, which apparently has a more supportive and enthusiastic following, yet the paths these two have taken, and will take, may be more similar than we like to admit.

For one, Bitcoin’s staggering price decline in recent days has left some people wondering what road it will take next. Is it only taking a detour, or is it bound for a dead end?

In the case of Google Glass, it received much attention at its inception a few years ago. Time magazine even named it one of the best innovations of 2012. However, despite the ingenuity behind this supposedly groundbreaking invention, Google Glass lacked a tangible use case, and its purpose remained incoherent to consumers.

Continue reading “Bitcoins and Google Glass: Are They Heading For the Same Direction?” »

Jan 18, 2015

How Aldebaran Robotics Built Its Friendly Humanoid Robot, Pepper

Posted by in category: robotics/AI

By Erico Guizzo — IEEE Spectrum

The robot seems determined to put a bigger smile on the man’s face. “Are you smiling from the bottom of your heart?” it asks. The man chuckles. “That’s what I’m talking about,” the robot quips in a high-pitched voice. Then, just for good measure, it bows its plastic head and apologizes for being “too bossy to our CEO.”

The CEO is Masayoshi Son, founder and chairman of telecom giant SoftBank and Japan’s richest person. As such, he has overseen the development of hundreds of new products as part of a vast conglomerate of mobile-phone carriers, Internet ventures, and media companies. But last June, at a press conference outside Tokyo, Son climbed onstage to unveil a pet project: a humanoid robot named Pepper. Designed to be a companion in the home, it is the world’s first full-scale humanoid to be offered to consumers. In February, SoftBank plans to start selling it in Japan for 198,000 yen (less than US $2,000), plus a monthly subscription fee. Taiwanese electronics manufacturer Foxconn, known for building iPhones and iPads for Apple, will produce the robots.

Read more

Jan 16, 2015

These Thought-Controlled Robotic Arms Are Beating Paralysis and Amputation

Posted by in categories: biotech/medical, robotics/AI

By — Singularity Hub


In 2012, University of Pittsburgh researchers released a video of Jan Scheuermann feeding herself a bite of chocolate. This, of course, wouldn’t be noteworthy but for one thing: Scheuermann is paralyzed from the neck down. She fed herself that chocolate using a brain implant and thought-controlled robotic arm—and got a taste of freedom once unthinkable.

Scheuermann’s spinocerebellar degeneration left her unable to move her limbs over a decade ago. She leapt at the chance to take part in the University of Pittsburgh study investigating brain-computer interfaces. The study’s researchers are developing a system that reads and decodes brain activity, translating it into physical action in a robotic arm and hand.
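The article doesn't detail the Pittsburgh decoder, but a classic technique in this field is population-vector decoding, sketched here as a toy. The firing rates, baselines, and preferred directions below are invented for illustration:

```python
def decode_velocity(rates, baselines, preferred_dirs, gain=1.0):
    """Toy population-vector decoder: each recorded unit 'votes' for its
    preferred movement direction, weighted by how far its firing rate
    rises above baseline; the votes sum to a 2-D hand velocity."""
    vx = vy = 0.0
    for rate, base, (px, py) in zip(rates, baselines, preferred_dirs):
        vx += (rate - base) * px
        vy += (rate - base) * py
    return gain * vx, gain * vy

# One unit tuned to +x firing above its baseline drives the arm rightward,
# while a +y-tuned unit sitting at baseline contributes nothing.
velocity = decode_velocity([12.0, 5.0], [5.0, 5.0], [(1, 0), (0, 1)])
```

Real systems record from many units at once, calibrate the tuning parameters per session, and feed the decoded velocity through smoothing and safety limits before it moves the arm.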

Read more

Jan 15, 2015

Elon Musk Donates $10 Million to Protect the World From AI

Posted by in categories: human trajectories, robotics/AI

By Jacob Kastrenakes — The Verge

Elon Musk is worried that AI will destroy humanity, and so he’s decided to donate $10 million toward research into how we can keep artificial intelligence safe. Musk, the CEO of Tesla and SpaceX, has previously expressed concern that something like what happens in The Terminator could happen in real life. He’s also said that AI is “potentially more dangerous than nukes.” The purpose of this donation is to both prevent that from happening and to ensure that AI is used for good and to benefit humanity.

Read more