Comments on: Why AI could fail? https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail

By: LAG https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-49211 Sun, 14 Mar 2010 14:37:00 +0000

I’m confused. How can presumably ‘true’ intelligence (artificial or organic) be claimed for any system that lacks an appreciation of right and wrong?

By that, I don’t mean the system must accept any particular set of moral principles, but some set of principles is required in an intelligent being who will be expected to make moral decisions.

And those sorts of decisions cannot be put off-limits; otherwise we’re still talking about a machine, complicated but still subject to external direction and hence unintelligent.

By: Alexei Turchin https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-49173 Sun, 14 Mar 2010 11:03:32 +0000

In fact, I do not yet know what human conscience is. Maybe we can get AI without conscience, or with pseudo-conscience: an AI that will claim it has conscience, but in fact will not.

By: LAG https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-49077 Sun, 14 Mar 2010 03:00:47 +0000

Alexei, that’s an interesting notion, that AI can be achieved by doing nothing more than replicating the physical structure of the brain. So I guess that your reductionist argument is that human conscience is nothing more than a manifestation of architecture? I thought that line of reasoning had been put to rest long ago.

By: Alexei Turchin https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-48724 Fri, 12 Mar 2010 18:53:53 +0000

The main objection to all these arguments is that we already know a fairly straightforward (at least from a theoretical point of view) way to create human-level intelligence: scan a human brain and simulate it on a computer. We also know how to strengthen the joint intelligence of groups of people by uniting them in research institutes and the like. In addition, we know that electronics operates roughly 10 million times faster than the human brain, because signals in axons propagate very slowly, while signals in electrical wires travel at the speed of light. Thus, by scanning the brains of a few smart people and combining them into a productive network, we could run them at a rate millions of times higher than normal human speed and get, in 30 seconds, a result equivalent to a year of the group’s work.
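A quick back-of-the-envelope check of that arithmetic may help; the sketch below takes the 10-million-times hardware figure and the 30-seconds-per-year claim from the comment above as given assumptions, not established facts.

```python
# Sanity check of the speedup arithmetic in the comment above.
# Assumption (the comment's own figure, not an established fact):
# electronics runs ~10 million times faster than axon signaling.

SECONDS_PER_YEAR = 365 * 24 * 3600   # ~31.5 million seconds
RAW_HARDWARE_SPEEDUP = 10_000_000    # claimed raw speed advantage

# Wall-clock time for an emulated group to do one subjective year
# of work at the full claimed speedup:
print(SECONDS_PER_YEAR / RAW_HARDWARE_SPEEDUP)  # ~3.2 seconds

# The comment's "30 seconds per year of work" instead implies an
# effective speedup of:
print(SECONDS_PER_YEAR / 30)                    # ~1.05 million x
```

So the “30 seconds” figure corresponds to an effective speedup of about one million, a fraction of the claimed raw hardware advantage but still consistent with the comment’s “millions of times” phrasing.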

By: LAG https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-48723 Fri, 12 Mar 2010 18:44:12 +0000

Your initial statement, “I think most of these points are wrong and AI finaly [sic] will be created,” is a fair assertion, but it would be more convincing if you actually provided a few counter-arguments to the subsequent points. Otherwise, it’s simply a weak appeal to some vague authority, which is non-scientific.

Personally, I think the key will be found in an understanding of the mechanism of emergence of intelligence from complex dynamic computing systems. As long as that remains elusive in biological systems (and it is), it will prove elusive in other computing systems.

By: Alexei Turchin https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-48571 Thu, 11 Mar 2010 19:28:21 +0000

Yes, you are completely right! Number 22 is quite possible and should be added to the political reasons. It could be less severe than outright destruction: civilization may simply degrade, because of war or a resource crisis, to a level where no supercomputers exist.

By: John Hunt https://russian.lifeboat.com/blog/2010/03/why-ai-could-fail#comment-48548 Thu, 11 Mar 2010 17:00:14 +0000

Hi Alexei, I don’t think that you covered this, but let me add one more:
22) Intelligent civilizations consistently destroy themselves by some other technological means before AI is achieved.

I agree that none of the 21 items you mentioned is likely to turn out to be true, except maybe the political reasons. What do you think about the likelihood of my suggestion #22?
