
Dec 30, 2009

Ark-starship – too early or too late?

Posted by in categories: existential risks, lifeboat, space

It is interesting to note that the technical possibility of sending an interstellar Ark appeared in the 1960s, based on Stanislaw Ulam's concept of the nuclear-pulse "blast-ship," which uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion. http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) In 1968 Dyson published the article "Interstellar Transport," which gives upper and lower bounds for such projects. In the conservative estimate (i.e., assuming no new technical achievements) it would cost one U.S. GDP (600 billion dollars at the time of writing) to launch a spaceship with a mass of 40 million tonnes (of which 5 million tonnes is payload), and its flight to Alpha Centauri would take 1,200 years. In a more advanced version the price is 0.1 U.S. GDP, the flight time 120 years, and the starting mass 150,000 tonnes (of which 50,000 tonnes is payload). In principle, using a two-stage scheme and more advanced thermonuclear bombs and reflectors, the flight time to the nearest star could be reduced to 40 years.
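A quick sanity check of these flight times (a minimal sketch; the ~4.37-light-year distance to Alpha Centauri and the assumption of a constant cruise speed are mine, not taken from Dyson's article):

```python
# Rough cross-check of the Dyson "Interstellar Transport" figures quoted above.
# Assumptions (mine, not the article's): constant cruise speed and a distance
# of ~4.37 light years to Alpha Centauri.

DIST_LY = 4.37               # distance to Alpha Centauri, light years
C_KM_S = 299_792.458         # speed of light, km/s

def cruise_speed(flight_years):
    """Average speed for the trip, as a fraction of c and in km/s."""
    frac_c = DIST_LY / flight_years
    return frac_c, frac_c * C_KM_S

for years in (1200, 120, 40):
    frac_c, km_s = cruise_speed(years)
    print(f"{years:>5}-year flight: {frac_c:.2%} of c, ~{km_s:,.0f} km/s")
```

Even the 40-year variant implies a cruise speed around a tenth of the speed of light, which makes clear why fuel mass dominates the design.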
Of course, the crew of the spaceship is doomed to extinction if it does not find a habitable, human-suitable planet in the nearest star system. Another option is that it colonizes an uninhabited planet. In 1980 R. Freitas proposed lunar exploration using a self-replicating factory with an initial mass of 100 tons, though controlling it would require artificial intelligence ("Advanced Automation for Space Missions," http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but such a factory could be managed by people. The main question is how much technology and equipment would have to be delivered to a moonlike uninhabited planet so that people could build a completely self-sustaining and growing civilization on it. It amounts to creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tons of equipment per person, mainly in the form of houses and roads; the mass of machinery is much smaller. This gives an upper bound for a self-replicating human colony of about 1 billion tons. The lower estimate is about 100 people with roughly 100 tons each (mainly food and shelter), i.e., 10,000 tons. A realistic assessment should lie somewhere in between, probably in the tens of millions of tons. All this assumes that no miraculous nanotechnology has yet been developed.
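The upper and lower bounds above are simple products of crew size and per-person tonnage. A hedged sketch (the specific figures of 5 million people and 200 tons each are illustrative choices consistent with "a few million" and "hundreds of tons," not values from Freitas):

```python
# The colony-mass bounds from the paragraph above as explicit arithmetic.
# Assumptions: "a few million people" is taken as 5 million and "hundreds of
# tons per person" as 200 tons; both are illustrative, not from the source.

def colony_mass(people, tonnes_per_person):
    """Total mass in tonnes needed to seed a self-replicating colony."""
    return people * tonnes_per_person

upper = colony_mass(5_000_000, 200)  # Israel-sized population with equipment
lower = colony_mass(100, 100)        # minimal crew, mostly food and shelter

print(f"upper bound: {upper:,} tonnes")   # about 1 billion tonnes
print(f"lower bound: {lower:,} tonnes")   # 10,000 tonnes
```

The geometric mean of the two bounds falls in the millions of tons, which is why the text's "realistic assessment" of tens of millions of tons sits between them.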
The advantage of a spaceship-Ark is that it is a non-specific response to a host of different threats with indeterminate probabilities. If you face one specific threat (an asteroid, an epidemic), the money is better spent on removing it.
Thus, if such a decision had been taken in the 1960s, such a ship could already be on its way.
But if we set aside the technical side of the issue, there are several trade-offs in the strategy for creating such a spaceship.
1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success, and the higher its cost. But the later it is initiated, the greater the chance that it will not be completed before a global catastrophe.
2. The later the project starts, the greater the chance that it will carry the "diseases" of its mother civilization with it (e.g., the ability to create dangerous viruses).
3. The project to create such a spaceship could itself lead to the development of technologies that threaten civilization. A blast-ship uses hundreds of thousands of hydrogen bombs as fuel. Therefore it could be used as a weapon, or other parties might fear it and respond. In addition, the spaceship could turn around and strike the Earth like a star-hammer, or there may be fear that it will. During construction, man-made accidents could occur with enormous consequences, up to the detonation of all the bombs on board. If the project is implemented by one country in time of war, other countries could try to shoot the spaceship down when it launches.
4. The spaceship is a means of protection against a Doomsday machine, a strategic response in Herman Kahn's style. Therefore, the creators of such a Doomsday machine could perceive the Ark as a threat to their power.
5. Should we implement one more expensive project, or several cheaper ones?
6. Would it be sufficient to limit colonization to the Moon, Mars, Jupiter's moons, or objects in the Kuiper belt? At the very least, such colonies could serve as a fallback position on which the technology of autonomous settlements could be tested.
7. The sooner the spaceship starts, the less we know about exoplanets. How far and how fast should the Ark fly in order to be in relative safety?
8. Could the spaceship hide itself so that Earth would not know where it is, and should it? Should the spaceship communicate with Earth at all, or does that create a risk of attack by a hostile AI?
9. Would the creation of such projects exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify the creation of such a spaceship, or at least reduce its cost, but at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.
10. Will the Earth in the future compete with its independent colonies, and will this lead to star wars?
11. If the ship departs slowly enough, could it be destroyed from Earth by a self-propelled missile or a radiation beam?
12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to be killed, since the chance of the mission succeeding is no more than 10 percent. Those remaining on Earth may begin to behave more riskily, on the logic: "Well, since we have protection against global risks, we can now start risky experiments." As a result, the project lowers the total probability of survival.
13. What are the chances that the Ark's computer network will download a virus if it communicates with Earth? And if it does not communicate, that reduces the mission's chances of success. Competition for nearby stars is possible, and faster ships would win it. In the end there are not many stars within a distance of about 5 light years (Alpha Centauri, Barnard's Star), and competition may begin for them. The existence of dark lone planets or large asteroids without host stars is also possible; their density in the surrounding space should be 10 times greater than the density of stars, but finding them is extremely difficult. It would also be a problem if the nearest stars had no planets or moons. Some stars, including Barnard's Star, are prone to extreme stellar flares, which could kill the expedition.
14. The spaceship will not protect people from a hostile AI that finds a way to catch up. Also, in case of war, starships would be prestigious and easily vulnerable targets: an unmanned rocket will always be faster than a crewed ship. If Arks are sent to several nearby stars, this does not ensure their secrecy, since the destinations will be known in advance. A phase transition of the vacuum, an explosion of the Sun or Jupiter, or another extreme event could also destroy the spaceship. See e.g. A. Bolonkin, "Artificial Explosion of Sun. AB-Criterion for Solar Detonation," http://www.scribd.com/doc/24541542/Artificial-Explosion-of-S…Detonation
15. Moreover, the spaceship is too expensive a protection against many other risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from gray goo, an asteroid collision, a supervolcano, or irreversible global warming. The Ark will carry with it the problems of genetic degradation, the propensity for violence and self-destruction, and the problems associated with limited human outlook and cognitive biases. The spaceship would only aggravate the problems of resource depletion, wars, and the arms race. Thus the set of global risks against which the spaceship is the best protection is quite narrow.
16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks, and new ways of creating spaceships using nanotech, become real.
Of course it is easy to envision a nano- and AI-based Ark: it could be as small as a grain of sand, carry only one human egg or even just DNA information, and could self-replicate. The main problem is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.

Oct 8, 2009

Fermi Paradox and global catastrophes

Posted by in category: existential risks

The main ways of solving the Fermi Paradox are:
1) They are already here (at least in the form of their signals)
2) They do not disseminate in the universe, do not leave traces, and do not send signals. That is, they do not start a shock wave of intelligence.
3) The civilizations are extremely rare.
An additional way of thinking is 4): we are a unique civilization because of observation selection.
All of these have a sad outlook for global risk:
In the first case, we are under threat of conflict with superior aliens.
1a) If they are already here, we may do something that encourages them to destroy us or restrict us: for example, turn off the simulation, or start a program of berserker probes. These probes could be nanobots. In fact it could be something like "space gray goo" with low intelligence but very wide spreading; it could even be in my room. Its only goal could be to destroy other nanobots (as our own Nanoshield would do), and so we will not see it until we create our own nanobots.
1b) If they are opening up our star system right now and, moreover, are focused on the total colonization of all systems, then we will fight them and are likely to lose. Not probable.
1c) A large portion of civilizations may be infected with a SETI virus and distribute signals specially designed to infect naive civilizations, that is, to encourage them to create a computer with an AI aimed at further replication through SETI channels. This is what I wrote about in the article "Is SETI dangerous?" http://www.proza.ru/texts/2008/04/12/55.html
1d) By means of a METI signal we attract the attention of a dangerous civilization, and it sends a beam of death to the solar system (possibly what we observe as gamma-ray bursts). This scenario seems unlikely, since in the time it takes them to receive the signal and react, we would have time to fly away from the solar system, if they are far away. And if they are close, it is not clear why they are not here already. However, this risk has been intensely discussed, for example by D. Brin.
2. They do not disseminate in space. This means that either:
2a) Civilizations are very likely to destroy themselves at a very early stage, before they can start a wave of replicator robots, and we are not an exception. This is reinforced by the Doomsday Argument: the fact that I find myself in a young civilization suggests that young civilizations are much more common than old ones. However, based on the expected pace of development of nanotechnology and artificial intelligence, we could start a wave of replicators within 10–20 years, and even if we then die, this wave would continue to spread throughout the universe. Given the uneven development of civilizations, it is difficult to assume that none of them manages to launch a wave of replicators before its death. This is possible only if: a) we do not see an inevitable and universal threat looming directly over us in the near future; b) we significantly underestimate the difficulty of creating artificial intelligence and nanoreplicators; or c) the energy of the inevitable destruction is so great that it manages to destroy all the replicators launched by a civilization, that is, it is on the order of a supernova explosion.
2b) Every civilization sharply limits itself, and this limitation must be very hard and long-lasting, since it is simple enough to launch even one replicator probe. This restriction could be based either on powerful totalitarianism or on extreme depletion of resources. Again, in this case our prospects are quite unpleasant. But this solution is not very plausible.
3) If civilizations are rare, it means that the universe is a much less friendly place to live, and we are on an island of stability, which is likely an exception to the rule. This may mean that we underestimate the future stability of the processes important to us (the solar luminosity, the Earth's crust) and, most importantly, the robustness of these processes to small influences, that is, their fragility. We could inadvertently break their levels of resistance by carrying out geo-engineering activities, complex physics experiments, and the mastering of space. I say more about this in the article "Why the anthropic principle stopped defending us. Observation selection and fragility of our environment": http://www.scribd.com/doc/8729933/Why-antropic-principle-sto…vironment– See also the works of M. Cirkovic on the same subject.
However, this fragility is not inevitable and depends on which factors were critical in the Great Filter. In addition, we would not necessarily put pressure on these fragile points even if they exist.
4) Observation selection makes us a unique civilization.
4a. We are the first civilization, because any civilization which is first captures the whole galaxy. Likewise, earthly life is the first life on Earth, because the first life would occupy all the pools of nutrient broth in which other life could appear. In any case, sooner or later we will face another first civilization.
4b. The vast majority of civilizations are destroyed in the process of colonizing the galaxy, and so we can find ourselves only in a civilization which by chance has not been destroyed. Here the obvious risk is that those who made this error would try to correct it.
4c. We wonder about the absence of contact precisely because we are not in contact. That is, we are in a unique position which does not allow any conclusions about the nature of the universe. This clearly contradicts the Copernican principle.
The worst variant for us here is 2a, imminent self-destruction, which has independent confirmation through the Doomsday Argument but is undermined by the fact that we do not see alien von Neumann probes. I still believe that the most likely scenario is a Rare Earth.

Oct 1, 2009

Post-human Earth: How the planet will recover from us

Posted by in categories: existential risks, futurism, human trajectories, policy, sustainability

Paul J. Crutzen

Although this is the scenario we all hope (and work hard) to avoid, its consequences should be of interest to all who are interested in mitigating the risk of mass extinction:

“WHEN Nobel prize-winning atmospheric chemist Paul Crutzen coined the word Anthropocene around 10 years ago, he gave birth to a powerful idea: that human activity is now affecting the Earth so profoundly that we are entering a new geological epoch.

The Anthropocene has yet to be accepted as a geological time period, but if it is, it may turn out to be the shortest — and the last. It is not hard to imagine the epoch ending just a few hundred years after it started, in an orgy of global warming and overconsumption.

Continue reading “Post-human Earth: How the planet will recover from us” »

Sep 25, 2009

Asteroid attack: Putting Earth’s defences to the test

Posted by in categories: asteroid/comet impacts, defense, existential risks

Peter Garretson from the Lifeboat Advisory Board appears in the latest edition of New Scientist:

“IT LOOKS inconsequential enough, the faint little spot moving leisurely across the sky. The mountain-top telescope that just detected it is taking it very seriously, though. It is an asteroid, one never seen before. Rapid-survey telescopes discover thousands of asteroids every year, but there’s something very particular about this one. The telescope’s software decides to wake several human astronomers with a text message they hoped they would never receive. The asteroid is on a collision course with Earth. It is the size of a skyscraper and it’s big enough to raze a city to the ground. Oh, and it will be here in three days.

Far-fetched it might seem, but this scenario is all too plausible. Certainly it is realistic enough that the US air force recently brought together scientists, military officers and emergency-response officials for the first time to assess the nation’s ability to cope, should it come to pass.

Continue reading “Asteroid attack: Putting Earth's defences to the test” »

Sep 1, 2009

Keeping genes out of terrorists’ hands

Posted by in categories: biological, biotech/medical, chemistry, counterterrorism, existential risks, policy

Nature News reports of a growing concern over different standards for DNA screening and biosecurity:

“A standards war is brewing in the gene-synthesis industry. At stake is the way that the industry screens orders for hazardous toxins and genes, such as pieces of deadly viruses and bacteria. Two competing groups of companies are now proposing different sets of screening standards, and the results could be crucial for global biosecurity.

“If you have a company that persists with a lower standard, you can drag the industry down to a lower level,” says lawyer Stephen Maurer of the University of California, Berkeley, who is studying how the industry is developing responsible practices. “Now we have a standards war that is a race to the bottom.”

Continue reading “Keeping genes out of terrorists' hands” »

May 3, 2009

Swine Flu Update: are we entering an Age of Pandemics?

Posted by in categories: biological, biotech/medical, existential risks, futurism, geopolitics, nanotechnology, space, sustainability

May 2: Many U.S. emergency rooms and hospitals crammed with people… ”Walking well” flood hospitals… Clinics double their traffic in major cities … ER rooms turn away EMT cases. — CNN

Update May 4: Confirmed cases of H1N1 virus now at 985 in 20 countries (Mexico: 590, 25 deaths) — WHO. In U.S.: 245 confirmed U.S. cases in 35 states. — CDC.

“We might be entering an Age of Pandemics… a broad array of dangerous emerging 21st-century diseases, man-made or natural, brand-new or old, newly resistant to our current vaccines and antiviral drugs…. Martin Rees bet $1,000 that bioterror or bioerror would unleash a catastrophic event claiming one million lives in the next two decades…. Why? Less forest, more contact with animals… more meat eating (Africans last year consumed nearly 700 million wild animals… numbers of chickens raised for food in China have increased 1,000-fold over the past few decades)… farmers cut down jungle, creating deforested areas that once served as barriers to the zoonotic viruses…” — Larry Brilliant, Wall Street Journal

May 2, 2009

From financial crisis to global catastrophe

Posted by in categories: economics, existential risks


The financial crisis which manifested in 2008 (but started much earlier) has led to discussion in alarmist circles: is this crisis the beginning of the final sunset of mankind? In this article we will not consider the view that the crisis will suddenly disappear and everything will return to normal, as trivial and, in my opinion, false. The following perspectives on the transition of the crisis into a global catastrophe have emerged:
1) The crisis is the beginning of a long slump (E. Yudkowsky's term), which will gradually lead mankind to a new Middle Ages. This point of view is supported by proponents of the Peak Oil theory, who believe that the peak of production of liquid fuels has recently been passed, that from now on oil production will drop by a few percent each year along a bell curve, and that fossil fuel is a necessary resource for the existence of modern civilization, which will not be able to switch to alternative energy sources. They see the current financial crisis as a direct consequence of high oil prices, which curbed immoderate consumption. This point of view is reinforced by the "Peak Everything" theory, which holds that not only oil but also most of the other resources required by modern civilization will be exhausted in the next quarter of a century. (Note that the possibility of replacing some resources with others tends to draw the peaks of the various resources toward one moment in time.) Finally, there is the theory of "peak demand": in circumstances where more goods are produced than there is effective demand, production in general does not pay, which starts a deflationary spiral that could last indefinitely.
2) Another view is that the financial crisis will inevitably lead to a geopolitical crisis and then to nuclear war. This view can be reinforced by the analogy between the Great Depression and the present day: the Great Depression ended with the start of the Second World War. But this view regards nuclear war as the inevitable end of human existence, which is not necessarily true.
3) In the article "Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life" (Advances in Space Research V. 36 (2005), P. 220–225, http://dec1.sinp.msu.ru/~panov/ASR_Panov_Life.pdf), the Russian scientist A. D. Panov showed that crises in the history of humanity have become more frequent over the course of history. Each crisis is linked with the destruction of some old political system and with the creation of fundamental technological innovations at the exit from the crisis. The technological revolution around 1830 led to the industrial world (though the peak of that crisis was of course near 1815: Waterloo, the eruption of Tambora, Byron on Lake Geneva creating a new genre with Shelley and her Frankenstein). One such crisis happened in 1945 (dated 1950 in Panov's paper, as the date not of the beginning of the crisis but of the exit from it and the creation of a new reality), when fascism collapsed and computers, rockets, the atomic bomb, and the bipolar world arose. An important feature of these crises is that they follow a simple law: each successive interval between crises is 2.67 ± 0.15 times shorter than the preceding one. The last such crisis occurred in the vicinity of 1991 (1994 if one uses Panov's formula from the article), when the USSR broke up and the march of the Internet began. The schedule of crises lies on a hyperbola that reaches a singularity in the region of 2020 (Panov gave the estimate 2004 ± 15, but information about the 1991 crisis allows the estimate to be sharpened). If this trend continues to operate, the next crisis must come 17 years after 1991, in 2008, and another only 6.5 years later, around 2014, then the next around 2016, and so on. Naturally it is desirable to compare Panov's forecast with the current financial crisis.
The current crisis seems to be changing the world politically and technologically, so it fits Panov's theory, which predicted it with high accuracy long in advance (at least as of 2005; as far as I know, Panov has not compared this crisis with his theory). But if we accept Panov's theory, we should not expect a global catastrophe now, but only near 2020. So we have a long way to go, with many crises which will be painful but not final. (more…)
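Panov's accelerating sequence described above can be sketched numerically (a rough illustration; the starting interval of 46/2.67 years after 1991 and the compression ratio of 2.67 come from the figures in the text, while the decimal dates are my own arithmetic, not Panov's):

```python
# Panov's law of shrinking inter-crisis intervals, as described above.
# Assumptions: the 1945 -> 1991 interval (46 years), divided by 2.67, seeds
# the sequence, and each following interval is 2.67 times shorter again.

def crisis_years(start=1991.0, first_interval=46 / 2.67, ratio=2.67, n=5):
    """Project the next n crisis years after `start`."""
    years, year, interval = [], start, first_interval
    for _ in range(n):
        year += interval
        years.append(round(year, 1))
        interval /= ratio
    return years

print(crisis_years())  # -> [2008.2, 2014.7, 2017.1, 2018.0, 2018.3]

# The intervals form a geometric series, so the crises pile up at a finite
# accumulation point: start + first_interval * ratio / (ratio - 1).
singularity = 1991 + (46 / 2.67) * 2.67 / 1.67
print(f"accumulation point: ~{singularity:.1f}")
```

Under these assumptions the accumulation point lands near 2018.5, broadly consistent with the "region of 2020" mentioned above and well inside Panov's 2004 ± 15 estimate.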

Feb 24, 2009

I Don’t Want To Live in a Post-Apocalyptic World

Posted by in categories: asteroid/comet impacts, defense, existential risks, futurism, habitats, robotics/AI, space

Image from The Road film, based on Cormac McCarthy's book

How About You?
I’ve just finished reading Cormac McCarthy’s The Road at the recommendation of my cousin Marie-Eve. The setting is a post-apocalyptic world and the main protagonists — a father and son — basically spend all their time looking for food and shelter, and try to avoid being robbed or killed by other starving survivors.

It very much makes me not want to live in such a world. Everybody would probably agree. Yet few people actually do much to reduce the chances of such a scenario happening. In fact, it’s worse than that; few people even seriously entertain the possibility that such a scenario could happen.

People don’t think about such things because they are unpleasant and they don’t feel they can do anything about them, but if more people actually did think about them, we could do something. We might never be completely safe, but we could significantly improve our odds over the status quo.

Continue reading “I Don't Want To Live in a Post-Apocalyptic World” »

Feb 20, 2009

Bill Joy: What I’m worried about, what I’m excited about

Posted by in categories: education, existential risks


Technologist and futurist Bill Joy talks about several big worries for humanity — and several big hopes in the fields of health, education and future tech.

Feb 14, 2009

Russian Lifeboat Foundation NanoShield

Posted by in categories: cybercrime/malcode, existential risks, nanotechnology, policy

I have translated “Lifeboat Foundation NanoShield” into Russian (http://www.scribd.com/doc/12113758/Nano-Shield) and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change. All objects would consist of nano-cells with sufficient immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even in each living cell there would be a small nanobot controlling it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases: reproduction and destruction. However, the creators of ecophagy could use three phases. The first phase would be quiet distribution over the Earth's surface, underground, in the water, and in the air. In this phase the nanorobots would multiply slowly and, most importantly, would try to move as far away from each other as possible, so that their concentration everywhere on Earth would end up at around one unit per cubic meter (which makes them practically undetectable). Only after that would they start to proliferate intensely, simultaneously creating nanorobot soldiers which do not replicate but attack the defensive system. In doing so, they would first have to suppress the protection systems, the way AIDS does, or the way a modern computer virus switches off the antivirus. The creators of future ecophagy will understand this. Once the second phase of rapid growth begins everywhere on the Earth's surface, it becomes impossible to apply tools of destruction such as nuclear strikes or aimed beams, as this would mean the death of the planet in any case; and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a weak spot. The authors implicitly assume that any person can, with a certain probability, suddenly become a terrorist willing to destroy the world (and although the probability is very small, the large number of people living on Earth makes it meaningful). But because such a system will be managed by people, those people may also want to destroy the world. The Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of Nanoshield protection will make it 100% safe from hackers, but no known computer system is 100% safe: all major computer programs have been broken by hackers, including Windows and the iPod.
4) The Nanoshield could develop something like an autoimmune reaction. The authors' idea that it is possible to achieve 100% reliability by increasing the number of control systems is very superficial: the more complex a system is, the more difficult it is to calculate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of oceanic water contains 77 million living beings (in the northern Atlantic, according to the book «Zoology of Invertebrates»). Hostile ecophages can easily camouflage themselves as natural living beings, and, conversely, the ability of natural living beings to reproduce, move, and emit heat will significantly hamper the detection of ecophages, creating a high level of false alarms. Moreover, ecophages may at some stage in their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from a normal cell.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: for example, one can turn off the power, get random access to memory, or boot from other media, and an antivirus can be delivered instantly to any computer. Nevertheless, a significant portion of computers have been infected with viruses, and many users are resigned to the presence of some malware on their machines as long as it does not slow down their work too much.
7) Compare: Stanislaw Lem wrote a story, “Darkness and Mold,” whose main plot concerns ecophages.
8) The problem of the Nanoshield must be analyzed dynamically in time: at any given moment, the technical perfection of the Nanoshield must be ahead of the technical perfection of nanoreplicators. From this perspective the whole concept seems very vulnerable, because creating an effective global Nanoshield requires many years of development, both technical and political, while creating primitive ecophages which are nevertheless capable of completely destroying the biosphere requires much less effort. Example: creating a global missile defense system (ABM, which still does not exist) is much more complex technologically and politically than creating intercontinental nuclear missiles.
9) One should be aware that in the future there will be no principal difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of «fabs» which can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous bio-organisms or nanorobots (or malware could be integrated into existing computer programs, nanorobots, or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network which controls the Nanoshield) and send their code in electronic form. In addition to these three forms of virus (nanotechnological, biological, and computer) other forms are possible, for example a cognitive one: a virus transformed into a set of ideas in the human brain which pushes people to write computer viruses and nanobots. The idea of “hacking” is now such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, and thus viruses will be much more intelligent than today's computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to quickly rebuild themselves, even to invent innovative designs and adapt to new environments. An essential question of ecophagy is whether individual nanorobots are independent of each other, like bacteria, or act as a unified army with a single command and communication system. In the latter case, it may be possible to intercept the command of the hostile ecophage army.
11) Everything that is suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in nanowar.
12) A Nanoshield is possible only as a global organization. If any part of the Earth is not covered by it, the Nanoshield will be useless, because nanorobots will multiply there in such quantities that it will be impossible to confront them. It is also an effective weapon against people and organizations. Therefore it can appear only after the full and final political unification of the globe, which could result either from a world war for the unification of the planet, or from humanity merging in the face of a terrible catastrophe, such as an outbreak of ecophagy. In either case, the appearance of the Nanoshield would be preceded by some accident, which means a significant chance of humanity's loss.
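Why partial coverage fails can be seen from a minimal exponential-growth sketch: any uncovered sanctuary lets replicators compound unchecked. The one-hour doubling time below is an illustrative assumption (worst-case replication times discussed in the ecophagy literature are of this order or faster), not a figure from the text.

```python
# Sketch of point 12: replicators in an uncovered region grow
# exponentially, so even a tiny sanctuary soon yields an overwhelming
# population. The doubling time is an illustrative assumption.

def population(initial, doubling_time_hours, hours):
    """Population after `hours`, doubling every `doubling_time_hours`."""
    return initial * 2 ** (hours / doubling_time_hours)

# A single replicator with a 1-hour doubling time exceeds 10^18 copies
# in 60 hours, i.e. in under three days.
print(f"{population(1, 1.0, 60):.2e}")  # prints 1.15e+18
```

The exact doubling time matters far less than the shape of the curve: exponential growth means any fixed response delay translates into an astronomical population difference.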
13) The discovery of "cold fusion" or other unconventional energy sources would make possible a much more rapid spread of ecophagy, since ecophages could then live in the bowels of the earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophage could produce nano-soldiers that attack and kill all life. (Such ecophagy could become a global tool of blackmail.) It has been said that a few kilograms of nano-soldiers could be enough to destroy all people on Earth. Some kinds of ecophage could, in an early phase, disperse throughout the world while multiplying and moving very slowly and quietly, then produce a wave of nano-soldiers to attack humans and defensive systems, and only afterwards begin to multiply intensively across the globe. But a person equipped with nano-medicine could resist an attack of nano-soldiers, since medical nanorobots would be able to neutralize poisons and repair torn arteries. In that case a small nanorobot would have to attack primarily informationally, rather than through a large release of energy.
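The "few kilograms" claim above can be checked with back-of-envelope arithmetic. Every parameter below (per-robot mass, total mass, population) is an illustrative assumption, not a figure from the text; the point is only that microscopic unit mass makes the per-person count enormous.

```python
# Back-of-envelope check of the "few kilograms of nano-soldiers" claim.
# All parameter values are illustrative assumptions.

ROBOT_MASS_KG = 1e-15      # assume ~1 picogram per nanorobot
TOTAL_MASS_KG = 3.0        # "a few kilograms"
WORLD_POPULATION = 8e9     # rough current world population

robots = TOTAL_MASS_KG / ROBOT_MASS_KG
robots_per_person = robots / WORLD_POPULATION

print(f"{robots:.1e} robots, ~{robots_per_person:,.0f} per person")
# prints 3.0e+15 robots, ~375,000 per person
```

Even if the per-robot mass assumption is off by several orders of magnitude, the per-person count stays large, which is what makes the claim at least arithmetically plausible.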
15) Does information transparency mean that everyone can access the code of a dangerous computer virus, or the description of a nanorobot-ecophage? A world where viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only nanorobots but, above all, the persons or other entities who could launch ecophagy. The smaller the number of such people (for example, nanotechnology scientists), the easier they are to control. Conversely, the diffusion of this knowledge among billions of people would make the emergence of nano-hackers inevitable.
16) The claim that the number of creators of defenses against ecophagy will exceed the number of creators of ecophagy by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see the opposite: the number of virus writers exceeds the number of firms and projects working on anti-virus protection by many orders of magnitude; moreover, most anti-virus systems cannot work together, since they interfere with one another. Terrorists could also masquerade as people opposing ecophagy and deploy their own system for combating it, one containing a backdoor that allows it to be suddenly reprogrammed for a hostile goal.
17) The text implicitly assumes that the Nanoshield precedes the invention of self-improving AI of superhuman level. However, from other forecasts we know that this event is very likely, and most likely to occur simultaneously with the flourishing of advanced nanotechnology. Thus it is not clear in what timeframe the Nanoshield project would exist. An advanced artificial intelligence would be able to create a better Nanoshield and Infoshield, and also the means to overcome any human-made shield.
18) We should be aware of the equivalence of nanorobots and nanofactories: each can create the other. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate itself could construct a nanorobot, or reprogram itself into a nanorobot capable of replication.
