BLOG

Archive for the ‘singularity’ category: Page 51

Dec 7, 2013

Our Final Invention: How the Human Race Goes and Gets Itself Killed

Posted by in categories: complex systems, defense, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, posthumanism, robotics/AI, singularity, supercomputing

By Greg Scoblete — Real Clear Technology

We worry about robots.

Hardly a day goes by when we’re not reminded that robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.

Continue reading “Our Final Invention: How the Human Race Goes and Gets Itself Killed” »

Dec 1, 2013

Military–Industrial Complex Supermanagement!

Posted by in categories: business, complex systems, economics, education, engineering, ethics, existential risks, finance, futurism, information science, science, singularity, sustainability, transparency

EXCERPT

To further underpin this statement, I will share Peter Drucker’s quote, “…The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic…” And also that of Dr. Stephen Covey, “…Again, yesterday holds tomorrow hostage. … Memory is past. It is finite. Vision is future. It is infinite. Vision is greater than history…” And that of Sir Francis Bacon, “…He that will not apply new remedies must expect new evils, for time is the greatest innovator…”

And that of London Business School Professor Gary Hamel, PhD, “…You cannot get to a new place with an old map…” And that of Alvin Toffler, “…The future always comes too fast and in the wrong order…”

View the entire presentation at http://lnkd.in/dP2PmCP

Nov 30, 2013

Supermanagement!

Posted by in categories: bitcoin, business, complex systems, economics, education, engineering, ethics, existential risks, finance, futurism, geopolitics, information science, physics, robotics/AI, science, singularity, sustainability, transparency

Supermanagement! by Mr. Andres Agostini (Excerpt)

DEEPEST

“…What distinguishes our age from every other is not the world-flattening impact of communications, not the economic ascendance of China and India, not the degradation of our climate, and not the resurgence of ancient religious animosities. Rather, it is a frantically accelerating pace of change…”


Read the entire piece at http://lnkd.in/bYP2nDC

Nov 20, 2013

Can We Live Forever?

Posted by in categories: evolution, futurism, human trajectories, life extension, nanotechnology, philosophy, robotics/AI, science, singularity

The Lifeboat community doesn’t need me to tell them that a growing number of scientists are dedicating their time and energy to research that could radically alter the human aging trajectory. As a result, we could be on the verge of the end of aging. But from an anthropological and evolutionary perspective, humans have always had the desire to end aging. Most human culture groups on the planet addressed this desire by inventing some belief structure incorporating eternal consciousness. In my mind this is a logical consequence of A) realizing you are going to die and B) not knowing how to prevent that tragedy. So from that perspective, I wanted to create a video that contextualized the modern scientific belief in radical life extension with the religious/mythological beliefs of our ancestors.

Continue reading “Can We Live Forever?” »

Nov 14, 2013

The Disruptional Singularity

Posted by in categories: business, climatology, complex systems, cosmology, counterterrorism, cybercrime/malcode, defense, economics, education, engineering, ethics, existential risks, finance, futurism, nanotechnology, physics, policy, robotics/AI, science, singularity, supercomputing, sustainability, transparency

(Excerpt)

Beyond the managerial challenges (downside risks) presented by exponential technologies, as understood in the Technological Singularity and its futuristic forces already impacting the present, there are also grave global risks that management in many forms must tackle immediately.

These grave global risks have nothing to do with advanced science or technology. Many of these hazards stem from nature, and some are man-made.

For instance, these grave global risks (embodying the Disruptional Singularity) are geological, climatological, political, geopolitical, demographic, social, economic, financial, legal, and environmental, among others. The Disruptional Singularity’s major risks threaten us gravely right now, not later.

Read the full document at http://lnkd.in/bYP2nDC

Nov 12, 2013

The Future of Scientific Management, Today!

Posted by in categories: business, counterterrorism, defense, economics, education, engineering, ethics, existential risks, finance, futurism, science, singularity, sustainability, transparency

The Future of Scientific Management, Today! (Excerpt)

Transformative and Integrative Risk Management
Andres Agostini was asked this question:

Mr. David Shaw’s question: “…Andres, from your work on the future, which management skills need to be developed? Classically the management role is about planning, organizing, leading and controlling. With the changes coming in the future, what’s your view on how this management mix needs to change and adapt?…” The question was posted on an Internet forum by Mr. David Shaw (Peterborough, United Kingdom) on October 9, 2013.

Continue reading “The Future of Scientific Management, Today!” »

Jul 8, 2013

The Post-Human World

Posted by in categories: biological, complex systems, evolution, futurism, robotics/AI, singularity


Originally posted via The Advanced Apes

Through my writings I have tried to communicate how unique our intelligence is and how it continues to evolve. Intelligence is the most bizarre of biological adaptations: it appears to be an adaptation of infinite reach. Whereas organisms can only be so fast and efficient at running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.

What does this mean for our lives in the 21st century?

Continue reading “The Post-Human World” »

Jun 16, 2013

Vaccinate against B.S.O.D — Insure your Memories.

Posted by in categories: ethics, evolution, futurism, robotics/AI, singularity


“…and on the third day he rose again…”

If we approach the subject from a non-theist point of view, what we have is a reboot: a restore of a previously working “system image.” Can we restore a person to the last known working state prior to system failure?

As our biological (analog) lives get more entwined with the digital world we have created, chances are there are options worth exploring. It all comes down to “sampling”: taking snapshots of our analog lives and storing them digitally. Today, with reasonable precision, we can sample, store, and recreate most of our primary senses digitally. Sight can be captured via cameras, sound via microphones, touch via haptics; even scents can be sampled and/or synthesized with remarkable accuracy.
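This notion of “sampling” is the same one that underlies all digital capture of analog phenomena: measure a continuous signal at discrete instants, at a rate fast enough that nothing is lost. A minimal sketch in Python (the 440 Hz tone and 8 kHz rate are arbitrary illustrative values, not anything from the post):

```python
import math

def sample_signal(f_hz, duration_s, rate_hz):
    """Take discrete 'snapshots' of a continuous sine signal at a fixed rate."""
    n = int(duration_s * rate_hz)
    return [math.sin(2 * math.pi * f_hz * t / rate_hz) for t in range(n)]

# Per the Nyquist criterion, the sampling rate must exceed twice the
# highest frequency present, or information is lost to aliasing.
samples = sample_signal(f_hz=440, duration_s=0.01, rate_hz=8000)
print(len(samples))  # 80 snapshots for 10 ms of "analog" signal
```

The same principle, applied at far richer scales, is what “taking snapshots of our analog lives” would amount to in practice.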


Continue reading “Vaccinate against B.S.O.D — Insure your Memories.” »

Jun 4, 2013

Recreating Heaven and Earth…in real-time.

Posted by in categories: evolution, futurism, habitats, media & arts, singularity

Prologue:

‘Let there be light,’ said the CGI-God, and there was light…and God Rays.

We were out in the desert; barren land, and our wish was that it be transformed into a green oasis; a tropical paradise.

Continue reading “Recreating Heaven and Earth…in real-time.” »

May 31, 2013

How Could WBE+AGI be Easier than AGI Alone?

Posted by in categories: complex systems, engineering, ethics, existential risks, futurism, military, neuroscience, singularity, supercomputing

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole-Brain Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price-performance as the best measure of an impending intelligence explosion (e.g., Kurzweil’s threshold is the point at which enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below suggesting that the imminence of an intelligence explosion is more affected by raw processing speed, or instructions per second (IPS), regardless of cost or resource requirements per unit of computation, than by computational price-performance. This scenario also yields some additional, counterintuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone; or rather, that using WBE to mediate an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than implementing AGI directly.
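The difference between the two measures can be made concrete with a toy calculation. A minimal sketch, assuming (purely for illustration) a brain-simulation requirement of 10^16 operations per second and a hypothetical machine's price and speed; none of these specific numbers come from the essay:

```python
BRAIN_OPS_PER_SEC = 1e16          # assumed requirement to simulate a human brain
machine_speed_ops = 2e16          # raw speed (IPS) of a hypothetical supercomputer
machine_cost_usd = 500_000_000    # its hypothetical price

# Price-performance measure: what does brain-scale compute cost?
cost_of_brain_equivalent = machine_cost_usd * (BRAIN_OPS_PER_SEC / machine_speed_ops)
affordable_to_average_person = cost_of_brain_equivalent <= 1_000  # Kurzweil-style threshold

# Raw-speed measure: can *any* machine, at any price, run a brain in real time?
brain_scale_speed_exists = machine_speed_ops >= BRAIN_OPS_PER_SEC

print(affordable_to_average_person)  # False: fails the $1,000 price-performance test
print(brain_scale_speed_exists)      # True: passes the raw-IPS test
```

On the argument sketched above, it is the second condition, not the first, that gates an intelligence explosion: a single well-funded project with access to one brain-scale machine could begin the process long before such compute is affordable to the average person.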

Loaded Uploads:

Continue reading “How Could WBE+AGI be Easier than AGI Alone?” »

Page 51 of 53