BLOG

Archive for the ‘defense’ category: Page 3

Dec 29, 2014

Corporate Reconnoitering?

Posted by in categories: cybercrime/malcode, cyborgs, defense, economics, electronics, encryption, engineering, ethics, existential risks, finance, futurism, information science, innovation, life extension, physics, science, security, sustainability

Corporate Reconnoitering?

ABSOLUTE END.

Authored By Copyright Mr. Andres Agostini

White Swan Book Author (Source of this Article)

http://www.LINKEDIN.com/in/andresagostini

Continue reading “Corporate Reconnoitering?” »

Dec 28, 2014

Kaizen and Six Sigma Vs. White Swan “…Transformative and Integrative Risk Management …” [Pictorial] — Andres Agostini

Posted by in categories: big data, business, complex systems, cyborgs, defense, economics, education, engineering, ethics, existential risks, futurism, geopolitics, information science, innovation, lifeboat, physics, science, security, transparency

Kaizen and Six Sigma Vs. White Swan “…Transformative and Integrative Risk Management …”


ABSOLUTE END.

Continue reading “Kaizen and Six Sigma Vs. White Swan ‘…Transformative and Integrative Risk Management …’ [Pictorial] — Andres Agostini” »

Dec 22, 2014

Homer Simpson on NASA and Bart Simpson on Book Of Five Rings and the Noda Secret! By Mr. Andres Agostini

Posted by in categories: automation, big data, business, complex systems, computing, defense, disruptive technology, economics, education, engineering, ethics, existential risks, finance, futurism, innovation, physics, robotics/AI, science, security, strategy, transparency

Homer Simpson on NASA and Bart Simpson on Book Of Five Rings and the Noda Secret!

Homer: Son, it has been said that Kaizen is “good change.”

Bart: Dad, good change? Do you mean the throttle?

Homer: Son, what do you mean by throttle?

Bart: Dad, the gas pedal gone lunatic!

Continue reading “Homer Simpson on NASA and Bart Simpson on Book Of Five Rings and the Noda Secret! By Mr. Andres Agostini” »

Oct 22, 2014

Pentagon preparing for mass civil breakdown

Posted by in categories: defense, government, security

— The Guardian

Pentagon Building in Washington

A US Department of Defense (DoD) research programme is funding universities to model the dynamics, risks and tipping points for large-scale civil unrest across the world, under the supervision of various US military agencies. The multi-million dollar programme is designed to develop immediate and long-term “warfighter-relevant insights” for senior officials and decision makers in “the defense policy community,” and to inform policy implemented by “combatant commands.”

Launched in 2008 – the year of the global banking crisis – the DoD ‘Minerva Research Initiative’ partners with universities “to improve DoD’s basic understanding of the social, cultural, behavioral, and political forces that shape regions of the world of strategic importance to the US.”
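The Minerva programme’s actual models are not public, but “tipping points” for large-scale unrest are commonly illustrated with Granovetter-style threshold models, in which each agent joins a movement once the fraction of the population already active meets that agent’s personal threshold. A minimal, purely illustrative sketch (the function, thresholds, and numbers below are assumptions for exposition, not anything from the DoD programme):

```python
# Illustrative Granovetter-style threshold model of a cascading mobilization.
# Each agent acts once the fraction of the population already active reaches
# that agent's personal threshold; agents with threshold 0 act unconditionally.

def cascade(thresholds):
    """Return the final number of active agents, given each agent's threshold
    (the fraction of the population that must already be active)."""
    n = len(thresholds)
    active = sum(1 for t in thresholds if t == 0)  # unconditional instigators
    changed = True
    while changed:
        changed = False
        frac = active / n
        new_active = sum(1 for t in thresholds if t <= frac)
        if new_active > active:
            active = new_active
            changed = True
    return active

# A uniform ladder of thresholds tips completely, but removing the single
# instigator (threshold 0) stalls the cascade entirely -- a tipping point
# in miniature.
print(cascade([i / 10 for i in range(10)]))     # full cascade: 10
print(cascade([i / 10 for i in range(1, 11)]))  # no instigator: 0
```

The point of such toy models is their extreme sensitivity: removing or adding a single agent near the threshold boundary can flip the outcome from no mobilization to total mobilization, which is why “tipping point” modelling attracts policy interest.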

Read more

Sep 25, 2014

Question: A Counterpoint to the Technological Singularity?

Posted by in categories: defense, disruptive technology, economics, education, environmental, ethics, existential risks, finance, futurism, lifeboat, policy, posthumanism, science, scientific freedom

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

Continue reading “Question: A Counterpoint to the Technological Singularity?” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted by in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »

Sep 13, 2014

Neuromodulation 2.0: New Developments in Brain Implants, Super Soldiers and the Treatment of Chronic Disease

Posted by in categories: biotech/medical, defense, transhumanism

Written By: — Singularity Hub


Brain implants here we come.

DARPA just announced the ElectRX program, a $78.9 million attempt to develop miniscule electronic devices that interface directly with the nervous system in the hopes of curing a bunch of chronic conditions, ranging from the psychological (depression, PTSD) to the physical (Crohn’s, arthritis). Of course, the big goal here is to usher in a revolution in neuromodulation—that is, the science of modulating the nervous system to fix an underlying problem.

Read more

Sep 4, 2014

Navy’s Next Fighter Likely to Feature Artificial Intelligence

Posted by in categories: defense, robotics/AI

By: — USNI News

Boeing concept for F/A-XX. Boeing Image

Artificial intelligence will likely feature prominently onboard the Pentagon’s next-generation successors to the Boeing F/A-18E/F Super Hornet and the Lockheed Martin F-22 Raptor.

“AI is going to be huge,” said one U.S. Navy official familiar with the service’s F/A-XX effort to replace the Super Hornet starting around 2030.

Further, while there are significant differences between the U.S. Air Force’s vision for its F-X air superiority fighter and the Navy’s F/A-XX, the two services agree on some fundamental aspects about what characteristics the jet will need to share.

Read more

Aug 28, 2014

Funding Request

Posted by in categories: astronomy, business, cosmology, defense, disruptive technology, general relativity, physics, quantum physics, science, space, space travel

Astrophysicists like Robert Nemiroff have shown, using Hubble photographs, that quantum foam does not exist. Further, the famous string theorist Michio Kaku stated in his April 2008 Space Show interview that string theories will require hundreds of years before gravity modification is feasible.

Hence the need to fund research into alternative propulsion technologies that can get us into space more cheaply and quickly. We can be assured that such space technologies will filter down into terrestrial technologies.

Continue reading “Funding Request” »

Jul 1, 2014

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)

Posted by in categories: business, defense, economics, education, ethics, existential risks, science, scientific freedom, security

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)


1.- E.Q.-Focused Nations argue that the millenarian applied terms such as Prudence, Tact, Sincerity, Kindness and Unambiguous Language DO NOT SUFFICE, and hence they need to invent a marketeer’s stunt: Emotional Intelligence. I.Q.-Centric Countries argue that the millenarian applied terms are beyond utility and desirability, and that such stunts exist to social-engineer and brainwash the weak. Ergo, all of these are optimal: Prudence, Tact, Sincerity, Kindness and Unambiguous Language, as well as plain-vanilla Psychology 101.

2.- E.Q.-Focused Nations are mired in universal corruption, both in private and public office. I.Q.-Centric Countries are steeped in transparency, accountability and reliability, as well as collective integrity and ethics.

Continue reading “E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)” »

Page 3 of 20