BLOG

Archive for the ‘ethics’ category: Page 67

Sep 26, 2014

Review: When Google Met WikiLeaks (2014) by Julian Assange

Posted by in categories: big data, bitcoin, computing, encryption, ethics, events, futurism, geopolitics, government, hacking, internet, journalism, law, law enforcement, media & arts, military, transhumanism, transparency
Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.

Continue reading “Review: When Google Met WikiLeaks (2014) by Julian Assange” »

Sep 25, 2014

Question: A Counterpoint to the Technological Singularity?

Posted by in categories: defense, disruptive technology, economics, education, environmental, ethics, existential risks, finance, futurism, lifeboat, policy, posthumanism, science, scientific freedom

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of Ray Kurzweil’s book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

Continue reading “Question: A Counterpoint to the Technological Singularity?” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted by in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks -- or Transhumanism” »

Sep 11, 2014

Justice Beyond Privacy

Posted by in categories: computing, disruptive technology, ethics, government, hacking, internet, law, policy, privacy, security

As the old social bonds unravel, philosopher and member of the Lifeboat Foundation’s advisory board Professor Steve Fuller asks: can we balance free expression against security?


Justice has always been about modes of interconnectivity. Retributive justice – ‘eye for an eye’ stuff – recalls an age when kinship was how we related to each other. In the modern era, courtesy of the nation-state, bonds have been forged in terms of common laws, common language, common education, common roads, etc. The internet, understood as a global information and communication infrastructure, is both enhancing and replacing these bonds, resulting in new senses of what counts as ‘mine’, ‘yours’, ‘theirs’ and ‘ours’ – the building blocks of a just society…

Read the full article at IAI.TV

Aug 21, 2014

Getting Sexy and the Undivided Attention of Your Fortune-500 Client CEOs! Aug 22 2014

Posted by in categories: architecture, big data, business, complex systems, disruptive technology, economics, education, engineering, ethics, existential risks, finance, futurism, government, information science, innovation, physics, science, scientific freedom, security

Getting Sexy and the Undivided Attention of Your Fortune-500 Client CEOs! (Excerpt from the White Swan book) By Andres Agostini at www.linkedin.com/in/andresagostini


(1 of 17) If you want to seize the undivided attention of top executives at Los Alamos National Laboratory and Procter & Gamble, talk to them in the language of Process Re-engineering.

(2 of 17) If you want to seize the undivided attention of top executives at GE, talk to them in the language of Six Sigma and Peter F. Drucker’s Management by Objectives (MBO). While you are with them, remember to comment on Jack Welch’s and Jeff Immelt’s master lectures at GE’s Crotonville.

Continue reading “Getting Sexy and the Undivided Attention of Your Fortune-500 Client CEOs! Aug 22 2014” »

Jul 15, 2014

Political futurism, ethics energized by sci-fi

Posted by in categories: entertainment, ethics, existential risks, philosophy, transhumanism
Literature has long served an indispensable purpose in exploring ethical and political themes. This remains true of sci-fi and fantasy, even if it is possible to read too much politics into a work of fiction or to over-analyze it.


Since Maquis Books published The Traveller and Pandemonium, a novel I wrote between 2011 and 2014, I have been responding as insightfully as possible to reviews and discussing the book’s political and philosophical themes wherever I can. Set in a fictional alien world, many of its 24 chapters turn on the all-too-real human weakness for infighting and for resorting to hardline, extremist and even messianic plans when faced with a desperate situation.

The story follows human cultures battling to survive in a deadly alien ecosystem. There the human race, rather than keeping animals in cages, must keep its own habitats in cages as protection from the world outside. The human characters live out a primitive existence untypical of science fiction, concerned mainly with their own survival. Technological progress is nonexistent, as all human effort has been redirected to self-defense against the alien predators.

Even though The Traveller and Pandemonium depicts humanity facing a common alien foe, the various struggling human factions still fail to cooperate. In fact, they turn ever more hostile toward one another even as the planet’s predators close in on the last remaining human states. By the time the story is set, human civilization on the planet faces imminent extinction from its own infighting and extremism, as well as from the planet’s aggressive native plant and animal life.

Continue reading “Political futurism, ethics energized by sci-fi” »

Jul 1, 2014

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)

Posted by in categories: business, defense, economics, education, ethics, existential risks, science, scientific freedom, security

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)


1. E.Q.-Focused Nations argue that millennia-old working terms such as Prudence, Tact, Sincerity, Kindness and Unambiguous Language DO NOT SUFFICE, and hence feel the need to invent a marketeer’s stunt: Emotional Intelligence. I.Q.-Centric Countries argue that those millennia-old terms remain useful and desirable, and that such stunts serve only to social-engineer and brainwash the weak. Ergo, all of these remain optimal: Prudence, Tact, Sincerity, Kindness and Unambiguous Language, as well as plain-vanilla Psychology 101.

2. E.Q.-Focused Nations are mired in universal corruption, in both private and public office. I.Q.-Centric Countries, by contrast, are characterized by transparency, accountability and reliability, as well as collective integrity and ethics.

Continue reading “E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)” »

Jul 1, 2014

Data Science: What the Facebook Controversy is Really About

Posted by in category: ethics

— The Atlantic

Facebook has always “manipulated” the results shown in its users’ News Feeds by filtering and personalizing for relevance. But this weekend, the social giant seemed to cross a line, when it announced that it engineered emotional responses two years ago in an “emotional contagion” experiment, published in the Proceedings of the National Academy of Sciences (PNAS).

Since then, critics have examined many facets of the experiment, including its design, methodology, approval process, and ethics. Each of these tacks tacitly accepts something important, though: the validity of Facebook’s science and scholarship. There is a more fundamental question in all this: What does it mean when we call proprietary data research “data science”?

As a society, we haven’t fully established how we ought to think about data science in practice. It’s time to start hashing that out.

Read more

Jun 30, 2014

New book: The Beginning and the End by Clément Vidal

Posted by in categories: alien life, complex systems, ethics, philosophy, physics, posthumanism, singularity

By Clément Vidal — Vrije Universiteit Brussel, Belgium.

I am happy to inform you that I just published a book which deals at length with our cosmological future. I made a short book trailer introducing it, and the book has been mentioned in the Huffington Post and H+ Magazine.

About the book:
In this fascinating journey to the edge of science, Vidal takes on big philosophical questions: Does our universe have a beginning and an end, or is it cyclic? Are we alone in the universe? What is the role of intelligent life, if any, in cosmic evolution? Grounded in science and committed to philosophical rigor, this book presents an evolutionary worldview where the rise of intelligent life is not an accident, but may well be the key to unlocking the universe’s deepest mysteries. Vidal shows how the fine-tuning controversy can be advanced with computer simulations. He also explores whether natural or artificial selection could hold on a cosmic scale. In perhaps his boldest hypothesis, he argues that signs of advanced extraterrestrial civilizations are already present in our astrophysical data. His conclusions invite us to see the meaning of life, evolution, and intelligence from a novel cosmological framework that should stir debate for years to come.
About the author:
Dr. Clément Vidal is a philosopher with a background in logic and cognitive sciences. He is co-director of the ‘Evo Devo Universe’ community and founder of the ‘High Energy Astrobiology’ prize. To satisfy his intellectual curiosity when facing the big questions, he brings together many areas of knowledge such as cosmology, physics, astrobiology, complexity science, evolutionary theory and philosophy of science.
http://clement.vidal.philosophons.com

You can get 20% off with the discount code ‘Vidal2014’ (valid until 31st July)!

Jun 12, 2014

Could a machine or an AI ever feel human-like emotions?

Posted by in categories: bionic, cyborgs, ethics, existential risks, futurism, neuroscience, philosophy, posthumanism, robotics/AI, singularity, transhumanism

Computers will soon be able to simulate the functioning of a human brain. In the near future, artificial superintelligence could become vastly more intellectually capable and versatile than humans. But could machines ever truly experience the whole range of human feelings and emotions, or are there technical limitations?

In a few decades, intelligent and sentient humanoid robots will wander the streets alongside humans, work with humans, socialize with humans, and perhaps one day will be considered individuals in their own right. Research in artificial intelligence (AI) suggests that intelligent machines will eventually be able to see, hear, smell, sense, move, think, create and speak at least as well as humans. They will feel emotions of their own and probably one day also become self-aware.

There may not be any reason per se to want sentient robots to experience exactly all the emotions and feelings of a human being, but it may be interesting to explore the fundamental differences in the way humans and robots can sense, perceive and behave. Tiny genetic variations between people can result in major discrepancies in the way each of us thinks, feels and experiences the world. If we appear so diverse despite the fact that all humans are on average 99.5% genetically identical, even across racial groups, how could we possibly expect sentient robots to feel exactly the same way as biological humans? There could be striking similarities between us and robots, but also drastic divergences on some levels. This is what we will investigate below.

Continue reading “Could a machine or an AI ever feel human-like emotions?” »

Page 67 of 82