BLOG

Archive for the ‘ethics’ category: Page 66

Nov 20, 2014

Bitcoin, Cryptocurrency, and Blockchain Technology — Voting Systems

Posted in categories: automation, big data, bitcoin, business, complex systems, computing, disruptive technology, economics, encryption, engineering, ethics, geopolitics, government, hacking, hardware, information science, innovation, law, materials, open access, open source, philosophy, policy, polls, privacy, science, security, software, supercomputing, transparency, treaties

Quoted: “Bitcoin technology offers a fundamentally different approach to vote collection with its decentralized and automated secure protocol. It solves the problems of both paper ballot and electronic voting machines, enabling a cost effective, efficient, open system that is easily audited by both individual voters and the entire community. Bitcoin technology can enable a system where every voter can verify that their vote was counted, see votes for different candidates/issues cast in real time, and be sure that there is no fraud or manipulation by election workers.”
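
The auditability claim in the quote reduces to two mechanisms: each voter publishes only a cryptographic commitment to their ballot, and every commitment is recorded in an append-only, hash-chained ledger that anyone can download and re-verify. The Python sketch below illustrates just those two mechanisms under assumed details (SHA-256 commitments, an in-memory list standing in for the blockchain); it is not the protocol described in the article.

```python
# A minimal, illustrative sketch (assumptions, not the article's protocol):
# votes are recorded as hash commitments in an append-only, hash-chained
# ledger that any voter can download and audit.

import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def commit_vote(choice: str, secret_nonce: str) -> str:
    """The voter publishes only this commitment; the nonce stays private."""
    return sha256(f"{choice}:{secret_nonce}".encode())

def append_entry(ledger: list, commitment: str) -> None:
    """Each entry chains to the previous one, so tampering is detectable."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {"commitment": commitment, "prev_hash": prev_hash}
    entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
    ledger.append(entry)

def verify_ledger(ledger: list) -> bool:
    """Anyone can recompute every hash and confirm nothing was altered."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = {"commitment": entry["commitment"], "prev_hash": entry["prev_hash"]}
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["entry_hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
            return False
        prev_hash = entry["entry_hash"]
    return True

# A voter casts a vote, then later checks that their commitment is on the ledger.
ledger = []
my_commitment = commit_vote("candidate_a", secret_nonce="s3cr3t-nonce")
append_entry(ledger, my_commitment)
append_entry(ledger, commit_vote("candidate_b", "another-nonce"))

assert verify_ledger(ledger)
assert any(e["commitment"] == my_commitment for e in ledger)
```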


Read the article here » http://www.entrepreneur.com/article/239809?hootPostID=ba473f…aacc8412c7

Nov 19, 2014

BitCoin, Cryptocurrency, and Blockchain Technology — FACTOM

Posted in categories: automation, big data, biotech/medical, bitcoin, business, complex systems, computing, disruptive technology, economics, education, encryption, engineering, environmental, ethics, finance, futurism, geopolitics, hacking, information science, law, materials, open access, policy, science, security, software, supercomputing, transparency

Quoted: “The Factom team suggested that its proposal could be leveraged to execute some of the crypto 2.0 functionalities that are beginning to take shape on the market today. These include creating trustless audit chains, property title chains, record keeping for sensitive personal, medical and corporate materials, and public accountability mechanisms.

During the AMA, the Factom president was asked how the technology could be leveraged to shape the average person’s daily life.”

Kirby responded:

“Factom creates permanent records that can’t be changed later. In a Factom world, there’s no more robo-signing scandals. In a Factom world, there are no more missing voting records. In a Factom world, you know where every dollar of government money was spent. Basically, the whole world is made up of record keeping and, as a consumer, you’re at the mercy of the fragmented systems that run these records.”
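
Kirby’s claim about records that “can’t be changed later” rests on a general anchoring technique: documents are kept off-chain, but a compact fingerprint committing to all of them (for example, a Merkle root over their hashes) is published on a blockchain, so any later alteration becomes detectable. The sketch below illustrates that general idea; the record names and hashing details are assumptions for illustration, not Factom’s actual data structures.

```python
# Illustrative sketch of blockchain "anchoring" (assumed details, not Factom's
# real design): store documents off-chain, publish one fingerprint that commits
# to all of them, and any later alteration of a record becomes detectable.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: list) -> bytes:
    """Pairwise-hash the leaves until a single root remains."""
    level = list(leaf_hashes)
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"land title #1042", b"loan document #77", b"vote tally, precinct 9"]
root = merkle_root([h(r) for r in records])   # this small value gets anchored

# Years later: anyone holding the original records and the published root can
# detect tampering, because changing any record changes the recomputed root.
tampered = [b"land title #1042 (amended)", b"loan document #77", b"vote tally, precinct 9"]
assert merkle_root([h(r) for r in tampered]) != root
```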

Continue reading “BitCoin, Cryptocurrency, and Blockchain Technology — FACTOM” »

Nov 17, 2014

A New Economic Layer — BitCoin, Cryptocurrency, and Blockchain Technology

Posted in categories: big data, bitcoin, business, complex systems, computing, disruptive technology, economics, electronics, encryption, engineering, ethics, finance, futurism, geopolitics, hacking, human trajectories, information science, innovation, internet, law, materials, media & arts, military, open access, open source, policy, privacy, science, scientific freedom, security, software, supercomputing

Preamble: Bitcoin 1.0 is currency — the deployment of cryptocurrencies in applications related to cash such as currency transfer, remittance, and digital payment systems. Bitcoin 2.0 is contracts — the whole slate of economic, market, and financial applications using the blockchain that are more extensive than simple cash transactions: stocks, bonds, futures, loans, mortgages, titles, smart property, and smart contracts.

Bitcoin 3.0 is blockchain applications beyond currency, finance, and markets, particularly in the areas of government, health, science, literacy, culture, and art.
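
For readers unfamiliar with the “contracts” layer, the idea is that the terms of an agreement are expressed as executable rules enforced automatically rather than by an intermediary. The toy Python sketch below shows the shape of such a rule, releasing funds only when a coded condition is met; the names and conditions are invented for illustration, and real smart-contract platforms differ substantially under the hood.

```python
# A toy escrow "contract" (invented example): funds are released only when
# the coded condition holds, with no middleman making the call.

from dataclasses import dataclass

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    released: bool = False

    def confirm_delivery(self, confirmer: str) -> None:
        # Only the buyer may confirm that the goods arrived.
        if confirmer != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True

    def release_funds(self) -> str:
        # Funds move only once the condition encoded above has been met.
        if not self.delivered:
            raise RuntimeError("condition not met: delivery unconfirmed")
        self.released = True
        return f"{self.amount} BTC released to {self.seller}"

contract = EscrowContract(buyer="alice", seller="bob", amount=0.5)
contract.confirm_delivery("alice")
print(contract.release_funds())   # -> "0.5 BTC released to bob"
```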

Read the article here » http://ieet.org/index.php/IEET/more/swan20141110

Oct 1, 2014

The Abolition of Medicine as a Goal for Humanity 2.0

Posted in categories: biological, bionic, biotech/medical, ethics, futurism, genetics, homo sapiens, human trajectories, life extension, philosophy, policy, transhumanism

What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.

Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also to die well when the time comes. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise, but they certainly were not progressive.

However, this began to change in the mid-19th century when the great medical experimenter, Claude Bernard, began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.

Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?

Continue reading “The Abolition of Medicine as a Goal for Humanity 2.0” »

Sep 29, 2014

Towards a ‘Right to Science’

Posted in categories: ethics, genetics, government, law, philosophy, policy, science

In 1906 the great American pragmatist philosopher William James delivered a public lecture entitled, ‘The Moral Equivalent of War’. James imagined a point in the foreseeable future when states would rationally decide against military options to resolve their differences. While he welcomed this prospect, he also believed that the abolition of warfare would remove an important pretext for people to think beyond their own individual survival and toward some greater end, perhaps one that others might end up enjoying more fully. What then might replace war’s altruistic side?

It is telling that the most famous political speech to adopt James’ title was US President Jimmy Carter’s 1977 call for national energy independence in response to the Arab oil embargo. Carter characterised the battle ahead as really about America’s own ignorance and complacency rather than some Middle Eastern foe. While Carter’s critics pounced on his trademark moralism, they should have looked instead to his training as a nuclear scientist. Historically speaking, nothing can beat a science-led agenda to inspire a long-term, focused shift in a population’s default behaviours. Louis Pasteur perhaps first exploited this point by declaring war on the germs that he had shown lay behind not only human and animal disease but also France’s failing wine and silk industries. Moreover, Richard Nixon’s ‘war on cancer’, first declared in 1971, continues to be prosecuted on the terrain of genomic medicine, even though arguably a much greater impact on the human condition could have been achieved by equipping the ongoing ‘war on poverty’ with comparable resources and resoluteness.

Science’s ability to step in as war’s moral equivalent has less to do with whatever personal authority scientists command than with the universal scope of scientific knowledge claims. Even if today’s science is bound to be superseded, its import potentially bears on everyone’s life. Once that point is understood, it is easy to see how each person could be personally invested in advancing the cause of scientific research. In the heyday of the welfare state, that point was generally understood. Thus, in The Gift Relationship, perhaps the most influential work in British social policy of the past fifty years, Richard Titmuss argued, by analogy with voluntary blood donation, that citizens have a duty to participate as research subjects, but not because of the unlikely event that they might directly benefit from their particular experiment. Rather, citizens should participate because they would have already benefitted from experiments involving their fellow citizens and will continue to benefit similarly in the future.

However, this neat fit between science and altruism has been undermined over the past quarter-century on two main fronts. One stems from the legacy of Nazi Germany, where the duty to participate in research was turned into a vehicle to punish undesirables by studying their behaviour under various ‘extreme conditions’. Indicative of the horrific nature of this research is that even today few are willing to discuss any scientifically interesting results that might have come from it. Indeed, the pendulum has swung the other way. Elaborate research ethics codes enforced by professional scientific bodies and university ‘institutional review boards’ protect both scientist and subject in ways that arguably discourage either from having much to do with the other. Even defenders of today’s ethical guidelines generally concede that had such codes been in place over the past two centuries, science would have progressed at a much slower pace.

The other and more current challenge to the idea that citizens have a duty to participate in research comes from the increasing privatisation of science. If a state today were to require citizen participation in drug trials, as it might jury duty or military service, the most likely beneficiary would be a transnational pharmaceutical firm capable of quickly exploiting the findings for profitable products. What may be needed, then, is not a duty but a right to participate in science. This proposal, advanced by Sarah Chan at the University of Manchester’s Institute for Bioethics, looks like a slight shift in legal language. But it is the difference between science appearing as an obligation and an opportunity for the ordinary citizen. In the latter case, one does not simply wait for scientists to invite willing subjects. Rather, potential subjects are invited to organize themselves and lobby the research community with their specific concerns. In our recent book, The Proactionary Imperative, Veronika Lipinska and I propose the concept of ‘hedgenetics’ to capture just this prospect for those who share socially relevant genetic traits. It may mean that scientists no longer exert final control over their research agenda, but the benefit is that they can be assured of steady public support for their work.

Sep 27, 2014

Getting Apple, Microsoft and Fortune-500s to Uninterruptedly Buy From You!

Posted in categories: business, computing, disruptive technology, economics, education, electronics, engineering, ethics, information science, science, scientific freedom

Apple, Berkshire Hathaway Corporation, Mitsubishi Motors, Honda, Daimler-Chrysler’s Mercedes-Benz, Toyota, Royal Dutch Shell Oil Company, Google, Xerox, Exxon-Mobil, Boeing, Amazon, Procter & Gamble, NASA and DARPA, Lockheed Martin, RAND Corporation and HUDSON Institute, Northrop Grumman Corporation, GEICO, Microsoft, etc.

FOREWORD:

You are going to need to prepare yourself thoroughly. You will need a Brioni suit and a silk tie, and to understand, further on in this material, how to get lucky via Rampant Rocket Science.

Continue reading “Getting Apple, Microsoft and Fortune-500s to Uninterruptedly Buy From You!” »

Sep 26, 2014

Review: When Google Met WikiLeaks (2014) by Julian Assange

Posted in categories: big data, bitcoin, computing, encryption, ethics, events, futurism, geopolitics, government, hacking, internet, journalism, law, law enforcement, media & arts, military, transhumanism, transparency

Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.

Continue reading “Review: When Google Met WikiLeaks (2014) by Julian Assange” »

Sep 25, 2014

Question: A Counterpoint to the Technological Singularity?

Posted in categories: defense, disruptive technology, economics, education, environmental, ethics, existential risks, finance, futurism, lifeboat, policy, posthumanism, science, scientific freedom

Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“ … A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

Continue reading “Question: A Counterpoint to the Technological Singularity?” »

Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted in categories: alien life, biological, cyborgs, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well-known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn – the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »

Sep 11, 2014

Justice Beyond Privacy

Posted in categories: computing, disruptive technology, ethics, government, hacking, internet, law, policy, privacy, security

As the old social bonds unravel, philosopher and member of the Lifeboat Foundation’s advisory board Professor Steve Fuller asks: can we balance free expression against security?

Justice has always been about modes of interconnectivity. Retributive justice – ‘eye for an eye’ stuff – recalls an age when kinship was how we related to each other. In the modern era, courtesy of the nation-state, bonds have been forged in terms of common laws, common language, common education, common roads, etc. The internet, understood as a global information and communication infrastructure, is both enhancing and replacing these bonds, resulting in new senses of what counts as ‘mine’, ‘yours’, ‘theirs’ and ‘ours’ – the building blocks of a just society…

Read the full article at IAI.TV

Page 66 of 82