February 2013 – Lifeboat News: The Blog
https://lifeboat.com/blog
Safeguarding Humanity

Keeping Humans Safe in Space: Meet Robot Torsos Justin, Robonaut, SAR-400, & AILA
https://russian.lifeboat.com/blog/2013/02/keeping-humans-safe-in-space-meet-robot-torsos-justin-robonaut-sar-400-aila
Sat, 23 Feb 2013 08:58:11 +0000
A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of their tele-operational haptic feedback-capable Justin space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe, focused on tasks machines can’t do, and free of the space craziness that comes of trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours — which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!”
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •

[JUSTIN TELE-OPERATION SITUATION — IEEE]

Robot Space Torso Projects:
[JUSTIN — GERMANY/DLR • FACEBOOK • TWITTER]
[ROBONAUT — U.S.A./NASA • FACEBOOK • TWITTER]
[SAR-400 — RUSSIA/ROSCOSMOS — PLASTIC PALS • ROSCOSMOS FACEBOOK]
[AILA — GERMANY/DAS DFKI]

This piece originally appeared at Anthrobotic.com on February 21, 2013.

ATLAS — Watchmen To The Hour That The Sky Falls In
https://russian.lifeboat.com/blog/2013/02/atlas-watchmen-to-the-hour-that-the-sky-falls-in
Wed, 20 Feb 2013 15:01:04 +0000

With the recent meteor explosion over Russia coincident with the safe passing of asteroid 2012 DA14, and a spectacular approach by comet ISON expected toward the end of 2013, one could suggest that the Year of the Snake is one where we should look to the skies and consider our long-term safeguard against rocks from space.

Indeed, following the near ‘double whammy’ last week — when a 15-meter meteor caught us by surprise and caused extensive damage and injury in central Russia, while the larger, anticipated 50-meter asteroid swept to within just 27,000 km of Earth — media reported an immediate response from astronomers, with plans to create state-of-the-art detection systems to give warning of incoming asteroids and meteoroids. Concerns can be abated.
ATLAS, the Asteroid Terrestrial-impact Last Alert System, is due to begin operations in 2015 and is expected to give a one-week warning for a small asteroid (a “city killer”) and three weeks for a larger “county killer” — providing time for evacuation of risk areas.

Deep Space Industries (a US company) is preparing to launch a series of small spacecraft later this decade aimed at surveying nearby asteroids for mining opportunities; these craft could also be used to monitor smaller, difficult-to-detect objects that threaten to strike Earth.

However — despite ISON doom-merchants — we are already in relatively safe hands. NASA’s Sentry monitoring system maintains a risk table of possible future Earth impact events, typically tracking objects 50 meters or larger — none of which are currently expected to hit Earth. Comet ISON is not expected to pass any closer than 0.42 AU (63,000,000 km) from Earth, though it should still provide spectacular viewing in our night skies come December 2013.

A recently trending threat, the 140-meter-wide asteroid AG5, was given just a 1-in-625 chance of hitting Earth in February 2040, though more recent measurements have reduced this risk to almost nil.

The Torino Scale is currently used to rate the risk of asteroid and comet impacts, from 0 (no hazard) to 10 (certain, globally catastrophic collision). At present, almost all known asteroids and comets are categorized as level 0 on this scale. AG5 was temporarily categorized at level 1 until the recent measurements, and 2007 VK184 — a 130-meter asteroid due for approach circa 2048–2057 — is the only object currently listed at level 1 or higher.

An asteroid striking land will cause a crater far larger than the asteroid itself. The crater diameter in kilometers is roughly D = E^(1/3.4) / 10^6.77, where E is the impact energy. As such, if an asteroid the size of AG5 (140 meters wide) were to strike Earth, it would create a crater over twice the diameter of Barringer Meteor Crater in northern Arizona and affect an area far larger — or, on striking water, it would create a tsunami of global reach. Fortunately, the frequency of such an object striking Earth is quite low — perhaps once every 100,000 years. It is the smaller ones, such as the one which exploded over Russia last week, which are the greater concern. These occur perhaps once every 100 years and are not easily detectable by our current methods — justifying the $5m in funding NASA contributed to the new ATLAS development in Hawaii.
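The arithmetic above can be sketched in a few lines of Python. Note the assumptions: the ~3,000 kg/m³ density and ~17 km/s entry velocity are typical stony-asteroid values chosen here for illustration (they are not figures from this post), and the crater-diameter scaling is applied exactly as quoted; real outcomes are highly sensitive to these inputs.

```python
import math

def impact_energy_joules(diameter_m, density_kg_m3=3000.0, velocity_m_s=17000.0):
    """Kinetic energy E = 1/2 * m * v^2 of a spherical impactor.

    Density and velocity defaults are typical stony-asteroid values,
    assumed for illustration only.
    """
    radius_m = diameter_m / 2.0
    mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return 0.5 * mass_kg * velocity_m_s ** 2

def crater_diameter_km(energy_joules):
    """Crater diameter using the scaling quoted above: D = E^(1/3.4) / 10^6.77."""
    return energy_joules ** (1.0 / 3.4) / 10 ** 6.77

# An AG5-class, 140-meter-wide asteroid:
energy = impact_energy_joules(140.0)
print(f"impact energy: {energy:.2e} J")
print(f"crater diameter (quoted scaling): {crater_diameter_km(energy):.3f} km")
```

The energy works out to a few hundred megatons of TNT equivalent; the crater figure should be treated with caution, since the quoted scaling constant and its units are reproduced verbatim from the post rather than independently verified.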

We are a long way from deploying a response system to deflect/destroy incoming meteors, though at least with ATLAS we will be more confident of getting out of the way when the sky falls in. More information on ATLAS: http://www.fallingstar.com/index.php

Human Extinction Looms
https://russian.lifeboat.com/blog/2013/02/human-extinction-looms
Tue, 19 Feb 2013 17:33:17 +0000

Humanity’s wake-up call has been ignored, and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things and that means the human race is probably also a non-starter. For years I have been hoping for such an event- and saw it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon I predicted an event that would launch humankind into deep space.

Now I wait for ISON. Thirteen may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts, then only an impact will. If the impact throws enough particles into the atmosphere, no food will grow and World War C will begin. The C stands for cannibalism. If the impact hits the Ring of Fire, it may generate volcanic effects with the same result. If whatever hits Earth is big enough, it will render all life above the size of microbes extinct. We have spent trillions of dollars on defense, yet we are defenseless.

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Besides the impact threat, there is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown: Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives — ignoring death for the most part. And that is our critical error.

The universe does not care if we thrive or go extinct. If we do not care then a quick end is inevitable.

I have given the world my best answer to the question. That is all I can do:

http://voices.yahoo.com/water-bombs-8121778.html?cat=15

Told ya so
https://russian.lifeboat.com/blog/2013/02/told-ya-so
Fri, 15 Feb 2013 22:09:00 +0000

http://news.yahoo.com/meteor-explodes-over-russia-1-100-…38744.html

I have been hoping for exactly this event; now we will see if we are actually an intelligent species and protect the planet from impacts.

Machine Morality: a Survey of Thought and a Hint of Harbinger
https://russian.lifeboat.com/blog/2013/02/machine-morality-a-survey-of-thought-and-a-hint-of-harbinger
Fri, 08 Feb 2013 12:28:53 +0000
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview: a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
• February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

• January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

• April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

• December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over — after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do, you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant: it is of no consideration nor consequence in our thoughts or intentions toward machines. But at the same time, we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine or something close enough for us to regard as such is without doubt going to happen, it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, initially a 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

How can humans compete with singularity agents?
https://russian.lifeboat.com/blog/2013/02/how-can-humans-compete-with-ai-agents
Wed, 06 Feb 2013 13:29:03 +0000

It appears now that the intelligence of humans is being largely superseded by robots and artificial singularity agents. Education and technology have no chance of making us far more intelligent. The question now is: what is our place in this new world where we are no longer the most intelligent species?

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains — but I am not so sure that irrational domains cannot also be handled by machines.
