September 2007 – Lifeboat News: The Blog (https://lifeboat.com/blog) – Safeguarding Humanity

SCADA (in)Security’s Going to Cost Us
Fri, 28 Sep 2007

When I read about the “Aurora Generator Test” video that was leaked to the media, I wondered why it was leaked now, and who benefits. Like many of you, I question the reasons behind any leak from an “unnamed source” inside the US Federal government to the media. Hopefully we’ll all benefit from this particular leak.

Then I thought back to a conversation I had at a trade show booth several years ago. I was speaking with a fellow from the power generation industry. He said he was very worried about the security ramifications of a hardware refresh of the SCADA systems his utility was using to control its power generation equipment: the legacy UNIX-based SCADA systems were going to be replaced by Windows-based systems. He was even more worried that the “air gaps” that have historically been used to physically separate the SCADA control networks from the power company’s regular data networks might be removed to cut costs.

Thankfully, on July 19, 2007 the Federal Energy Regulatory Commission proposed to the North American Electric Reliability Corporation a set of new, and much overdue, cyber security standards that will, once adopted and enforced, do a great deal to make an attacker’s job harder. Thank God, the people who operate the most critically important part of our national infrastructure have noticed the obvious.

Hopefully a little sunlight will help accelerate the process of reducing the attack surface of North America’s power grid.

After all, the march to the Singularity will go a lot slower without a reliable power grid.

Matt McGuirl, CISSP

New field-deployable biosensor detects avian influenza virus in minutes instead of days
Thu, 27 Sep 2007

A new biosensor developed at the Georgia Tech Research Institute (GTRI) can detect avian influenza in just minutes. In addition to being a rapid test, the biosensor is economical, field-deployable, sensitive to different viral strains and requires no labels or reagents.

This kind of technology could be applied to real-time monitoring of other diseases as well.


[Photo: the optical biosensor, approximately 16 millimeters by 33 millimeters. The horizontal purple lines are the channels on the waveguide. Credit: Gary Meek]

“We can do real-time monitoring of avian influenza infections on the farm, in live-bird markets or in poultry processing facilities,” said Jie Xu, a research scientist in GTRI’s Electro-Optical Systems Laboratory (EOSL).

The biosensor is coated with antibodies specifically designed to capture a protein located on the surface of the viral particle. For this study, the researchers evaluated the sensitivity of three unique antibodies to detect avian influenza virus.

The sensor utilizes the interference of light waves, a concept called interferometry, to precisely determine how many virus particles attach to the sensor’s surface. More specifically, light from a laser diode is coupled into an optical waveguide through a grating and travels under one sensing channel and one reference channel.

Researchers coat the sensing channel with the specific antibodies and coat the reference channel with non-specific antibodies. Having the reference channel minimizes the impact of non-specific interactions, as well as changes in temperature, pH and mechanical motion. Non-specific binding should occur equally to both the test and reference channels and thus not affect the test results.
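The differential idea can be sketched in a few lines of Python (an illustration of the concept only, not GTRI’s actual signal processing): any drift that appears equally in both channels cancels when the reference reading is subtracted from the sensing reading.

```python
# Illustration of reference-channel subtraction: common-mode effects
# (temperature, pH, mechanical motion) show up in both channels and cancel,
# leaving only the signal from specific antibody-antigen binding.

def differential_signal(sense_readings, reference_readings):
    """Subtract the reference channel point-by-point from the sensing channel."""
    return [s - r for s, r in zip(sense_readings, reference_readings)]

# Hypothetical readings: both channels see +0.5 units of common-mode drift,
# but only the sensing channel accumulates specific binding on top of it.
sense = [1.5, 2.0, 2.5]      # specific binding + drift
reference = [0.5, 0.5, 0.5]  # drift only (non-specific antibodies)
print(differential_signal(sense, reference))  # [1.0, 1.5, 2.0]
```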

An electromagnetic field associated with the light beams extends above the waveguides and is very sensitive to the changes caused by antibody-antigen interactions on the waveguide surface. When a liquid sample passes over the waveguides, any binding that occurs on the top of a waveguide because of viral particle attachment causes water molecules to be displaced. This causes a change in the velocity of the light traveling through the waveguide.
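As a rough back-of-the-envelope illustration of the interferometric principle (the wavelength, channel length, and index change below are assumed values for illustration, not the instrument’s published specifications), the phase shift accumulated in the sensing arm scales with the change in effective refractive index and the length over which the light interacts with the bound particles:

```python
import math

# Hypothetical numbers: binding on the waveguide surface raises the effective
# refractive index seen by the guided mode, shifting the phase of light in the
# sensing arm relative to the reference arm by (2*pi/wavelength) * dn * L.

wavelength = 780e-9         # laser diode wavelength in meters (assumed)
interaction_length = 10e-3  # length of the sensing channel in meters (assumed)
delta_n_eff = 1e-6          # tiny effective-index change from viral binding (assumed)

phase_shift = 2 * math.pi / wavelength * delta_n_eff * interaction_length
print(f"phase shift: {phase_shift:.4f} rad")
```

Even a refractive-index change of one part per million produces a measurable phase shift, which is why interferometric readout needs no labels or reagents.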

The Other Side of the Immortality Coin
Thu, 06 Sep 2007

There are two sides to living as long as possible: developing the technologies to cure aging, such as SENS, and preventing human extinction risk, which threatens everybody. Unfortunately, in the life extensionist community, and the world at large, the balance of attention and support is lopsided in favor of the first side of the coin, while largely ignoring the second. I see people meticulously obsessed with caloric restriction and SENS, but apparently unaware of human extinction risks. There’s the global warming movement, sure, but no efforts to address the bio, nano, and AI risks.

It’s easy to understand why. Life extension therapies are a positive and happy thing, whereas existential risk is a negative and discouraging thing. The affect heuristic causes us to shy away from negative affect, while only focusing on projects with positive affect: life extension. Egocentric biases magnify the effect, because it’s easier to imagine oneself aging and dying than getting wiped out along with billions of others as a result of a planetary plague, for instance. Attributional biases work against both sides of the immortality coin: because there’s no visible bad guy to fight, people aren’t as juiced up as they would be about, say, protesting a human being like Bush.

Another element working against the risk side of the coin is the assignment of credit: a research team may be the first to significantly extend human life, in which case the team and all their supporters get bragging rights. Prevention of existential risks is a bit hazier, consisting of networks of safeguards which each contribute a little bit towards lowering the probability of disaster. Existential risk prevention isn’t likely to be the way it is in the movies, where the hero punches out the mad scientist right before he presses the red button that says “Planet Destroyer”, but will instead come from a cooperative network of individuals working to increase safety in the diverse areas from which risks could emerge: biotech, nanotech, and AI.

Present-day immortalists and transhumanists simply don’t care enough about existential risk. Many of them are at the same stage with regards to ideological progression as most of humanity is against the specter of death: accepting, in denial, dismissive. There are few things less pleasant to contemplate than humanity destroying itself, but it must be done anyhow, because if we slip and fall, there’s no getting up.

The greatest challenge is that the likelihood of disaster per year must be decreased to very low levels — less than 0.001% or something — because otherwise the aggregate probability computed over a series of years will approach 1 at the limit. There are many risks that even distributing ourselves throughout space would do nothing to combat — rogue, space-going AI, replicators that eat asteroids and live off sunlight, agents that pursue reproduction at the exclusion of value structures such as conscious experiences. Space colonization is not our silver bullet, despite what some might think. Relying overmuch on space colonization to combat existential risk may give us a false sense of security.
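The compounding argument is simple to check numerically (a sketch of the arithmetic, not a real risk estimate): with per-year probability p of disaster, the chance of surviving n years is (1 − p)^n, so the aggregate risk climbs toward 1 unless p is pushed very low.

```python
# Cumulative probability of at least one catastrophe over n years,
# given an independent per-year probability p.

def cumulative_risk(p_per_year, years):
    return 1 - (1 - p_per_year) ** years

# A "small-sounding" 1% annual risk vs. the 0.001% level the post suggests:
for p in (0.01, 0.00001):
    for n in (100, 1000):
        print(f"p = {p:.5f}, {n:4d} years -> cumulative risk {cumulative_risk(p, n):.3f}")
```

At 1% per year the cumulative risk over 1,000 years is essentially certain, while at 0.001% per year it stays around 1% even over a millennium, which is why the per-year figure has to be driven so low.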

Yesterday it hit the national news that synthetic life is on its way within 3 to 10 years. To anyone following the field, this comes as zero surprise, but there are many thinkers out there who might not have seen it coming. The Lifeboat Foundation, which saw this well in advance, set up the A-Prize as an effort to bring development of artificial life out into the open, where it should be, and the A-Prize currently has a grand total of three donors: myself, Sergio Tarrero, and one anonymous donor. This is probably a result of insufficient publicity, though.

Genetically engineered viruses are a risk today. Synthetic life will be a risk in 3–10 years. AI could be a risk in 10 years, or it could be a risk now — we have no idea. The fastest supercomputers are already approximating the computing power of the human brain, and since an airplane is far less complex than a bird, we should assume that less-than-human computing power may be sufficient for AI. Nanotechnological replicators, a distinct category of replicator that blurs into synthetic life at the extremes, could be a risk in 5–15 years — again, we don’t know. Better to assume they’re coming sooner, and be safe rather than sorry.

Once you realize that humanity has lived entirely without existential risks (except the tiny probability of asteroid impact) since Homo sapiens evolved over 100,000 years ago, and we’re about to be hit full-force by these new risks in the next 3–15 years, the interval between now and then is practically nothing. Ideally, we’d have 100 or 500 years of advance notice to prepare for these risks, not 3–15. But since 3–15 is all we have, we’d better use it.

If humanity continues to survive, the technologies for radical life extension are sure to be developed, taking into account economic considerations alone. The efforts of Aubrey de Grey and others may hurry it along, saving a few million lives in the process, and that’s great. But if we develop SENS only to destroy ourselves a few years later, it’s worse than useless. It’s better to overinvest in existential risk, encourage cryonics for those whose bodies can’t last until aging is defeated, and address aging once we have a handle on existential risk, which we quite obviously don’t. Remember: there will always be more people paying attention to radical life extension than existential risk, so the former won’t be losing much if you shift your focus to the latter. As fellow blogger Steven says, “You have only a small fraction of the world’s eggs; putting them all in the best available basket will help, not harm, the global egg spreading effort.”

For more on why I think fighting existential risk should be central for any life extensionist, see Immortalist Utilitarianism, written in 2004.
