
Computer scientists have found that robots evolve more quickly and efficiently after a virtual mass extinction modeled after real-life disasters such as the one that killed off the dinosaurs. Beyond implications for artificial intelligence, the research supports the idea that mass extinctions actually speed up evolution by unleashing new creativity in adaptations.

Photo credit: Joel Lehman.

If cancer is predominantly a random process, then why don’t organisms with thousands of times more cells suffer more from cancer? Large species like whales and elephants generally live longer, not shorter lives, so how are they protected against the threat of cancer?

While we still have a great deal to learn about cancer biology, the general belief is that it arises first from mutation. It is becoming clear that it is actually an incredibly complicated process, requiring a range of variable factors such as mutation, epigenetic alteration and local environmental change (like inflammation). While some students may have spent sleepless nights wondering how many mutated cells they contain after learning how fallible our replication mechanisms are, the reality is that with such an error rate we should all be riddled with cancer in childhood, but we're not. Our canine companions sadly often succumb around their first decade, but humans are actually comparatively good at dealing with cancer: we live a relatively long time for a mammal of our size, and even in a modern environment it is predominantly an age-related disease.

While evolution may have honed replication accuracy, life itself requires 'imperfection' to evolve. We needed those occasional errors in germ cells to allow evolution. If keeping the odd error is either preferable or essentially not worth the energy to tackle when you're dealing with tens of trillions of cells, then clearly there is more to the story than mutation. In order to maintain a multi-cellular organism for a long enough period, considering that errors are essentially inevitable, other mechanisms must be in place to remove or quarantine problematic cells.

Synthetic biology is radical and has huge potential to revolutionize multiple industries. Biology has already worked out efficient ways of doing things, or has in place mechanisms we can adapt, so why reinvent anything when we can simply build on what's already here? Drawing on billions of years of evolution makes logical sense, and that is exactly what synthetic biology does.

So here is a great video by Grist, explaining what synthetic biology is and what we might be able to do with it in the future.

OK. In scientific terms, it is only a 'hypothesis': the reverse of the 'Disposable Soma' theory of ageing. Here's how it goes.

For the past several decades, the Disposable Soma theory of ageing has been enjoying good publicity and a lively interest from both academics and the public alike. It stands up to scientific scrutiny, makes conceptual sense and fits well within an evolutionary framework of ageing. The theory basically suggests that, due to energy resource constraints, there is a trade-off between somatic cell and germ cell repair. As a result, germ cells are being repaired effectively and so the survival of the species is assured, at a cost of individual somatic (bodily) ageing and death. To put it very simply, we are disposable, we age and die because all the effective repair mechanisms have been diverted to our germ cell DNA in order to guarantee the survival of our species.

The theory accounts for many repair pathways and mechanisms converging upon the germ cell, and also for many of those mechanisms being driven away from somatic cell repair just to ensure germ cell survival. In the past two or three years, however, it has increasingly been realised that this process is not unidirectional (from soma to germ) but bi-directional: under certain circumstances, somatic cells may initiate damage that affects germ cells, and germ cells may initiate repairs that benefit somatic cells!

I can’t even begin to describe how important this bi-directionality is. Taking this in a wider and more speculative sense, it is, in fact, the basis for the cure of ageing. The discovery that germ cells can (or are forced to) relinquish their repair priorities, and that resources can then be re-allocated for somatic repairs instead, means that we may be able to avoid age-related damage (because this would be repaired with greater fidelity) and, at the same time, avoid overpopulation (as our now damaged genetic material would be unsuitable for reproduction).

Ermolaeva et al. raised the further possibility that DNA damage in germ cells may protect somatic cells. They suggested that DNA injury in germ cells upregulates stress resistance pathways in somatic cells, and improves stress response to heat or oxidation. This is profoundly important because it shows that, in principle, when germ cells are damaged, they produce agents which can then protect somatic cells against systemic stress.

This mechanism may reflect an innate tendency to reverse the trade-offs between germ cell and somatic cell repair: when the germ cells are compromised, there is delay in offspring production matched by an increased repair of somatic cells. In Nature’s ‘eyes’, if the species cannot survive, at least the individual bodies should.

In addition, it was shown that neuronal stress induces apoptosis (orderly cell death) in the germ line. This process is mediated by the IRE-1 factor, an endoplasmic reticulum stress response sensor, which then activates p53 and initiates the apoptotic cascade in the germ line. Therefore germ cells may die due to a stress response originating from the distantly-located neurons.

If this mechanism exists, it is likely that other similar mechanisms also exist, waiting to be described. The consequence could be that neuronal positive stress (i.e. exposure to meaningful information that entices us to act) can affect our longevity by downgrading the importance of germ cell repair in favour of somatic tissue repair. In other words, the disposable soma theory can be seen in reverse: the soma (body) is not necessarily disposable, but can survive longer if it becomes indispensable, if it is 'useful' to the whole. This, as we claimed last week, can happen through mechanisms which are independent of any artificial biotechnological interventions.

We know that certain events which downgrade reproduction may also cause lifespan extension. Ablation of germ cells in the C. elegans worm leads to an increased lifespan, which shows that signals from the germ line have a direct impact upon somatic cell survival; this may be due to an increased resistance of somatic cells to stress. Somatic intracellular clearance systems are also up-regulated following signals from the germ line.

In addition, protein homoeostasis in somatic cells is well-maintained when germ cells are damaged, and it is significantly downgraded when germ cell function increases. All of the above suggest that when the germ cells are healthy, somatic repair decreases, and when they are not, somatic repair improves as a counter-effect.

In an intriguing paper published last month, Lin et al. showed that under certain circumstances, somatic cells may adopt germ-like characteristics, which may suggest that these somatic cells can also be subjected to germ line protection mechanisms after their transformation. A few days ago Bazley et al. published a paper elucidating the mechanisms of how germ cells may induce somatic cell reprogramming and somatic stem cell pluripotency. This is an additional piece of evidence of the cross-talk mechanisms between soma and germ line, underscoring the fact that the health of somatic tissues depends upon signals from the germ line.

In all, there is sufficient initial evidence to suggest that my line of thinking is quite possibly correct: the disposable soma theory is not unidirectional, and the body may not, after all, always be 'disposable'. Under certain evolutionary pressures we could experience increased somatic maintenance at the expense of germ cell repairs, and thus reach a situation where the body actually lives longer. I have already discussed that some of these evolutionary pressures could depend upon how well one makes oneself 'indispensable' to the adaptability of the Homo sapiens species within a global techno-cultural environment.

According to the reputable Australian astro-enthusiast journal SkyNews, a leading biologist says it is surprising we have not already discovered extra-terrestrials that look like us, given the growing number of Earth-like planets now being discovered by astronomers.

Simon Conway Morris, an evolutionary biologist, suggests that aliens resembling humans must have evolved on other planets. He bases the claim on evidence that different species will independently develop similar features, which means that life similar to that on Earth would also develop on equivalent planets.

The theory, known as convergence, says evolution is a predictable process which follows a rigid set of rules. Read the full story at SkyNews.

__________
Philip Raymond is Co-Chair of The Cryptocurrency Standards
Association [crypsa.org] and chief editor at AWildDuck.com

Can an emotional component to artificial intelligence be a benefit?

Robots with passion! Emotional artificial intelligence! These concepts have appeared in books and movies lately; a recent example is the movie Ex Machina. Now, I'm not an AI expert and cannot speak to the technological challenges of developing an intelligent machine, let alone an emotional one. I do, however, know a bit about problem solving, and that relates to both intelligence and emotions. It is this emotional component of problem solving that leads me to speculate on the potential implications for humanity if powerful AIs were to have human emotions.

Why the question about emotions? In a roundabout way, it has to do with how we observe and judge intelligence. The popular way to measure intelligence in a computer is the Turing test: if it can fool a person, through conversation, into thinking that the computer is a person, then it has human-level intelligence. But we know that the Turing test by itself is insufficient as a true intelligence test. Sounding human during dialog is not the primary method we use to gauge intelligence in other people or in other species. Problem solving seems to be a reliable test of intelligence, either through IQ tests that involve problem solving or through direct real-world problem solving.

As an example of problem solving, we judge how intelligent a rat is by how fast it can navigate a maze to get to food. Let's look at this with regard to the first few steps in problem solving.

Fundamental to any problem solving, is recognizing that a problem exists. In this example, the rat is hungry. It desires to be full. It can observe its current state (hungry) and compare it with its desired state (full) and determine that a problem exists. It is now motivated to take action.

Desire is intimately tied to emotion. Since it is desire that allows the determination of whether or not a problem exists, one can infer that emotions allow for the determination that a problem exists. Emotion is a motivator for action.

Once a problem is determined to exist, it is important to define the problem. In this simple example this step isn’t very complex. The rat desires food, and food is not present. It must find food, but its options for finding food are constrained by the confines of the maze. But the rat may have other things going on. It might be colder than it would prefer. This presents another problem. When confronted with multiple problems, the rat must prioritize which problem to address first. Problem prioritization again is in the realm of desires and emotions. It might be mildly unhappy with the temperature, but very unhappy with its hunger state. In this case one would expect that it will maximize its happiness by solving the food problem before curling up to solve its temperature problem. Emotions are again in play, driving behavior which we see as action.
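The rat's decision process above can be sketched as a toy model in which "unhappiness" is simply the gap between a desired state and a current state, and the most pressing gap is addressed first. This is purely illustrative; the class names and the numeric weights are invented for this sketch, not taken from any real agent architecture.

```python
# Toy model of problem detection and prioritization:
# a "problem" exists when a current state falls short of a desired state,
# and the biggest gap (the strongest "desire") is addressed first.

from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    current: float   # current state, 0.0 (fully unmet) to 1.0 (fully met)
    desired: float   # desired state on the same scale

    def unhappiness(self) -> float:
        # The size of the gap between desired and current state.
        return max(0.0, self.desired - self.current)

def next_problem(drives):
    """Pick the drive causing the most unhappiness, or None if all are met."""
    unmet = [d for d in drives if d.unhappiness() > 0]
    return max(unmet, key=lambda d: d.unhappiness(), default=None)

drives = [
    Drive("hunger", current=0.2, desired=1.0),   # very unhappy: gap of 0.8
    Drive("warmth", current=0.7, desired=0.9),   # mildly unhappy: gap of 0.2
]

print(next_problem(drives).name)  # hunger wins, so food is sought first
```

The point of the sketch is that both steps the essay describes, noticing that a problem exists and ranking competing problems, fall out of one comparison between desired and current states.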

The next steps in problem solving are to generate and implement a solution to the problem. In our rat example, it will most likely determine whether this maze is similar to ones it has seen in the past, and try to run the maze as fast as it can to get to the food. Not a lot of emotion is involved in these steps, with the possible exception of happiness if it recognizes the maze. However, if we look at problems that people face, the process of developing and implementing solutions is riddled with emotion. In the real world, problem solving almost always involves working with other people, because they are either the cause of the problem, or key to its solution, or both. These people have a great deal of emotions associated with them. Most problems require negotiation to solve, and negotiation is by its nature charged with emotion. To be effective in problem solving, a person has to be able to interpret and understand the wants and desires (emotions) of others. This sounds a lot like empathy.

Now, let's apply the emotional part of problem solving to artificial intelligence. The step of determining whether or not a problem exists doesn't require emotion if the machine in question is a thermostat or a Roomba. A thermostat doesn't have its own desired temperature to maintain; its desired temperature is determined by a human and given to the thermostat. That human's desires are based on a combination of preferences learned from personal experience and hardwired preferences shaped by millions of years of evolution. The thermostat is simply a tool.
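The thermostat contrast can be made concrete: the setpoint is supplied from outside, and the device only acts to close that externally given gap; it never chooses or questions the goal itself. A minimal sketch (the class and method names are invented for illustration):

```python
# A thermostat never chooses its own goal: the setpoint comes from a human.
class Thermostat:
    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c  # desired temperature, chosen by a person

    def heater_on(self, room_temp_c: float) -> bool:
        # Pure feedback control: act only to close the externally given gap.
        return room_temp_c < self.setpoint_c

t = Thermostat(setpoint_c=21.0)
print(t.heater_on(18.0))  # True: the room is below the human's setpoint
print(t.heater_on(22.0))  # False: nothing to do
```

Everything that looks like "desire" here lives outside the machine, which is exactly what makes it a tool rather than an agent.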

Now the whole point behind an AI, especially an artificial general intelligence, is that it is not a thermostat. It is supposed to be intelligent. It must be able to problem-solve in a real-world environment that involves people. It has to be able to determine that problems exist and then prioritize those problems, without asking a human for help. It has to be able to interact socially with people, identifying and understanding their motivations and emotions in order to develop and implement solutions. And it has to make these desire-based choices without the benefit of the millions of years of evolution that shaped the desires we have. If we want it to truly pass for human-level intelligence, it seems we'll have to give it our best preferences and desires to start with.

A machine that cannot choose its goals cannot change its goals. Such a machine, if given the goal of, say, maximizing pin production, will creatively and industriously attempt to convert the entire planet into pins. It cannot question instructions that are illegal or unethical. Here lies the dilemma: which is more dangerous, the risk that someone will program an AI that has no choice to do bad things, or the risk that an AI will decide to do bad things on its own?

No doubt about it, this is a tough call. I'm sure some AIs will be built with minimal or no preferences, with the intent that they will simply be very smart tools. But without giving an AI a starting set of desires and preferences comparable to those of humans, we will be interacting with a truly alien intelligence. I, for one, would be happier with an AI that at least felt regret about killing someone than with one that didn't.

Quoted: “Once you really solve a problem like direct brain-computer interface … when brains and computers can interact directly, to take just one example, that’s it, that’s the end of history, that’s the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can basically break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, as it’s often referred to, by definition, we have no way of even starting to imagine what’s happening beyond that.”

Read the article here > http://www.theamericanconservative.com/dreher/silicon-valley-mordor/