AI and Human Enhancement:

A groundbreaking new AI system is exploring the limits of human potential, developing technologies that can enhance our physical and cognitive abilities. 🤖 By analyzing biological data and applying advanced engineering principles, the AI can identify ways to improve human performance.

How AI Enhances Human Abilities:

AI-powered human enhancement technologies can:

Enhance Physical Abilities: Increase strength, speed, and endurance.
Improve Cognitive Abilities: Enhance memory, intelligence, and creativity.
Extend Lifespan: Slow the aging process and increase longevity.
The Ethical Implications:

An NIH-funded project leverages advanced synapse imaging to monitor real-time neuronal changes during learning, unveiling new insights that could inspire next-generation brain-like AI systems. How do we learn something new? How do the tasks at a new job, the lyrics to the latest hit song, or directions to a new place become encoded in our brains?

A team of researchers from the Institute for Basic Science, Yonsei University, and the Max Planck Institute have developed a new artificial intelligence (AI) technique that brings machine vision closer to how the human brain processes images. Called Lp-Convolution, this method improves the accuracy and efficiency of image recognition systems while reducing the computational burden of existing AI models.

The human brain is remarkably efficient at identifying key details in complex scenes, an ability that traditional AI systems have struggled to replicate. Convolutional Neural Networks (CNNs)—the most widely used AI models for image recognition—process images using small, square-shaped filters. While effective, this rigid approach limits their ability to capture broader patterns in fragmented data.

More recently, vision transformers have shown superior performance by analyzing entire images at once, but they require massive computational power and large datasets, making them impractical for many real-world applications.
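The article does not include code, but the core idea can be sketched. The toy module below is an illustrative sketch only, not the authors' released implementation; the class name, envelope shape, and parameters are assumptions. It modulates a large convolution kernel with a learnable Lp-norm-shaped envelope, so the effective receptive field can stretch, shrink, or round off instead of staying a rigid square:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LpMaskedConv2d(nn.Module):
    """Conv2d whose kernel is reweighted by a learnable Lp-shaped envelope (toy sketch)."""

    def __init__(self, in_ch, out_ch, kernel_size=7, p_init=2.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Learnable envelope parameters (illustrative): per-axis widths and the exponent p.
        self.log_sigma = nn.Parameter(torch.zeros(2))
        self.log_p = nn.Parameter(torch.log(torch.tensor(p_init)))
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, kernel_size),
            torch.linspace(-1.0, 1.0, kernel_size),
            indexing="ij",
        )
        self.register_buffer("grid", torch.stack([ys, xs]))  # (2, k, k)

    def forward(self, x):
        p = self.log_p.exp()
        sigma = self.log_sigma.exp().view(2, 1, 1)
        # Envelope exp(-(|y/sy|^p + |x/sx|^p)): Gaussian-like when p is near 2, boxier as p grows.
        envelope = torch.exp(-((self.grid.abs() / sigma) ** p).sum(dim=0))
        weight = self.conv.weight * envelope  # broadcast over (out_ch, in_ch, k, k)
        return F.conv2d(x, weight, bias=self.conv.bias, padding=self.conv.padding)


# Example: drop-in replacement for a standard 7x7 convolution.
layer = LpMaskedConv2d(3, 16)
features = layer(torch.randn(1, 3, 32, 32))  # -> (1, 16, 32, 32)
```

Because the envelope parameters are learned along with the weights, the network can keep the cheap sliding-window machinery of a CNN while letting each layer decide how widely it should look, which is the general flavor of the flexibility the Lp-Convolution work aims for.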

In an experiment reminiscent of the “Transformers” movie franchise, engineers at Princeton University have created a type of material that can expand, assume new shapes, move and follow electromagnetic commands like a remotely controlled robot, even though it lacks any motor or internal gears.

“You can transform between a material and a robot, and it is controllable with an electromagnetic field,” said researcher Glaucio Paulino, the Margareta Engman Augustine Professor of Engineering at Princeton.

In an article published in Nature, the researchers describe how they drew inspiration from the folding art of origami to create a structure that blurs the lines between robotics and materials. The invention is a metamaterial, which is a material engineered to feature new and unusual properties that depend on the material’s physical structure rather than its chemical composition.

A team of Lehigh University researchers has successfully predicted abnormal grain growth in simulated polycrystalline materials for the first time—a development that could lead to the creation of stronger, more reliable materials for high-stress environments, such as combustion engines. A paper describing their novel machine learning method was recently published in npj Computational Materials.

“Using simulations, we were not only able to predict abnormal grain growth, but we were able to predict it far in advance of when that growth happens,” says Brian Y. Chen, an associate professor of computer science and engineering in Lehigh’s P.C. Rossin College of Engineering and Applied Science and a co-author of the study. “In 86% of the cases we observed, we were able to predict within the first 20% of the lifetime of that material whether a particular grain will become abnormal or not.”

When metals and ceramics are exposed to continuous heat—like the temperatures generated by rocket or airplane engines, for example—they can fail. Such materials are made of crystals, or grains, and when they’re heated, atoms can move, causing the crystals to grow or shrink. When a few grains grow abnormally large relative to their neighbors, the resulting change can alter the material’s properties. A material that previously had some flexibility, for instance, may become brittle.
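The excerpt does not describe the model itself, so the snippet below is only a rough sketch of the problem framing: synthetic data, hypothetical features such as initial grain size and early growth rate, and an off-the-shelf classifier rather than the authors' method. Features drawn from the first part of a grain's simulated lifetime are used to predict whether it will later grow abnormally.

```python
# Rough sketch of the problem framing only; data, features, and model are
# placeholders, not the Lehigh group's published method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-grain features measured early in the simulation:
# initial size, neighbor count, mean boundary curvature, early growth rate.
n_grains = 5000
X = rng.normal(size=(n_grains, 4))
# Synthetic label loosely coupling "abnormal" growth to size and growth rate.
y = (0.8 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(scale=0.5, size=n_grains)) > 1.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```

The point of the framing is the payoff the authors describe: if early-lifetime features carry the signal, the classifier can flag a grain long before its abnormal growth actually occurs.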

Suppose you want to make a tiny robot to perform surgery inside a human patient. To avoid damaging healthy tissue and to squeeze into tight spots, the robot should be squishy. And manipulating the robot’s movements with magnetic fields would make sense, as tissues don’t respond to magnetism. But what material would you use for the robot’s limbs? Magnetic materials are stiff and brittle. Embedding tiny particles of them in a rubbery matrix could work, but the thinner—and therefore bendier—you make the composite material, the less it responds to a magnetic field. Heinrich Jaeger of the University of Chicago, Monica Olvera de la Cruz of Northwestern University, Illinois, and their collaborators have now overcome that obstacle by making thin, flexible sheets out of self-assembled nanoparticles of magnetite [1]. Even a modest field of 100 milliteslas can lift a sheet and bend it by 50°, they found.

At room temperature, magnetite (Fe3O4) is ferrimagnetic—that is, the magnetic moments in its two sublattices align in opposite directions but with unequal magnitudes, yielding a net magnetization. The smaller a ferrimagnet, the greater the chance that it has a single domain, and therefore the lower the temperature at which the domain’s magnetization will flip. When the sample size gets down to a few tens of nanometers, a ferrimagnet made of randomly flipping particles becomes, in effect, a paramagnet—that is, it has no net magnetization in the absence of a field yet is still attracted by an applied magnetic field. The attraction can be strong. The discoverers of this phenomenon in 1959 dubbed it superparamagnetism [2].
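The size dependence described here can be made quantitative with the standard Néel–Arrhenius estimate for how long a single-domain particle's moment stays pointed one way (a textbook relation, not quoted in the article):

```latex
% Néel–Arrhenius relaxation time of a single-domain particle:
% K = anisotropy energy density, V = particle volume,
% k_B = Boltzmann constant, T = temperature, \tau_0 \approx 10^{-9}\,\mathrm{s}.
\tau_N = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right)
```

Because the energy barrier KV scales with particle volume, a particle a few tens of nanometers across flips its moment many times per second at room temperature, which is the randomly flipping, zero-average-magnetization behavior described above.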

The researchers realized that a sheet made from a single layer of superparamagnetic particles could serve as a viable material for the magnetic actuation of small soft robots. To create the layers, they suspended magnetite nanoparticles in droplets of water coated with an organic solvent. The solvent attracted the nanoparticles, which migrated to a droplet’s surface. The water slowly evaporated, leaving behind a layer of closely packed nanoparticles draped on the droplet’s support structure, a square copper grid. Each of the 20 × 20 µm squares supported a single sheet. As shown in the figure, some of the sheets happened to have a single unattached corner.

How likely is it that we live in a simulation? Are virtual worlds real?

In this first episode of the 2nd Series, we delve into the fascinating topic of virtual reality simulations and the extraordinary possibility that our universe is itself a simulation. For thousands of years some mystical traditions have maintained that the physical world and our separated ‘selves’ are an illusion, and only now, with the development of our own computer simulations and virtual worlds, have scientists and philosophers begun to assess the statistical probability that our shared reality could in fact be some kind of representation rather than a physical place.
As we become more open to these possibilities, other difficult questions start to come into focus. How can we create a common language for talking about matter and energy that bridges the simulated and simulating worlds? Who could have created such a simulation? Could it be an artificial intelligence rather than a biological or conscious being? Do we have ethical obligations to the virtual beings we interact with in our virtual worlds, and to what extent are those beings and worlds ‘real’? The list is long and mind-bending.

Fortunately, to untangle our thoughts on this, we have one of the best-known philosophers of all things mind-bending, Dr. David Chalmers, who has just released a book on this very topic, ‘Reality+: Virtual Worlds and the Problems of Philosophy’. Dr. Chalmers is an Australian philosopher and cognitive scientist specialising in the philosophy of mind and the philosophy of language. He is a Professor of Philosophy and Neuroscience at New York University, as well as co-director of NYU’s Center for Mind, Brain and Consciousness. He is the founder of the ‘Towards a Science of Consciousness’ conference, at which, in 1994, he coined the term ‘the hard problem of consciousness’, kicking off a renaissance in consciousness studies that has been growing in popularity and research output ever since.

Donate here: https://www.chasingconsciousness.net/episodes.

What we discuss in this episode:
00:00 Short Intro.
06:00 Synesthesia.
08:27 The science of knowing the nature of reality.
11:02 The Simulation Hypothesis explained.
15:25 The statistical probability evaluation.
18:00 Knowing for sure is beyond the reach of science.
19:00 You’d only have to render the part you’re interacting with.
20:00 Clues from physics.
22:00 John Wheeler — ‘It from bit’
23:32 Eugene Wigner: measurement as a conscious observation.
27:00 Information theory as a useful but risky hold-all language tool.
34:30 Virtual realities are real and virtual interactions are meaningful.
37:00 Ethical approaches to non-player characters (NPCs) and their rights.
38:45 Will advanced AI be conscious?
42:45 Is God a hacker in the universe one level up? Simulation Theology.
44:30 Simulation theory meets the argument for the existence of God from design.
51:00 The Hard problem of consciousness applies to AI too.
55:00 Testing AI’s consciousness with the Turing test.
59:30 Ethical value applied to immoral actions in virtual worlds.

In this enlightening episode, we delve into groundbreaking research that challenges our understanding of the brain’s building blocks. Recent studies reveal that a single neuron possesses computational capabilities rivaling those of entire artificial neural networks, suggesting that each neuron may function as a complex processor in its own right.

This UPSC Podcast explores how learning in the brain is more complex than previously thought, revealing that synapses, the connections between neurons, don’t all follow the same rules. A recent study observed these tiny junctions in mice and found that their behavior depends on where they sit along a neuron’s branches, called dendrites. Some synapses prioritize local connections, while others form longer circuits, indicating that different parts of a single neuron perform distinct computations. That distinction may help explain how the brain forms memories, including during processes like offline learning. The research offers a new perspective on how the brain encodes information and could inspire more sophisticated AI methods.

Key Discussion Points:

Neuronal Complexity: Exploring how individual neurons can perform intricate computations, akin to multi-layered neural networks. (Quanta Magazine)

Dendritic Processing: Understanding the role of dendrites in enhancing a neuron’s computational power. (Quanta Magazine)

Implications for AI: Discussing how these findings could revolutionize artificial intelligence by inspiring more efficient neural network architectures.
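To picture the claim that dendritic branches act as processing stages of their own, here is a minimal sketch: an assumed toy architecture in the spirit of two-layer dendritic-subunit models, not code from the studies discussed above. Each branch applies its own nonlinearity to its synaptic inputs before the soma combines the branch outputs.

```python
# Toy sketch of a "dendrite-aware" neuron model (assumed architecture, not the
# cited studies' code): each dendritic branch summarizes its own synapses with a
# nonlinearity, and the soma combines the branch summaries into one output.
import torch
import torch.nn as nn


class DendriticNeuron(nn.Module):
    def __init__(self, n_branches=8, synapses_per_branch=16):
        super().__init__()
        # One linear subunit per dendritic branch (hypothetical sizes).
        self.branches = nn.ModuleList(
            [nn.Linear(synapses_per_branch, 1) for _ in range(n_branches)]
        )
        self.soma = nn.Linear(n_branches, 1)

    def forward(self, synaptic_input):  # shape: (batch, n_branches, synapses_per_branch)
        branch_out = torch.cat(
            [torch.tanh(branch(synaptic_input[:, i]))
             for i, branch in enumerate(self.branches)],
            dim=-1,
        )  # per-branch nonlinear summaries, shape (batch, n_branches)
        return torch.sigmoid(self.soma(branch_out))  # somatic "firing" output


neuron = DendriticNeuron()
drive = torch.rand(4, 8, 16)   # 4 samples of synaptic input
print(neuron(drive).shape)     # torch.Size([4, 1])
```

Collapsing the branches into a single weighted sum would reduce this to a classic point neuron; keeping them separate is what lets different parts of the model compute different things, which is the intuition behind treating one biological neuron as a small multi-layer network.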