
In recent years, roboticists and computer scientists have developed a wide range of systems inspired by nature, particularly by humans and animals. By reproducing animal movements and behaviors, these robots could navigate real-world environments more effectively.

Researchers at Northeastern University in China recently developed a new H-shaped bionic robot that could replicate the movements that cheetahs make while running. This robot, introduced in a paper published in the Journal of Bionic Engineering, is based on piezoelectric materials, a class of materials that generate an electric charge when subjected to mechanical stress.

“The piezoelectric robot realizes linear motion, turning motion, and turning motion with different radii by the voltage differential driving method,” wrote Ying Li, Chaofeng Li and their colleagues in their paper. “A prototype with a weight of 38 g and dimensions of 150 × 80 × 31 mm³ was fabricated.”
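To get a feel for the voltage-differential driving idea, here is a minimal Python sketch assuming each side's stride speed scales roughly linearly with its drive voltage, so that matched voltages give straight-line motion and a mismatch gives a turn. The gain `k` and the track width are illustrative values, not parameters from the paper.

```python
# Hypothetical sketch of voltage-differential driving: equal left/right drive
# voltages produce straight motion, unequal voltages produce a turn whose
# radius depends on the mismatch. k (m/s per volt) and track are made-up values.

def body_motion(v_left: float, v_right: float, k: float = 0.002, track: float = 0.08):
    """Return (forward speed in m/s, turn radius in m or None for straight motion)."""
    s_left = k * v_left           # speed contributed by the left actuator group
    s_right = k * v_right         # speed contributed by the right actuator group
    forward = (s_left + s_right) / 2.0
    diff = s_right - s_left
    if abs(diff) < 1e-9:
        return forward, None      # equal voltages -> straight-line motion
    radius = forward * track / diff   # larger differential -> tighter turn
    return forward, radius

print(body_motion(100.0, 100.0))  # straight
print(body_motion(80.0, 120.0))   # turning with a finite radius
```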

At the threshold of a century poised for unprecedented transformations, we find ourselves at a crossroads unlike any before. The convergence of humanity and technology is no longer a distant possibility; it has become a tangible reality that challenges our most fundamental conceptions of what it means to be human.

This article seeks to explore the implications of this new era, in which Artificial Intelligence (AI) emerges as a central player. Are we truly on the verge of a symbiotic fusion, or is the conflict between the natural and the artificial inevitable?

The prevailing discourse on AI oscillates between two extremes: on one hand, some view this technology as a powerful extension of human capabilities, capable of amplifying our creativity and efficiency. On the other, a more alarmist narrative predicts the decline of human significance in the face of relentless machine advancement. Yet, both perspectives seem overly simplistic when confronted with the intrinsic complexity of this phenomenon. Beyond the dichotomy of utopian optimism and apocalyptic pessimism, it is imperative to critically reflect on AI’s cultural, ethical, and philosophical impact on the social fabric, as well as the redefinition of human identity that this technological revolution demands.

Since the dawn of civilization, humans have sought to transcend their natural limitations through the creation of tools and technologies. From the wheel to the modern computer, every innovation has been seen as a means to overcome the physical and cognitive constraints imposed by biology. However, AI represents something profoundly different: for the first time, we are developing systems that not only execute predefined tasks but also learn, adapt, and, to some extent, think.

This transition should not be underestimated. While previous technologies were primarily instrumental—serving as controlled extensions of human will—AI introduces an element of autonomy that challenges the traditional relationship between subject and object. Machines are no longer merely passive tools; they are becoming active partners in the processes of creation and decision-making. This qualitative leap radically alters the balance of power between humans and machines, raising crucial questions about our position as the dominant species.

But what does it truly mean to “be human” in a world where the boundaries between mind and machine are blurring? Traditionally, humanity has been defined by attributes such as consciousness, emotion, creativity, and moral decision-making. Yet, as AI advances, these uniquely human traits are beginning to be replicated—albeit imperfectly—within algorithms. If a machine can imitate creativity or exhibit convincing emotional behavior, where does our uniqueness lie?

This challenge is not merely technical; it strikes at the core of our collective identity. Throughout history, humanity has constructed cultural and religious narratives that placed us at the center of the cosmos, distinguishing us from animals and the forces of nature. Today, that narrative is being contested by a new technological order that threatens to displace us from our self-imposed pedestal. It is not so much the fear of physical obsolescence that haunts our reflections but rather the anxiety of losing the sense of purpose and meaning derived from our uniqueness.

Despite these concerns, many AI advocates argue that the real opportunity lies in forging a symbiotic partnership between humans and machines. In this vision, technology is not a threat to humanity but an ally that enhances our capabilities. The underlying idea is that AI can take on repetitive or highly complex tasks, freeing humans to engage in activities that truly require creativity, intuition, and—most importantly—emotion.

Concrete examples of this approach can already be seen across various sectors. In medicine, AI-powered diagnostic systems can process vast amounts of clinical data in record time, allowing doctors to focus on more nuanced aspects of patient care. In the creative industry, AI-driven text and image generation software are being used as sources of inspiration, helping artists and writers explore new ideas and perspectives. In both cases, AI acts as a catalyst, amplifying human abilities rather than replacing them.

Furthermore, this collaboration could pave the way for innovative solutions in critical areas such as environmental sustainability, education, and social inclusion. For example, powerful neural networks can analyze global climate patterns, assisting scientists in predicting and mitigating natural disasters. Personalized algorithms can tailor educational content to the specific needs of each student, fostering more effective and inclusive learning. These applications suggest that AI, far from being a destructive force, can serve as a powerful instrument to address some of the greatest challenges of our time.

However, for this vision to become reality, a strategic approach is required—one that goes beyond mere technological implementation. It is crucial to ensure that AI is developed and deployed ethically, respecting fundamental human rights and promoting collective well-being. This involves regulating harmful practices, such as the misuse of personal data or the indiscriminate automation of jobs, as well as investing in training programs that prepare people for the new demands of the labor market.

While the prospect of symbiotic fusion is hopeful, we cannot ignore the inherent risks of AI’s rapid evolution. As these technologies become more sophisticated, so too does the potential for misuse and unforeseen consequences. One of the greatest dangers lies in the concentration of power in the hands of a few entities, whether they be governments, multinational corporations, or criminal organizations.

Recent history has already provided concerning examples of this phenomenon. The manipulation of public opinion through algorithm-driven social media, mass surveillance enabled by facial recognition systems, and the use of AI-controlled military drones illustrate how this technology can be wielded in ways that undermine societal interests.

Another critical risk in AI development is the so-called “alignment problem.” Even if a machine is programmed with good intentions, there is always the possibility that it misinterprets its instructions or prioritizes objectives that conflict with human values. This issue becomes particularly relevant in the context of autonomous systems that make decisions without direct human intervention. Imagine, for instance, a self-driving car forced to choose between saving its passenger or a pedestrian in an unavoidable collision. How should such decisions be made, and who bears responsibility for the outcome?

These uncertainties raise legitimate concerns about humanity’s ability to maintain control over increasingly advanced technologies. The very notion of scientific progress is called into question when we realize that accumulated knowledge can be used both for humanity’s benefit and its detriment. The nuclear arms race during the Cold War serves as a sobering reminder of what can happen when science escapes moral oversight.

Whether the future holds symbiotic fusion or inevitable conflict, one thing is clear: our understanding of human identity must adapt to the new realities imposed by AI. This adjustment will not be easy, as it requires confronting profound questions about free will, the nature of consciousness, and the essence of individuality.

One of the most pressing challenges is reconciling our increasing technological dependence with the preservation of human dignity. While AI can significantly enhance quality of life, there is a risk of reducing humans to mere consumers of automated services. Without a conscious effort to safeguard the emotional and spiritual dimensions of human experience, we may end up creating a society where efficiency outweighs empathy, and interpersonal interactions are replaced by cold, impersonal digital interfaces.

On the other hand, this very transformation offers a unique opportunity to rediscover and redefine what it means to be human. By delegating mechanical and routine tasks to machines, we can focus on activities that truly enrich our existence—art, philosophy, emotional relationships, and civic engagement. AI can serve as a mirror, compelling us to reflect on our values and aspirations, encouraging us to cultivate what is genuinely unique about the human condition.

Ultimately, the fate of our relationship with AI will depend on the choices we make today. We can choose to view it as an existential threat, resisting the inevitable changes it brings, or we can embrace the challenge of reinventing our collective identity in a post-humanist era. The latter, though more daring, offers the possibility of building a future where technology and humanity coexist in harmony, complementing each other.

To achieve this, we must adopt a holistic approach that integrates scientific, ethical, philosophical, and sociological perspectives. It also requires an open, inclusive dialogue involving all sectors of society—from researchers and entrepreneurs to policymakers and ordinary citizens. After all, AI is not merely a technical tool; it is an expression of our collective imagination, a reflection of our ambitions and fears.

As we gaze toward the horizon, we see a world full of uncertainties but also immense possibilities. The future is not predetermined; it will be shaped by the decisions we make today. What kind of social contract do we wish to establish with AI? Will it be one of domination or cooperation? The answer to this question will determine not only the trajectory of technology but the very essence of our existence as a species.

Now is the time to embrace our historical responsibility and embark on this journey with courage, wisdom, and an unwavering commitment to the values that make human life worth living.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/a-sinfonia-do-amanha-tit…exao-seria ]

The news: A paralyzed man has walked again thanks to a brain-controlled exoskeleton suit. Within the safety of a lab setting, he was also able to control the suit’s arms and hands, using two sensors on his brain. The patient was a man from Lyon named Thibault, who fell 40 feet (12 meters) from a balcony four years ago, leaving him paralyzed from the shoulders down.

How it worked: Thibault had surgery to place two implants, each containing 64 electrodes, on the parts of the brain that control movement. Software then translated the brain waves read by these implants into instructions for movement. The development of the exoskeleton, by Clinatec and the University of Grenoble, is described in a paper in The Lancet this week.
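The actual Clinatec decoding software is not public here, but the general shape of such a pipeline can be sketched: per-update neural features (for example, band power from the 2 × 64 electrodes) are mapped to a movement command by a decoder fit on calibration trials where the intended movement is known. The sketch below uses ridge regression on random stand-in data purely for illustration.

```python
# Minimal sketch (not the Clinatec pipeline): a linear decoder mapping a
# feature vector from 2 x 64 epidural electrodes to a 2-D movement command.
# Real systems fit the weights on calibration trials; here the data are random.

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 500, 128                   # 128 = 2 implants x 64 electrodes
X = rng.normal(size=(n_trials, n_features))       # stand-in neural features
W_true = rng.normal(size=(n_features, 2))
Y = X @ W_true + 0.1 * rng.normal(size=(n_trials, 2))   # stand-in intended motion

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

def decode(features: np.ndarray) -> np.ndarray:
    """Turn one feature vector into a 2-D velocity command for the exoskeleton."""
    return features @ W

print(decode(X[0]), "vs intended", Y[0])
```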

The Carboncopies Foundation is starting The Brain Emulation Challenge.


With the availability of high-throughput electron microscopy (EM), expansion microscopy (ExM), calcium and voltage imaging, co-registered combinations of these techniques, and further advancements, high-resolution data sets that span multiple brain regions, or entire small-animal brains such as that of the fruit fly Drosophila melanogaster, may now offer inroads to expansive neuronal circuit analysis. Results of such analysis represent a paradigm change in the conduct of neuroscience.

So far, almost all investigations in neuroscience have relied on correlational studies, in which a modicum of insight gleaned from observational data leads to the formulation of mechanistic hypotheses, corresponding computational modeling, and predictions made using those models, so that experimental testing of the predictions offers support or modification of hypotheses. These are indirect methods for the study of a black box system of highly complex internal structure, methods that have received published critique as being unlikely to lead to a full understanding of brain function (Jonas and Kording, 2017).

Large scale, high resolution reconstruction of brain circuitry may instead lead to mechanistic explanations and predictions of cognitive function with meaningful descriptions of representations and their transformation along the full trajectory of stages in neural processing. Insights that come from circuit reconstructions of this kind, a reverse engineering of cognitive processes, will lead to valuable advances in neuroprosthetic medicine, understanding of the causes and effects of neurodegenerative disease, possible implementations of similar processes in artificial intelligence, and in-silico emulations of brain function, known as whole-brain emulation (WBE).
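As a purely illustrative sketch of what "running" a reconstructed circuit in silico could look like, the following simulates a small leaky integrate-and-fire network driven by a connectivity matrix. The matrix, parameters, and drive are placeholders, not data from any actual reconstruction or emulation effort.

```python
# Illustration only: a reconstructed connectivity matrix driving a simple
# leaky integrate-and-fire simulation. All values below are placeholders.

import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # neurons in the toy "reconstructed" circuit
W = rng.normal(0.0, 0.5, size=(n, n)) * (rng.random((n, n)) < 0.1)  # sparse weights

dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # ms, ms, arbitrary units
v = np.zeros(n)
spikes_per_step = []

for step in range(200):
    i_ext = 0.05 * rng.random(n)                   # weak random external drive
    fired = v >= v_thresh
    v[fired] = v_reset                             # reset neurons that spiked
    i_syn = W @ fired.astype(float)                # synaptic input from those spikes
    v += dt / tau * (-v + i_ext + i_syn)           # leaky integration
    spikes_per_step.append(int(fired.sum()))

print("mean spikes per step:", sum(spikes_per_step) / len(spikes_per_step))
```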

Combining lab-grown muscle tissue with a series of flexible mechanical joints has led to the development of an artificial hand that can grip and make gestures. The breakthrough shows the way forward for a new kind of robotics with a range of potential applications.

While we’ve seen plenty of soft robots at New Atlas and a truly inspiring range of mechanical prosthetics, we’ve yet to see too many inventions that quite literally combine human tissue with machines. That’s likely because the world of biohybrid science is still in its very early stages. Sure, there was an artificial fish powered by human heart cells and a robot that used a locust’s ear to hear, but in terms of the practical use of the technology, the field has remained somewhat empty.

Now though, researchers at the University of Tokyo and Waseda University in Japan have shown a breakthrough demonstrating the real promise of the technology.

Nanozymes are a class of nanomaterials that exhibit catalytic functions analogous to those of natural enzymes. They demonstrate considerable promise in the biomedical field, particularly in the treatment of bone infections, due to their distinctive physicochemical properties and adjustable catalytic activities. Bone infections (e.g., periprosthetic infections and osteomyelitis) are challenging to treat clinically, and traditional treatments often encounter drug resistance and suboptimal anti-infection outcomes. The advent of nanozymes has brought a new avenue of hope for the treatment of bone infections.

UC Davis Health is pleased to announce that Neurosurgeon David Brandman and his team at UC Davis Neuroprosthetics Lab were selected for a 2025 Top Ten Clinical Research Achievement Award. The Clinical Research Forum presents this award to honor 10 outstanding clinical research studies published in peer-reviewed journals in the previous year. This year’s Top 10 Awards ceremony will be held on April 14 in Washington, D.C.

Brandman and his team are recognized for their groundbreaking work in developing a new brain-computer interface (BCI) that translates brain signals into speech with up to 97% accuracy — the most accurate system of its kind. Their work was published in the New England Journal of Medicine.

“Our team is very honored that our study was selected among the nation’s best published clinical research studies. Our work demonstrates the most accurate speech neuroprosthesis (device) ever reported,” said Brandman, co-director of the Neuroprosthetics Lab. He is an assistant professor in the UC Davis Department of Neurological Surgery.

A class of synthetic soft materials called liquid crystal elastomers (LCEs) can change shape in response to heat, similar to how muscles contract and relax in response to signals from the nervous system. 3D printing these materials opens new avenues to applications, ranging from soft robots and prosthetics to compression textiles.

Controlling the material’s properties requires squeezing this elastomer-forming ink through the nozzle of a 3D printer, which induces changes to the ink’s internal structure and aligns rigid building blocks known as mesogens at the molecular scale. However, achieving specific, targeted alignment, and the resulting properties, in these shape-morphing materials has required extensive trial and error to fully optimize printing conditions. Until now.

In a new study, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), Princeton University, Lawrence Livermore National Laboratory, and Brookhaven National Laboratory worked together to write a playbook for printing liquid crystal elastomers with predictable, controllable alignment, and hence properties, every time.
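The study's actual process-structure "playbook" is more detailed than this, but a rough sense of why print settings matter can be sketched with a simple Newtonian estimate: for pressure-driven flow through a cylindrical nozzle, the wall shear rate scales as 4Q/(πR³), so nozzle radius and print speed strongly change the shear the mesogens experience. LCE inks are not Newtonian, so this is only an illustrative stand-in.

```python
# Rough illustration: wall shear rate for Newtonian flow through a cylindrical
# nozzle, 4Q / (pi * R^3). With Q = (nozzle area) x (print speed), this reduces
# to 4v / R. Not the paper's model; it only shows how print settings shift shear.

import math

def wall_shear_rate(nozzle_radius_m: float, print_speed_m_s: float) -> float:
    """Wall shear rate (1/s), assuming extruded volume matches nozzle area x speed."""
    flow_rate = math.pi * nozzle_radius_m**2 * print_speed_m_s   # Q = A * v
    return 4.0 * flow_rate / (math.pi * nozzle_radius_m**3)      # = 4 v / R

for radius_um, speed_mm_s in [(200, 5), (200, 20), (100, 20)]:
    gamma = wall_shear_rate(radius_um * 1e-6, speed_mm_s * 1e-3)
    print(f"R = {radius_um} um, v = {speed_mm_s} mm/s -> shear ~ {gamma:.0f} 1/s")
```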

Biopunk: androids and replicants.


What happens when humans begin combining biology with technology, harnessing the power to recode life itself?

What does the future of biotechnology and genetic engineering look like? How will humans program biology to create organ farm technology and bio-robots? And what happens when companies begin investing in advanced bio-printing, artificial wombs, and cybernetic prosthetic limbs?