
Words of the prophet.


What happens when AI surpasses human intelligence, accelerating its own evolution beyond our control? This is the Singularity, a moment where technology reshapes the world in ways we can’t yet imagine.
Futurist Ray Kurzweil predicts that by 2045, AI will reach this point, merging with human intelligence through Brain-Computer Interfaces (BCIs) and redefining the future of civilization. But as we move closer to this reality, we must ask: Will the Singularity be humanity’s greatest leap or its greatest risk?
Chapters:

00:00 — 00:48 Intro
00:48 — 01:51 Technological Singularity
01:51 — 05:09 Kurzweil’s Predictions and Accuracy
05:09 — 07:32 The Path to the Singularity
07:32 — 08:51 Brain-Computer Interfaces (BCIs)
08:51 — 12:14 The Singularity: What Happens Next?
12:14 — 14:14 The Concerns: Are We Ready?
14:14 — 15:11 The Countdown to 2045
The countdown has already begun. Are we prepared for what’s coming?
#RayKurzweil #Singularity #AI #FutureTech #ArtificialIntelligence #BrainComputerInterface

I’ve long been fascinated by the fundamental mystery of our universe’s origin. In my work, I explore an alternative to the traditional singularity-based models of cosmology. Instead of a universe emerging from an infinitely dense point, I propose that a flat universe and its time-reversed partner—an anti-universe—can emerge together from nothing through a smooth, quantum process.

This model, described in a manuscript accepted for publication in Europhysics Letters, addresses some of the key challenges in earlier proposals, such as the Hartle–Hawking no-boundary and Vilenkin’s tunneling approaches.

What happens when AI becomes infinitely smarter than us—constantly upgrading itself at a speed beyond human comprehension? This is the Singularity, a moment where AI surpasses all limits, leaving humanity at a crossroads.
Elon Musk predicts superintelligent AI by 2029, while Ray Kurzweil envisions the Singularity by 2045. But if AI reaches this point, will it be our greatest breakthrough or our greatest threat?
The answer might change everything we know about the future.

Chapters:

00:00 — 01:15 Intro
01:15 — 03:41 What Is the Singularity Paradox?
03:41 — 06:19 How Will the Singularity Happen?
06:19 — 09:05 What Will the Singularity Look Like?
09:05 — 11:50 How Close Are We?
11:50 — 14:13 Challenges and Criticism

#AI #Singularity #ArtificialIntelligence #ElonMusk #RayKurzweil #FutureTech

As for these new JWST findings, Poplawski told Space.com: “It would be fascinating if our universe had a preferred axis. Such an axis could be naturally explained by the theory that our universe was born on the other side of the event horizon of a black hole existing in some parent universe.”

He added that black holes form from stars, at the centers of galaxies, and most likely in globular clusters, all of which rotate. That means black holes also rotate, and the axis of rotation of a black hole would influence a universe created by that black hole, manifesting itself as a preferred axis.

“I think that the simplest explanation of the rotating universe is the universe was born in a rotating black hole. Spacetime torsion provides the most natural mechanism that avoids a singularity in a black hole and instead creates a new, closed universe,” Poplawski continued. “A preferred axis in our universe, inherited by the axis of rotation of its parent black hole, might have influenced the rotation dynamics of galaxies, creating the observed clockwise-counterclockwise asymmetry.”

Howard Bloom, Dr. Ben Goertzel, and Dr. Mihaela Ulieru examine how principles of emergent intelligence in natural systems can inform artificial general intelligence (AGI) development.

Join us at the Beneficial AGI Summit & Unconference 2025 (May 26–28 in Istanbul) to learn more about these topics and collaborate on addressing the critical challenges of developing beneficial AGI. Register now to watch online or attend in person: https://bgisummit.io/

00:00 Intro
01:20 Howard Bloom’s Online Journey and the Global Brain
04:33 Ben Goertzel’s Perspective on the Global Brain
09:07 The Evolution of Intelligence and AI
12:42 Challenges and Philosophies in AI Development
17:56 Human Values and AI: A Complex Relationship
24:18 The Role of Compassion in AI and Human Evolution
29:31 Tribalism and Ethical Reasoning in AI
30:16 Emergence of AI Values
31:26 Self-Organization and Compassion in AI
32:21 Ethical Theories and AI Attractors
34:33 Future Economy and AI Impact
34:48 AI and Human Economy Transformation
35:44 Cosmic Ambitions and AI
37:15 Competition Among AIs
38:00 Vision of Beneficial AGI
38:20 Path to Human-Level AGI
42:32 Emergence and Cooperation in AI
46:17 Singularity and Human Nature
50:09 Punctuated Equilibrium in AI Development
52:27 Engineering the Future of Intelligence
54:22 Closing Thoughts on AI and the Future

#AGI #AI #BGI

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive and beneficial Artificial General Intelligence (AGI). According to Dr. Goertzel, AGI should be independent of any central entity, open to anyone and not restricted to the narrow goals of a single corporation or even a single country. The SingularityNET team includes seasoned engineers, scientists, researchers, entrepreneurs, and marketers. The core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts and entertainment.

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.
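A minimal, hypothetical sketch of that dynamic (the data, feature names, and numbers below are invented purely for illustration, using scikit-learn) shows how a model fit to skewed historical decisions simply reproduces the skew for otherwise identical candidates:

```python
# Illustrative toy example only: all data below is invented.
# A classifier trained on biased historical hiring decisions learns the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy records: columns = [years_of_experience, group_membership (0 or 1)].
# Historically, group 1 candidates were mostly rejected despite equal experience.
X = np.array([
    [2, 0], [3, 0], [5, 0], [6, 0],
    [2, 1], [3, 1], [5, 1], [6, 1],
])
y = np.array([1, 1, 1, 1,   # group 0: hired
              0, 0, 0, 1])  # group 1: mostly rejected

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in group membership:
p0 = model.predict_proba([[5, 0]])[0, 1]
p1 = model.predict_proba([[5, 1]])[0, 1]
print(f"Predicted hire probability, group 0: {p0:.2f}")
print(f"Predicted hire probability, group 1: {p1:.2f}")
# The gap comes entirely from the biased history, not from qualifications.
```

The facial-recognition and hiring failures mentioned above are this same pattern operating at much larger scale, which is why auditing models for group-dependent outputs matters.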

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian and colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Our understanding of black holes, time and the mysterious dark energy that dominates the universe could be revolutionized, as new University of Sheffield research helps unravel the mysteries of the cosmos.

Black holes—areas of space where gravity is so strong that not even light can escape—have long been objects of fascination, with astrophysicists and others dedicating their lives to revealing their secrets. This fascination with the unknown has inspired numerous writers and filmmakers, with novels and films such as “Interstellar” exploring these enigmatic objects’ hold on our collective imagination.

According to Einstein’s theory of general relativity, anyone trapped inside a black hole would fall toward its center and be destroyed by immense gravitational forces. This center, known as a singularity, is where the matter of the giant star believed to have collapsed to form the black hole is crushed into an infinitesimally small point. At this singularity, our understanding of physics and time breaks down.
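A standard textbook illustration of why physics breaks down there (a general statement about the idealized, non-rotating Schwarzschild solution, not a result from the Sheffield study itself) is that the spacetime curvature, as measured by the Kretschmann invariant, grows without bound as the center is approached:

```latex
% Kretschmann curvature scalar of the Schwarzschild solution
% (G = Newton's constant, M = black hole mass, r = radial coordinate)
K = R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma}
  = \frac{48\, G^{2} M^{2}}{c^{4}\, r^{6}}
  \longrightarrow \infty \quad \text{as } r \to 0 .
```

Because this divergence cannot be removed by any change of coordinates, general relativity stops making predictions at r = 0, which is what is meant by our understanding of physics and time breaking down at the singularity.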