Think transhumanism is a relatively new social and intellectual phenomenon? Guess again.
Many of the ideas characteristic of the movement have been bandied about for hundreds of years, whether radical life extension or the construction of machine minds. The Enlightenment period in particular was a fruitful time for these ideas to take flight, mostly on account of the new sciences, the rise of rationalism and secular humanism, and the waning influence of religion. Two thinkers who best exemplified Enlightenment-era proto-transhumanism were Denis Diderot and the Marquis de Condorcet, and their early contributions are worth revisiting.
In 2013, physicist Alex Wissner-Gross published a single equation for intelligence in *Physical Review Letters*:

F = T ∇S_τ
The force of an intelligent system equals its temperature — computational capacity, raw horsepower — multiplied by the gradient of its future option-space. Intelligence is not a mysterious property of carbon-based brains.
It is a physical force: the tendency of any sufficiently energetic system to maximize the number of future states accessible to it.
The equation was elegant. Correct. And incomplete.
It describes the force. It does not describe the geometry of the space through which that force navigates.
A gradient without a metric is a direction without distance — it tells the system where to push but not what distortion it will encounter on the way there.
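The idea can be made concrete with a toy model. In the sketch below (a minimal illustration; the function names, the 1-D bounded world, and all parameters are my own, not from Wissner-Gross's paper), a walker on a line with walls counts how many distinct τ-step futures remain open from each position. The entropic force is the temperature times a finite-difference gradient of the log of that count, and it pushes the walker away from walls, toward positions with more accessible futures.

```python
import math

def n_paths(x, tau, lo, hi):
    """Count length-tau random-walk trajectories from x that stay in [lo, hi]."""
    counts = {x: 1}
    for _ in range(tau):
        nxt = {}
        for pos, c in counts.items():
            for step in (-1, 1):
                p = pos + step
                if lo <= p <= hi:  # futures that hit a wall are pruned
                    nxt[p] = nxt.get(p, 0) + c
        counts = nxt
    return sum(counts.values())

def entropic_force(x, tau=8, lo=0, hi=20, T=1.0):
    """F = T * dS_tau/dx, with S_tau = log(# accessible futures),
    approximated by a central difference."""
    s_plus = math.log(n_paths(x + 1, tau, lo, hi))
    s_minus = math.log(n_paths(x - 1, tau, lo, hi))
    return T * (s_plus - s_minus) / 2

# Near the left wall the force points right (toward open futures);
# in the middle it vanishes by symmetry; near the right wall it points left.
print(entropic_force(2))   # positive
print(entropic_force(10))  # zero
print(entropic_force(18))  # negative
```

This is exactly the sense in which the gradient gives a direction but no metric: the force says which way option-space opens up, not what the terrain between here and there costs to cross.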
We spent three years building the geometry. We tested it across 69 billion simulations. What we found changes everything.

## The Missing Geometry: From Force to Navigation
Are minds just processes? Can AI become conscious, morally wiser, or even part of a larger collective intelligence? Anders Sandberg and Joscha Bach discuss consciousness, AGI, hybrid minds, moral uncertainty, collective agency and the future of the cyborg Leviathan. It’s a deep and winding discussion with so many interesting topics covered!
0:00 Intro
0:37 What is consciousness? Phenomenology, functionalism & panpsychism
1:54 Causal boundaries: the mind is a causally organised process with a non-arbitrary functional boundary, sustained through time by feedback, control, and internal continuity
3:20 Minds are not states, they are processes. We don't see causal filtering in tables
5:54 Epiphenomenalism is self-undermining if it has no causal role, and taking causation seriously pushes towards functionalism
9:49 Methodological humility about armchair philosophy of mind
12:41 Putnam-style brain-in-a-vat, and why standard objections to AI minds fall flat
16:37 Is sentience required (or desired) not just for moral competence in AI, but for moral motivation as well?
22:35 Why stepping outside yourself is powerful: seeing
25:12 Are AIs born enlightened?
26:25 Are LLMs AGI yet? What's still missing
28:16 AI, hybrid minds, and the limits of human augmentation
32:32 Can minds be extended, in humans, dogs, and cats?
36:19 Why human language may not be open-ended enough
39:41 Why AI is so data-hungry, and why better algorithms must exist
43:39 Why better representations matter more than raw compute (grokking was surprising)
48:46 How babies build a world model from touch and perception
51:05 What comes after copilots: agent teams, multimodality and new AI workflows
55:32 Can AI help us discover new forms of taste and aesthetics?
59:49 Using AI to learn art history and invent a transhumanist aesthetic
1:01:47 When AI helps everyone look professional, what still counts as real skill?
1:03:56 What happens when the self starts to merge with AI
1:05:43 How AI changes the way we think and create
1:08:10 What happens when AI starts shaping human relationships
1:11:18 Why feeling in control can matter more than being right
1:12:58 Why intelligence without wisdom is very dangerous
1:17:45 AI via scaling statistical pattern matching vs symbolic (& causal) reasoning. Can LLMs learn causality or just correlation?
1:23:00 Will multimodal AI replace LLMs or use them as glue everywhere?
1:24:02 10 years to the singularity?
1:25:27 AI, coordination and the corruption problem
1:29:47 Can AI become more moral than us (humans)? And if so, should it?
1:34:31 Why pluralism still leaves moral collisions unresolved
1:34:31 Traversing the landscape of norms (value)
1:38:14 Can ethics work across nested levels of existence? (from the person-affecting view to the matrioshka-affecting view)
1:43:08 Moral realism, evolution & game-theoretic symmetries
1:48:01 Is there a global optimum of moral coordination? Is that god?
1:55:12 Metaphors of the body-politic, the body of Christ, Omega Point theory, Leviathan
1:59:36 Will superintelligences converge into a cosmic singleton?
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P… regards, Adam Ford
For centuries we treated technology as a tool, and now a new movement insists it is becoming the future of the human species itself.
Futurists like Harari and Kurzweil predict the merger of humans and machines, even the rise of a "digital God." But critics fear this proposed future, calling transhumanism "the world's most dangerous idea."
Is the future one where technology is not merely a source of innovation but the basis for a new account of what it is to be human, or are claims of eternal life and new forms of intelligence just fanciful nonsense?
Joining the debate are transhumanist pioneer Zoltan Istvan, physicist and consciousness researcher Àlex Gómez-Marín, philosopher of mind Susan Schneider, and Softmax co-founder Adam Goldstein.
"Effective accelerationists" have won high-profile Silicon Valley support and argue that we should accelerate technological development.
A new electronic implant system can help lab-grown pancreatic cells mature and function properly, potentially providing a basis for novel, cell-based therapies for diabetes. The approach, developed by researchers at the Perelman School of Medicine at the University of Pennsylvania and the School of Engineering and Applied Sciences at Harvard University, incorporates an ultrathin mesh of conductive wires into growing pancreatic tissue, according to a study published in Science.
“The words ‘bionic,’ ‘cybernetic,’ ‘cyborg,’ all of those apply to the device we’ve created,” said Juan Alvarez, Ph.D., an assistant professor of Cell and Developmental Biology. While these terms may sound futuristic, he noted this approach is already in use in the form of deep brain stimulation, which treats neurological conditions.
“What we’re doing is like deep stimulation for the pancreas. Just like pacemakers help the heart keep rhythm, controlled electrical pulses can help pancreatic cells develop and function the way they’re supposed to,” he said.
The compound eyes of the humble fruit fly are a marvel of nature. They are wide-angle and can process visual information several times faster than the human eye. Inspired by this biological masterpiece, researchers at the Chinese Academy of Sciences have developed an insect-scale compound eye that can both see and smell, potentially improving how drones and robots navigate complex environments and avoid obstacles.
Traditional cameras on robots and drones may excel at capturing high-definition photos, but they suffer from a narrow field of view and limited peripheral vision. They also tend to be bulky and power-hungry.
The Technological Singularity is the most overconfident idea in modern futurism: a prediction about the point where prediction breaks. It’s pitched like a destination, argued like a religion, funded like an arms race, and narrated like a movie trailer — yet the closer the conversation gets to specifics, the more it reveals something awkward and human. Almost nobody is actually arguing about “the Singularity.” They’re arguing about which future deserves fear, which future deserves faith, and who gets to steer the curve when it stops looking like a curve and starts looking like a cliff.
The Singularity begins as a definitional hack: a word borrowed from physics to describe a future boundary condition — an “event horizon” where ordinary forecasting fails. I. J. Good — British mathematician and early AI theorist — framed the mechanism as an “intelligence explosion,” where smarter systems build smarter systems and the loop feeds on itself. Vernor Vinge — computer scientist and science-fiction author — popularized the metaphor that, after superhuman intelligence, the world becomes as unreadable to humans as the post-ice age would have been to a trilobite.
In my podcast interviews, the key move is that “Singularity” isn’t one claim — it’s a bundle. Gennady Stolyarov II — transhumanist writer and philosopher — rejects the cartoon version: “It’s not going to be this sharp delineation between humans and AI that leads to this intelligence explosion.” In his framing, it’s less “humans versus machines” than a long, messy braid of tools, augmentation, and institutions catching up to their own inventions.
Twelve-year-old Kai Pollnitz from Georgetown received a life-changing surprise when YouTube creator MrBeast gifted him a custom Open Bionics Hero PRO robotic hand. Kai, who was born with a congenit…
In a recent study, researchers from China have developed a chip-scale LiDAR system that mimics the human eye’s foveation by dynamically concentrating high-resolution sensing on regions of interest (ROIs) while maintaining broad awareness across the full field of view.
The study is published in the journal Nature Communications.
LiDAR systems power machine vision in self-driving cars, drones, and robots by firing laser beams to map 3D scenes with millimeter precision. The eye packs its densest sensors in the fovea (sharp central vision spot) and shifts gaze to what’s important. By contrast, most LiDARs use rigid parallel beams or scans that spread uniform (often coarse) resolution everywhere. Boosting detail means adding more channels uniformly, which explodes costs, power, and complexity.
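The foveation trade-off described above can be sketched numerically. The toy function below is my own illustration, not the researchers' chip design: given a fixed beam budget, it packs most of the beams densely into a region of interest and spreads the remainder coarsely across the rest of the field of view, so total channel count (and hence cost and power) stays constant while ROI resolution improves.

```python
import numpy as np

def foveated_angles(fov=(-60.0, 60.0), roi=(-10.0, 10.0),
                    n_beams=64, roi_fraction=0.75):
    """Split a fixed beam budget: dense angular sampling inside the ROI,
    coarse sampling over the remaining field of view."""
    n_roi = int(n_beams * roi_fraction)
    n_periph = n_beams - n_roi
    dense = np.linspace(roi[0], roi[1], n_roi)
    left = np.linspace(fov[0], roi[0], n_periph // 2, endpoint=False)
    right = np.linspace(roi[1], fov[1], n_periph - n_periph // 2 + 1)[1:]
    return np.sort(np.concatenate([left, dense, right]))

angles = foveated_angles()
roi_beams = angles[(angles >= -10) & (angles <= 10)]
periph_beams = angles[angles > 10]
print(f"ROI spacing ~{np.mean(np.diff(roi_beams)):.2f} deg, "
      f"periphery ~{np.mean(np.diff(periph_beams)):.2f} deg")
# → ROI spacing ~0.43 deg, periphery ~6.25 deg
```

A uniform scan with the same 64 beams over the same 120° field would give roughly 1.9° everywhere; foveation buys a roughly four-fold finer grid in the ROI by accepting coarser coverage at the edges.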
This video explores aliens, mind uploading to other species, genetic engineering, and future robots.
SOURCES: • https://en.wikipedia.org/wiki/Eagle_eye#:~… • https://www.scientificamerican.com/ar… • https://en.wikipedia.org/wiki/Human_c… • https://vcahospitals.com/know-your-pe…
💡 Future Business Tech explores the future of technology and the world.
Examples of topics I cover include: • Artificial Intelligence & Robotics. • Virtual and Augmented Reality. • Brain-Computer Interfaces. • Transhumanism. • Genetic Engineering.