From networks XII: inventions & invocations by
Shawn Bell
Less is more: Reducing zinc to boost stem cell-derived islet function and survival.
Zinc is required for insulin packaging into secretory granules, yet reduced zinc transporter activity paradoxically enhances beta cell function. In this issue, Wang et al. show that pharmacologic inhibition of zinc transport in stem cell-derived islets activates AMPK signaling and improves maturation, hypoxia resistance, VEGFA expression, and graft performance.
Glycogen and lactate metabolism in mouse fetal Sertoli cells sustain the germ line.
Estermann and Sheheen et al. identify a metabolic coupling between fetal Sertoli and germ cells in mice, driven by glycogen breakdown and lactate transport through the MCT4/MCT1 shuttle. This interaction is essential for supporting fetal germ cell development.
When NVIDIA founder and CEO Jensen Huang told podcaster Lex Fridman in a recent interview that he thinks we have already achieved AGI, I understood why the statement landed with such force. Today’s systems are impressive, useful, and often psychologically persuasive. They can create the feeling that the threshold has already been crossed. But my answer is no: we have not achieved AGI just yet. In my 2026 book, SUPERALIGNMENT: The Three Approaches to the AI Alignment Problem — How to Ensure the Arrival of Benevolent Artificial Superintelligence Aligned with Human Goals and Values, I argue that AGI should not be declared based on hype, surprise, or market excitement. It should be recognized only when three far more meaningful benchmarks are met.
In fact, one of the reasons this debate keeps spiraling into confusion is that we have been trapped for years in the “moving goalposts” problem. By practical conversational standards, machines passed the Turing test long ago. But every time AI masters a previously “human-exclusive” capacity—dialogue, strategy, writing, even emotional style—many observers simply redefine that achievement as mere automation. That is precisely why I reject unstable, psychology-based thresholds. If our benchmark is just whatever still makes humans feel uniquely special, then AGI will always remain one step away by definition.
That is why, in SUPERALIGNMENT, I start with operational definitions of AGI and ASI. For me, AGI is not merely a system that performs well across many cognitive tasks. It is a system that can generalize knowledge across domains, reason abstractly, adapt to open and uncertain environments, transfer learned knowledge to novel contexts, and introspect on its own reasoning. In other words, AGI is not just impressive breadth. It is flexible, self-reflective generality on par with or above human capabilities. That is a much higher bar than what most people mean when they casually say, "AI is already general."
Scientists have unveiled a new approach to ultra-secure communication that could make quantum encryption simpler and more efficient than ever before. By harnessing a 19th-century optics phenomenon called the Talbot effect, researchers developed a system that sends information using multiple states of single photons instead of just two, dramatically boosting data capacity. Even more impressive, the setup works with standard components and requires only a single detector, reducing cost and complexity.
We’re often seduced by the idea that the mind is a computer, and that consciousness is just a matter of running the right code. But philosopher Peter Godfrey-Smith, renowned for his work on octopus minds, disagrees. Fresh research into animal minds—from bees to jellyfish—suggests that consciousness arises not from software but from electrical oscillations moving rhythmically across cell membranes in living brains. And those oscillations, Godfrey-Smith argues, are unlikely to be reproducible in artificial hardware. Perhaps, then, only living brains can truly be conscious.
Late in the previous century, there seemed to be good reasons to think that the physical make-up of a system could not matter much to whether that system had a mind. The organization of the system is what matters, people thought, and physically different systems can be organized the same way. As a result, artificial minds making use of ordinary computer hardware should be possible. This whole discussion was hypothetical, because there weren’t any convincing possible cases of artificial minds to worry about.
Since then, two things have happened. First, from around 2022, we have been confronted with candidates for artificial minds that are disturbingly impressive: the LLM systems, such as ChatGPT. Second, reasons have emerged to doubt that the physical make-up of a system is irrelevant, that is, to doubt that minds are "substrate independent." A view sometimes called biological naturalism holds that the biological details of nervous systems might make a difference to whether a physical system has a mind. (The term was coined, with this sense at least, by John Searle.) But if nervous systems and brains are special, what is it that makes them special?