
To keep his Universe static, Einstein added a term to the equations of general relativity, one he initially described as a negative pressure. It soon became known as the cosmological constant. Mathematics allowed the concept, but it had no justification from physics, no matter how hard Einstein and others tried to find one. The cosmological constant clearly detracted from the formal beauty and simplicity of Einstein’s original equations of 1915, which achieved so much without any need for arbitrary constants or additional assumptions. It amounted to a cosmic repulsion chosen to precisely balance the tendency of matter to collapse on itself. In modern parlance we call this fine tuning, and in physics it is usually frowned upon.
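To make the fine tuning concrete, here is a sketch in modern notation (conventions for the constants vary): the field equations with the Λ term admit a static, matter-filled universe only if Λ is set to one exact value determined by the matter density ρ.

```latex
% Field equations with the cosmological-constant term, and the
% static-universe condition (pressureless matter, closed geometry)
% that fixes Lambda exactly -- the "fine tuning" described above.
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad
\dot a = \ddot a = 0 \;\Longrightarrow\;
\Lambda = \frac{4\pi G \rho}{c^{2}},
\quad a = \frac{1}{\sqrt{\Lambda}} .
```

Nudge ρ even slightly away from that value and the model starts to collapse or to expand, which is why the balance counts as fine tuning.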

Einstein knew that the only reason for his cosmological constant to exist was to secure a static and stable finite Universe. He wanted this kind of Universe, and he did not want to look much further. Quietly hiding in his equations, though, was another model for the Universe, one with an expanding geometry. In 1922, the Russian physicist Alexander Friedmann would find this solution. As for Einstein, it was only in 1931, after visiting Hubble in California, that he accepted cosmic expansion and discarded at long last his vision of a static Cosmos.

Einstein’s equations provided a much richer Universe than the one Einstein himself had originally imagined. But like the mythic phoenix, the cosmological constant refuses to go away. Nowadays it is back in full force, as we will see in a future article.

Finally, a rational exploration of what ChatGPT actually knows and what that means.


Try out my quantum mechanics course (and many others on math and science) on Brilliant using the link https://brilliant.org/sabine. You can get started for free, and the first 200 will get 20% off the annual premium subscription.

I used to think that today’s so-called “artificial intelligences” were actually pretty dumb. But I’ve recently changed my mind. In this video I want to explain why I think that they do understand some of what they do, if not very much. And since I was already freely speculating, I have added some thoughts about how the situation with AIs is going to develop.

💌 Support us on Donatebox ➜ https://donorbox.org/swtg
👉 Transcript and References on Patreon ➜ https://www.patreon.com/Sabine
📩 Sign up for my weekly science newsletter. It’s free! ➜ https://sabinehossenfelder.com/newsletter/
🔗 Join this channel to get access to perks ➜
https://www.youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join


Researchers at Empa, ETH Zurich and the Politecnico di Milano are developing a new type of computer component that is more powerful and easier to manufacture than its predecessors. Inspired by the human brain, it is designed to process large amounts of data fast and in an energy-efficient way.

In many respects, the human brain is still superior to modern computers. Although most people can’t do math as fast as a computer, we can effortlessly process complex sensory information and learn from experiences, while a computer cannot, at least not yet. And the brain does all this while consuming less than half as much energy as a laptop.

One of the reasons for the brain’s energy efficiency is its structure. The individual brain cells—the neurons and their connections, the synapses—can both store and process information. In computers, however, the memory is separate from the processor, and data must be transported back and forth between these two components. The speed of this transfer is limited, which can slow down the whole computer when working with large amounts of data.
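A rough way to see the cost of that separation: for simple operations, the time spent moving data between memory and processor can dwarf the time spent computing on it. The Python sketch below uses made-up but plausible bandwidth and throughput figures (assumptions for illustration, not measurements of any real machine) to show the imbalance that in-memory, brain-inspired designs aim to remove.

```python
# Back-of-the-envelope sketch of the memory bottleneck described above.
# The bandwidth and compute figures are illustrative assumptions only.

def elementwise_update_time(n_values: int,
                            bytes_per_value: int = 4,
                            flops_per_value: int = 2,
                            mem_bandwidth_gb_s: float = 50.0,   # assumed memory bandwidth
                            compute_gflops: float = 500.0):     # assumed processor throughput
    """Estimate whether moving the data or computing on it dominates."""
    # Each value is read from memory and written back once.
    transfer_s = n_values * bytes_per_value * 2 / (mem_bandwidth_gb_s * 1e9)
    compute_s = n_values * flops_per_value / (compute_gflops * 1e9)
    return transfer_s, compute_s

transfer, compute = elementwise_update_time(100_000_000)
print(f"transfer: {transfer * 1e3:.1f} ms, compute: {compute * 1e3:.1f} ms")
```

With those assumed numbers the processor spends most of its time waiting on the memory bus; storing and processing information in the same component, as neurons and synapses do, sidesteps that transfer entirely.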

For fans of bioethical nightmares, it’s been a real stonker of a month. First, we had the suggestion that we use comatose women’s wombs to house surrogate pregnancies. Now, it appears we might have a snazzy idea for what to do with their brains, too: to turn them into hyper-efficient biological computers.

Lately, you see, techies have been worrying about the natural, physical limits of conventional, silicon-based computing. Recent developments in ‘machine learning’, in particular, have required exponentially greater amounts of energy – and corporations are concerned that further technological progress will soon become environmentally unsustainable. Thankfully, in a paper published this week, a team of American scientists pointed out something rather nifty: that the walnut-shaped, spongy computer in your skull doesn’t appear to be bound by anything like the same limitations – and that it might, therefore, provide us with something of a solution.

The human brain, the paper explains, is slower than machines at performing basic tasks (like mathematical sums), but much, much better at processing complex problems that involve limited, or ambiguous, data. Humans learn, that is, how to make smart decisions quickly, even when we only have small fragments of information to go on, in a way that computers simply can’t. For anything more sophisticated than arithmetic, sponge beats silicon by a mile.