
Two asteroids, including the newly detected 2024 MK, will pass Earth safely this week, coinciding with Asteroid Day. The event highlights efforts such as ESA’s asteroid deflection mission and its new Flyeye telescope system, both aimed at improving our ability to detect and respond to these celestial threats.

Two large asteroids will safely pass Earth this week, a rare occurrence perfectly timed to commemorate this year’s Asteroid Day. Neither poses any risk to our planet, but one of them was only discovered a week ago, highlighting the need to continue improving our ability to detect potentially hazardous objects in our cosmic neighborhood.


The event is so rare because of the asteroid’s large size – an average diameter of 375 meters (1,230 feet) – as well as its proximity to Earth.

“The 2029 flyby is an incredibly rare event,” ESA explained in an X post. “By comparing impact craters across the Solar System with the sizes and orbits of all known asteroids, scientists believe that an asteroid as large as Apophis only comes this close to Earth once every 5,000 to 10,000 years.”

From the article:

Longtermism asks fundamental questions and promotes the kind of consequentialism that should guide public policy.


Based on a talk delivered at the conference Existential Threats and Other Disasters: How Should We Address Them?, held May 30–31, 2024, in Budva, Montenegro, and sponsored by the Center for the Study of Bioethics, The Hastings Center, and The Oxford Uehiro Center for Practical Ethics.

For twenty years, I have been talking about old-age dependency ratios as an argument for universal basic income and for investing in anti-aging therapies to keep elders healthy longer. A declining number of young workers supporting a growing number of retirees is straining many welfare systems. Healthy seniors cost less to support and can work longer. UBI is more intergenerationally equitable, especially if we face technological unemployment.
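For a concrete sense of the statistic being invoked here, the short Python sketch below computes an old-age dependency ratio. The population figures are hypothetical, chosen only to show how the ratio worsens as the workforce shrinks; they are not taken from the article.

```python
# Illustrative sketch of the old-age dependency ratio: people past
# working age per 100 people of working age (the standard 15-64 band).
# All population figures below are hypothetical.

def old_age_dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    """Return retirees per 100 working-age people."""
    return 100 * pop_65_plus / pop_15_to_64

# Hypothetical populations in millions: retirees grow, workers shrink.
print(old_age_dependency_ratio(50, 200))  # 25.0 -> four workers per retiree
print(old_age_dependency_ratio(70, 180))  # ~38.9 -> under three workers per retiree
```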

New observations spotlight the volatile processes that shape star systems like our own, offering a unique glimpse into the primordial stages of planetary formation.

Astronomers have captured a snapshot of a giant asteroid collision in Beta Pictoris, revealing insights into early planetary formation. The study, using data from the James Webb and Spitzer Space Telescopes, tracked dust changes around the star. The findings suggest a massive collision took place roughly 20 years ago, altering our understanding of this young star system’s development.

Massive collision in the Beta Pictoris star system.

There’s a lot of other ways that AI could really take things in a bad direction.


One of the OpenAI directors who worked to oust CEO Sam Altman is issuing some stark warnings about the future of unchecked artificial intelligence.

In an interview during Axios’ AI+ summit, former OpenAI board member Helen Toner suggested that the risks AI poses to humanity aren’t just worst-case scenarios from science fiction.

“I just think sometimes people hear the phrase ‘existential risk’ and they just think Skynet, and robots shooting humans,” Toner said, referencing the evil AI technology from the “Terminator” films that’s often used as a metaphor for worst-case-scenario AI predictions.

Many fear future technologies may doom our civilization, but could the pursuit of technology, and civilization itself, be what dooms humanity?

Credits:
The Fermi Paradox: Timebombs
Episode 450; June 6, 2024
Written, Produced & Narrated by: Isaac Arthur
Music Courtesy of:
Epidemic Sound (http://epidemicsound.com/creator)
Stellardrone

Since the release of ChatGPT in November 2022, artificial intelligence (AI) has both entered the common lexicon and sparked substantial public interest. A blunt yet clear example of this transition is the drastic increase in worldwide Google searches for ‘AI’ from late 2022, which reached a record high in February 2024.

You would therefore be forgiven for thinking that AI is suddenly and only recently a ‘big thing.’ Yet the current hype was preceded by a decades-long history of AI research, a field of academic study widely considered to have been founded at the 1956 Dartmouth Summer Research Project on Artificial Intelligence.1 Since then, the field has followed a meandering trajectory of technical successes and ‘AI winters,’ eventually leading to the large language models (LLMs) that have nudged AI into today’s public consciousness.

Alongside those who aim to develop transformational AI as quickly as possible – the so-called ‘Effective Accelerationism’ movement, or ‘e/acc’ – exists a smaller and often ridiculed group of scientists and philosophers who call attention to the profound dangers inherent in advanced AI – the ‘decels’ and ‘doomers.’2 One of the most prominent concerned figures is Nick Bostrom, the Oxford philosopher whose wide-ranging works include studies of the ethics of human enhancement,3 anthropic reasoning,4 the simulation argument,5 and existential risk.6 I first read his 2014 book Superintelligence: Paths, Dangers, Strategies7 five years ago, and it convinced me that the risks a highly capable AI system (a ‘superintelligence’) would pose to humanity ought to be taken very seriously before such a system is brought into existence. These threats are of a different kind and scale from those posed by the AIs in existence today, including those developed for use in medicine and healthcare (such as the consequences of training set bias,8 uncertainties over clinical accountability, and problems regarding data privacy, transparency and explainability),9 and are of a truly existential nature. In light of the recent advancements in AI, I recently revisited the book to reconsider its arguments in the context of today’s digital technology landscape.

This is an issue that the character Ye Wenjie wrestles with in the first episode of Netflix’s 3 Body Problem. Working at a radio observatory, she finally receives a message from a member of an alien civilization, who tells her they are a pacifist and urges her not to respond, or Earth will be attacked.

The series will ultimately offer a detailed, elegant solution to the Fermi Paradox, but we will have to wait until the second season.

Or you can read the second book in Cixin Liu’s series, The Dark Forest. Without spoilers, the explanation set out in the books runs as follows: “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound.”

As we go on with our everyday lives, it’s very easy to forget about the sheer size of the universe.

The Earth may seem like a mighty place, but it’s practically a grain within a grain of sand in a universe that is estimated to contain over 200 billion galaxies. That’s something to think about the next time you take life too seriously.

So when we gaze up into the starry night sky, we have every reason to be awestruck—and overwhelmed with curiosity. With the sheer size of the universe and the number of galaxies, stars, and planets in it, surely there are other sentient beings out there. But how come we haven’t heard from them?