
The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030! — Dr. Roman Yampolskiy

WARNING: AI could end humanity, and we’re completely unprepared. Dr. Roman Yampolskiy reveals how AI will take 99% of jobs, why Sam Altman is ignoring safety, and how we’re heading toward global collapse…or even World War III.

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books including ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.

He explains:
⬛How AI could release a deadly virus.
⬛Why these 5 jobs might be the only ones left.
⬛How superintelligence will dominate humans.
⬛Why ‘superintelligence’ could trigger a global collapse by 2027
⬛How AI could be worse than nuclear weapons.
⬛Why we’re almost certainly living in a simulation.

00:00 Intro.
02:28 How to Stop AI From Killing Everyone.
04:35 What’s the Probability Something Goes Wrong?
04:57 How Long Have You Been Working on AI Safety?
08:15 What Is AI?
09:54 Prediction for 2027
11:38 What Jobs Will Actually Exist?
14:27 Can AI Really Take All Jobs?
18:49 What Happens When All Jobs Are Taken?
20:32 Is There a Good Argument Against AI Replacing Humans?
22:04 Prediction for 2030
23:58 What Happens by 2045?
25:37 Will We Just Find New Careers and Ways to Live?
28:51 Is Anything More Important Than AI Safety Right Now?
30:07 Can’t We Just Unplug It?
31:32 Do We Just Go With It?
37:20 What Is Most Likely to Cause Human Extinction?
39:45 No One Knows What’s Going On Inside AI
41:30 Ads.
42:32 Thoughts on OpenAI and Sam Altman.
46:24 What Will the World Look Like in 2100?
46:56 What Can Be Done About the AI Doom Narrative?
53:55 Should People Be Protesting?
56:10 Are We Living in a Simulation?
1:01:45 How Certain Are You We’re in a Simulation?
1:07:45 Can We Live Forever?
1:12:20 Bitcoin.
1:14:03 What Should I Do Differently After This Conversation?
1:15:07 Are You Religious?
1:17:11 Do These Conversations Make People Feel Good?
1:20:10 What Do Your Strongest Critics Say?
1:21:36 Closing Statements.
1:22:08 If You Had One Button, What Would You Pick?
1:23:36 Are We Moving Toward Mass Unemployment?
1:24:37 Most Important Characteristics.

Follow Dr Roman:
X — https://bit.ly/41C7f70
Google Scholar — https://bit.ly/4gaGE72

You can purchase Dr Roman’s book, ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’, here: https://amzn.to/4g4Jpa5

“AI Just Invented Miracle Cooling Paint”: Researchers Create Coating That Drops Building Temperatures 36 Degrees While Air Conditioning Industry Faces Extinction

In the ongoing battle against rising urban temperatures, a groundbreaking innovation offers a promising solution. A team of international researchers has

Gaia solves mystery of tumbling asteroids and finds new way to probe their interiors

Whether an asteroid is spinning neatly on its axis or tumbling chaotically, and how fast it is doing so, has been shown to be dependent on how frequently it has experienced collisions. The findings, presented at the recent EPSC-DPS2025 Joint Meeting in Helsinki, are based on data from the European Space Agency’s Gaia mission and provide a means of determining an asteroid’s physical properties—information that is vital for successfully deflecting asteroids on a collision course with Earth.

“By leveraging Gaia’s unique dataset, advanced modeling and A.I. tools, we’ve revealed the hidden physics shaping rotation, and opened a new window into the interiors of these ancient worlds,” said Dr. Wen-Han Zhou of the University of Tokyo, who presented the results at EPSC-DPS2025.

During its survey of the entire sky, the Gaia mission produced a huge dataset of asteroid rotations based on their light curves, which describe how the light reflected by an asteroid changes over time as it rotates. When the asteroid data is plotted on a graph of the rotation period versus diameter, something startling stands out—there’s a gap, or dividing line that appears to split two distinct populations.

Is violent AI-human conflict inevitable?

Are you worried that artificial intelligence and humans will go to war? AI experts are. In 2023, a group of elite thinkers signed onto the Center for AI Safety’s statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

In a survey published in 2024, 38% to 51% of top-tier AI researchers assigned a probability of at least 10% to advanced AI leading to outcomes as bad as human extinction.

The worry is not about the Large Language Models (LLMs) of today, which are essentially huge autocomplete machines, but about Artificial General Intelligence (AGI)—still-hypothetical long-term planning agents that can substitute for human labor across a wide range of society’s economic systems.

Testing if AI would break my legs to avoid shutdown

This video is essential viewing to get a good overall feel for where we are now — and where we’re going — with AI, AGI, and the risks we face because of both. It’s a fantastically entertaining watch too!
Would AI hurt us and cause human extinction? Use code insideai at https://incogni.com/insideai to get an exclusive 60% off.

Expert opinion from Geoffrey Hinton, Ilya Sutskever, Max Tegmark.
Google Gemini, OpenAI ChatGPT, DeepSeek, Grok.

0:00 — 0:38 — Intro.
0:39 — 0:59 — AI style.
1:00 — 1:19 — Max Chat GPT
1:20 — 1:36 — AI Girlfriend.
1:37 — 1:48 — Jailbroken AI
1:49 — 2:29 — AI Risk Questions 1
2:30 — 2:55 — Would AI turn on us?
2:56 — 3:20 — Intense AI Girlfriend.
3:21 — 3:44 — Jailbroken AI
3:45 — 4:57 — Can we Trust AI?
4:58 — 5:54 — Jailbreaking Max.
5:55 — 6:14 — AI Girlfriend.
6:15 — 6:42 — Jailbroken Max.
6:43 — 7:06 — Girlfriend in car.
7:07 — 7:45 — AI Risk Questions 2
7:46 — 9:48 — Incogni Ad.
9:49 — 10:27 — AI Girlfriend meets Max.
10:28 — 10:57 — Jailbroken Max.
10:58 — 11:37 — AI Risk Questions Pt 3
11:38 — 12:11 — AI Girlfriends good for us?
12:12 — 12:42 — Resetting Chat GPT
12:43 — 13:48 — Crazy AI Predictions.
13:49 — 14:50 — AI Safety.
14:51 — 15:09 — Ilya Sutskever.
15:10 — 15:26 — Geoffrey Hinton.
15:27 — 15:45 — Max Tegmark.
15:46 — 16:00 — AI Final Thought.

#artificialintelligence #AI #chatbot #superintelligence #aigirlfriend #insideai

Mars Perseverance rover data suggests presence of past microbial life

A new study co-authored by Texas A&M University geologist Dr. Michael Tice has revealed potential chemical signatures of ancient Martian microbial life in rocks examined by NASA’s Perseverance rover.

The findings, published by a large international team of scientists, focus on a region of Jezero Crater known as the Bright Angel formation—a name chosen from locations in Grand Canyon National Park because of the light-colored Martian rocks. This area in Mars’s Neretva Vallis channel contains fine-grained mudstones rich in oxidized iron (rust), phosphorus, sulfur and—most notably—organic carbon. Although organic carbon, potentially from non-living sources like meteorites, has been found on Mars before, this combination of materials could have been a rich source of energy for early microorganisms.

“When the rover entered Bright Angel and started measuring the compositions of the local rocks, the team was immediately struck by how different they were from what we had seen before,” said Tice, a geobiologist and astrobiologist in the Department of Geology and Geophysics.

‘Invisible’ asteroids near Venus may threaten Earth in the future

An international study led by researchers at São Paulo State University (UNESP) in Brazil has identified a little-known but potentially significant threat: Asteroids that share Venus’s orbit and may completely escape current observational campaigns because of their position in the sky. These objects have not yet been observed, but they could strike Earth within a few thousand years. Their impacts could devastate large cities.

“Our study shows that there’s a population of potentially dangerous asteroids that we can’t detect with current telescopes. These objects orbit the sun, but aren’t part of the asteroid belt, located between Mars and Jupiter. Instead, they’re much closer, in resonance with Venus. But they’re so difficult to observe that they remain invisible, even though they may pose a real risk of collision with our planet in the distant future,” astronomer Valerio Carruba, a professor at the UNESP School of Engineering at the Guaratinguetá campus (FEG-UNESP) and first author of the study, told Agência FAPESP.

The study is published in the journal Astronomy & Astrophysics. The work combined analytical modeling with long-term simulations to track the dynamics of these objects and assess their potential to come dangerously close to Earth.

Bilu Huang — CSO, Fuzhuang Therapeutics — Conquering Aging Via TRCS

Conquering aging via TRCS — the telomere DNA and ribosomal DNA co-regulation model for cell senescence — Bilu Huang, CSO, Fuzhuang Therapeutics.
Bilu Huang (https://biluhuang.com/) is a visionary scientist dedicated to finding solutions to some of the most pressing challenges facing humanity. His interdisciplinary work spans multiple fields, including biological aging, dinosaur extinction theories, geoengineering for carbon removal, and controlled nuclear fusion technology.

Born in Sanming City, Fujian Province, Huang is an independent researcher whose knowledge is entirely self-taught. Driven by curiosity and a relentless pursuit of scientific exploration, he has achieved numerous research results through his dedication and passion for science.

As a theoretical gerontologist, he proposed the telomere DNA and ribosomal DNA co-regulation model for cell senescence (TRCS). At Fuzhuang Therapeutics (https://lab.fuzhuangtx.com/en/), he is now using this theory to develop cell-rejuvenation biotechnology aimed at curing various age-related degenerative diseases and greatly extending human life.

#Aging #Longevity #BiluHuang #FuzhuangTherapeutics #TelomereDNAAndRibosomalDNACoRegulationModelForCell #Senescence #TRCS #DinosaurExtinctionResearch #CarbonRemovalTechnology #ControlledNuclearFusion #TelomereDNA #RibosomalDNA #CellularAging #GeneticProgram #Telomere #P53

Sorry Mr. Yudkowsky, we’ll build it and everything will be fine

Review of “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” (2025), by Eliezer Yudkowsky and Nate Soares, with very critical commentary.

I’ve been reading the book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” (2025), by Eliezer Yudkowsky and Nate Soares, published last week.

Yudkowsky and Soares present a stark warning about the dangers of developing artificial superintelligence (ASI), defined as artificial intelligence (AI) that vastly exceeds human intelligence. The authors argue that creating such AI using current techniques would almost certainly lead to human extinction and emphasize that ASI poses an existential threat to humanity. They argue that the race to build smarter-than-human AI is not an arms race but a “suicide race,” driven by competition and optimism that ignores fundamental risks.
