
Microsoft’s Majorana 1 quantum chip introduces a breakthrough Topological Core, enabling stable and scalable qubits.

By leveraging topoconductors, this innovation paves the way for million-qubit machines capable of solving complex scientific and industrial challenges.


Rufo Guerreschi.
https://www.linkedin.com/in/rufoguerreschi.

Coalition for a Baruch Plan for AI
https://www.cbpai.org/

0:00 Intro.
0:21 Rufo Guerreschi.
0:28 Contents.
0:41 Part 1: Why we have a governance problem.
1:18 From e-democracy to cybersecurity.
2:42 Snowden showed that international standards were needed.
3:55 Taking the needs of intelligence agencies into account.
4:24 ChatGPT was a wake-up moment for privacy.
5:08 Living in Geneva to interface with states.
5:57 Decision making is high up in government.
6:26 Coalition for a Baruch plan for AI
7:12 Parallels to organizations that manage nuclear safety.
8:11 Hidden coordination between intelligence agencies.
8:57 Intergovernmental treaties are not tight.
10:19 The original Baruch plan in 1946
11:28 Why the original Baruch plan did not succeed.
12:27 We almost had a different international structure.
12:54 A global monopoly on violence.
14:04 Could expand to other weapons.
14:39 AI is a second opportunity for global governance.
15:19 After Soviet tests, there was no secret to keep.
16:22 Proliferation risk of AI tech is much greater?
17:44 Scale and timeline of AI risk.
19:04 Capabilities of security agencies.
20:02 Internal capabilities of leading AI labs.
20:58 Governments care about impactful technologies.
22:06 Government compute, risk, other capabilities.
23:05 Are domestic labs outside their jurisdiction?
23:41 What are the timelines where change is required?
24:54 Scientists, Musk, Amodei.
26:24 Recursive self improvement and loss of control.
27:22 A grand gamble, the rosy perspective of CEOs.
28:20 CEOs can’t really say anything else.
28:59 Altman, Trump, Softbank pursuing superintelligence.
30:01 Superintelligence is clearly defined by Nick Bostrom.
30:52 Explain to people what “superintelligence” means.
31:32 Jobs created by Stargate project?
32:14 Will centralize power.
33:33 Sharing of the benefits needs to be ensured.
34:26 We are running out of time.
35:27 Conditional treaty idea.
36:34 Part 2: We can do this without a global dictatorship.
36:44 Dictatorship concerns are very reasonable.
37:19 Global power is already highly concentrated.
38:13 We are already in a surveillance world.
39:18 Affects influential people especially.
40:13 Surveillance is largely unaccountable.
41:35 Why did this machinery of surveillance evolve?
42:34 Shadow activities.
43:37 Choice of safety vs liberty (privacy)
44:26 How can this dichotomy be rephrased?
45:23 Revisit supply chains and lawful access.
46:37 Why the government broke all security at all levels.
47:17 The encryption wars and export controls.
48:16 Front door mechanism replaced by back door.
49:21 The world we could live in.
50:03 What would responding to requests look like?
50:50 Apple may be leaving “bug doors” intentionally.
52:23 Apple under same constraints as government.
52:51 There are backdoors everywhere.
53:45 China and the US need to both trust AI tech.
55:10 Technical debt of past unsolved problems.
55:53 Actually a governance debt (socio-technical)
56:38 Provably safe or guaranteed safe AI
57:19 Requirement: Governance plus lawful access.
58:46 Tor, Signal, etc are often wishful thinking.
59:26 Can restructure incentives.
59:51 Restrict proliferation without dragnet?
1:00:36 Physical plus focused surveillance.
1:02:21 Dragnet surveillance since the telegraph.
1:03:07 We have to build a digital dog.
1:04:14 The dream of cyber libertarians.
1:04:54 Is the government out to get you?
1:05:55 Targeted surveillance is more important.
1:06:57 A proper warrant process leveraging citizens.
1:08:43 Just like procedures for elections.
1:09:41 Use democratic system during chip fabrication.
1:10:49 How democracy can help with technical challenges.
1:11:31 Current world: anarchy between countries.
1:12:25 Only those with the most guns and money rule.
1:13:19 Everyone needing to spend a lot on military.
1:14:04 AI also engages states in a race.
1:15:16 Anarchy is not a given: US example.
1:16:05 The forming of the United States.
1:17:24 This federacy model could apply to AI
1:18:03 Same idea was even proposed by Sam Altman.
1:18:54 How can we maximize the chances of success?
1:19:46 Part 3: How to actually form international treaties.
1:20:09 Calling for a world government scares people.
1:21:17 Genuine risk of global dictatorship.
1:21:45 We need a world /federal/ democratic government.
1:23:02 Why people are not outspoken.
1:24:12 Isn’t it hard to get everyone on one page?
1:25:20 Moving from anarchy to a social contract.
1:26:11 Many states have very little sovereignty.
1:26:53 Different religions didn’t prevent common ground.
1:28:16 China and US political systems similar.
1:30:14 Coming together, values could be better.
1:31:47 Critical mass of states.
1:32:19 The Philadelphia convention example.
1:32:44 Start with say seven states.
1:33:48 Date of the US constitutional convention.
1:34:42 US and China both invited but only together.
1:35:43 Funding will make a big difference.
1:38:36 Lobbying to US and China.
1:38:49 Conclusion.
1:39:33 Outro

In today’s AI news, Chinese AI start-up DeepSeek wrapped up a week of revealing technical details about its development of a ChatGPT competitor built at a fraction of the typical cost, a move poised to accelerate global advances in the field. Over the past few days, DeepSeek published eight open-source projects on GitHub, the world’s largest open-source community.

In other advances, TikTok is preparing to sunset its creator marketplace in favor of a new, more expanded experience, the company has informed businesses and creators via email. The marketplace, which connects brands with creators to collaborate on ads and other sponsorships, will stop allowing creator invitations or the creation of new campaigns as of Saturday, the company says.

Meanwhile, Hume AI has unveiled Octave, an innovative text-to-speech (TTS) system that leverages large language model (LLM) technology to generate contextually aware and emotionally nuanced speech. The tool’s strikingly human-like voices position Octave as a leader in AI-driven voice synthesis, where traditional TTS systems often produce context-insensitive, monotonous output.

In videos, Anthropic’s CEO, Dario Amodei, returns to the Hard Fork podcast for a candid, wide-ranging interview. We discuss Anthropic’s brand-new Claude 3.7 Sonnet model, the A.I. arms race against China, and his hopes and fears for this technology over the next two years. Then, we gather up recent tech stories, put them into a hat and close out the week with a round of HatGPT.

The year is 102,023. A giant meteorite the size of Pluto is approaching the Solar System, flying straight toward Earth. But as the meteorite crosses Saturn’s orbit, a swarm of miner probes approaches it. Their scan reveals no minerals on the object, so the probes return empty-handed.
Meanwhile, at the Space Security Center in Alaska, military personnel are setting up a laser. The Solar System witnesses a sudden flare, and nothing remains of the dwarf-planet-sized meteorite. Unless hydrogen miners on Jupiter post videos of the annihilation on social media, no one will even notice. This is what the world will look like when humanity finally becomes a Type Two civilization on the Kardashev scale. We’ll have almost infinite energy reserves and the ability to prepare for interstellar flights, or to instantly destroy any threat. But will humanity really be safe? And what can ruin a Type Two civilization?
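For rough context, Carl Sagan’s interpolation formula, a standard back-of-the-envelope version of the Kardashev scale (the figures below are textbook values, not from the video), grades a civilization by the power $P$, in watts, that it harnesses:

$$K = \frac{\log_{10} P - 6}{10}$$

Type One corresponds to about $10^{16}$ W and Type Two to about $10^{26}$ W, so a civilization commanding the Sun’s full output of roughly $4 \times 10^{26}$ W rates $K = (\log_{10}(4 \times 10^{26}) - 6)/10 \approx 2.1$, the regime the video describes.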


There’s an arms race in medicine—scientists design drugs to treat lethal bacterial infections, but bacteria can evolve defenses to those drugs, sending the researchers back to square one. In an article published in the Journal of the American Chemical Society, a University of California, Irvine-led team describes the development of a drug candidate that can stop bacteria before they have a chance to cause harm.

“The issue with antibiotics is this crisis of antibiotic resistance,” said Sophia Padilla, a Ph.D. candidate in chemistry and lead author of the new study. “When it comes to antibiotics, bacteria can evolve defenses against them—they’re becoming stronger and always getting better at protecting themselves.”

About 35,000 people in the U.S. die each year from pathogens like Staphylococcus, while about 2.8 million people suffer from bacteria-related illnesses.

Smart bullets are real—and they might already be in use. From DARPA’s EXACTO to Russia’s secretive programs, guided bullets have come a long way since The Fifth Element. Here’s what we know.

Cyber Warfare, Explained.

From influencing elections to disrupting nuclear facilities, the threat of cyber warfare is both ever-present and mostly ignored. Israel, America, and Russia are just a few of the countries in the ever-growing cyber arms race.


Check out all my sources for this video here: https://docs.google.com/document/d/1gaOjUIm3ucnKpQawfkaP_YXn…p=sharing.

“Acquisitions and programs are moving forward,” an SDA spokesperson said in a statement to SpaceNews, adding that the agency is preparing to release a fresh solicitation for the 10 satellites in the near future.

Tranche 3 Tracking Layer proposals

In parallel with efforts to correct procurement missteps, SDA is advancing the first major satellite acquisition since Tournear’s removal: a 54-satellite procurement for the Tranche 3 Tracking Layer of the Proliferated Warfighter Space Architecture (PWSA). This next-generation missile tracking constellation builds on the foundation of earlier tranches, expanding coverage and improving real-time threat detection capabilities.

In this interview Jeff Sebo discusses the ethical implications of artificial intelligence and why we must take the possibility of AI sentience seriously now. He explores challenges in measuring moral significance, the risks of dismissing AI systems as mere tools, and strategies to mitigate suffering in artificial systems. Drawing on themes from the paper ‘Taking AI Welfare Seriously’ and his forthcoming book ‘The Moral Circle’, Sebo examines how to detect markers of sentience in AI systems, and what to do about it. We explore ethical considerations through the lens of population ethics and AI governance (especially important in an AI arms race), and discuss indirect approaches to detecting sentience, as well as AI’s role in aiding human welfare. This rigorous conversation probes the foundations of consciousness, moral relevance, and the future of ethical AI design.

Paper ‘Taking AI Welfare Seriously’: https://eleosai.org/papers/20241030_T…
The Moral Circle by Jeff Sebo: https://www.amazon.com.au/Moral-Circl?tag=lifeboatfound-20…
Jeff’s Website: https://jeffsebo.net/
Eleos AI: https://eleosai.org/

Chapters:
00:00 Intro
01:40 Implications of failing to take AI welfare seriously
04:43 Engaging the disengaged
08:18 How Blake Lemoine’s ‘disclosure’ influenced public discourse
12:45 Will people take AI sentience seriously if it is seen as tools or commodities?
16:19 Importance, neglectedness and tractability (INT)
20:40 Tractability: difficulties in measuring moral significance — i.e. by aggregate brain mass
22:25 Population ethics and the repugnant conclusion
25:16 Pascal’s mugging: low probabilities of infinite or astronomically large costs and rewards
31:21 Distinguishing real high-stakes causes from infinite utility scams
33:45 The nature of consciousness, and what to measure in looking for moral significance in AI
39:35 Varieties of views on what’s important. Computational functionalism
44:34 AI arms race dynamics and the need for governance
48:57 Indirect approaches to achieving ideal solutions — indirect normativity
51:38 The marker method — looking for morally relevant behavioral & anatomical markers in AI
56:39 What to do about suffering in AI?
1:00:20 Building fault tolerance to noxious experience into AI systems — reverse wireheading
1:05:15 Will AI be more friendly if it has sentience?
1:08:47 Book: The Moral Circle by Jeff Sebo
1:09:46 What kind of world could be achieved
1:12:44 Homeostasis, self-regulation and self-governance in sentient AI systems
1:16:30 AI to help humans improve mood and quality of experience
1:18:48 How to find out more about Jeff Sebo’s research
1:19:12 How to get involved
