BLOG

Archive for the ‘policy’ category: Page 9

Nov 25, 2023

The Exciting, Perilous Journey Toward AGI | Ilya Sutskever | TED

Posted by in categories: business, policy, robotics/AI

Just weeks before the management shakeup at OpenAI rocked Silicon Valley and made international news, the company’s cofounder and chief scientist Ilya Sutskever explored the transformative potential of artificial general intelligence (AGI), highlighting how it could surpass human intelligence and profoundly transform every aspect of life. Hear his take on the promises and perils of AGI — and his optimistic case for how unprecedented collaboration will ensure its safe and beneficial development. (Recorded October 17, 2023)



Nov 22, 2023

Why North Korea may use nuclear weapons first, and why current US policy toward Pyongyang is unsustainable

Posted by in categories: existential risks, military, nuclear energy, policy

I suggest two responses to this difficult challenge for the United States and its allies: At the time of attack, the allies should respond with nonnuclear retaliation as long as politically feasible, in order to prevent further nuclear escalation. However, this will be difficult given the likely post-strike panic and hysteria. So, in preparation, the US should deconcentrate its northeast Asian conventional footprint, to reduce North Korean opportunities to engage in nuclear blackmail regarding regional American clusters of military equipment and personnel, and to reduce potential US casualties and consequent massive retaliation pressures if North Korea does launch a nuclear attack.

North Korean first-use incentives. The incentives for North Korea to use nuclear weapons first in a major conflict are powerful:

Operationally, North Korea will likely have only a very short time window to use its weapons of mass destruction. The Americans will almost certainly try to immediately suppress Northern missiles. An imminent, massive US-South Korea disarming strike creates an extreme use-it-or-lose-it dilemma for Pyongyang. If Kim Jong-Un does not use his nuclear weapons at the start of hostilities, most will be destroyed a short time later by allied airpower, turning an inter-Korean conflict into a conventional war that the North will probably lose. Frighteningly, this may encourage Kim to also release his strategic nuclear weapons almost immediately after fighting begins.

Nov 20, 2023

UC Berkeley Researchers Propose an Artificial Intelligence Algorithm that Achieves Zero-Shot Acquisition of Goal-Directed Dialogue Agents

Posted by in categories: information science, policy, robotics/AI

Large Language Models (LLMs) have shown great capabilities in various natural language tasks such as text summarization, question answering, generating code, etc., emerging as a powerful solution to many real-world problems. One area where these models struggle, though, is goal-directed conversations where they have to accomplish a goal through conversing, for example, acting as an effective travel agent to provide tailored travel plans. In practice, they generally provide verbose and non-personalized responses.

Models trained with supervised fine-tuning or single-step reinforcement learning (RL) commonly struggle with such tasks because they are not optimized for the overall outcome of a conversation after multiple interactions. They also fall short in handling uncertainty within such conversations. In this paper, researchers from UC Berkeley explore a new method for adapting LLMs with RL for goal-directed dialogues. Their contributions include an optimized zero-shot algorithm and a novel system called the imagination engine (IE), which generates task-relevant and diverse questions to train downstream agents.

Since the IE cannot produce effective agents by itself, the researchers use an LLM to generate possible scenarios. To make an agent effective at achieving desired outcomes, multi-step reinforcement learning is then needed to determine the optimal strategy. The researchers make one modification to this approach: instead of using any on-policy samples, they use offline value-based RL to learn a policy from the synthetic data itself.
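To make the two-stage recipe concrete (an LLM-driven "imagination engine" producing synthetic dialogues, followed by offline value-based learning over that data), here is a minimal Python sketch. The toy dialogue generator, reward, and tabular Monte-Carlo-style backup are all illustrative assumptions, not the authors' actual algorithm or code.

```python
# Minimal sketch of the two-stage recipe described above. All names and the
# toy dialogue data are illustrative assumptions, not the paper's code: the
# real imagination engine prompts an LLM, and the real agent uses
# value-based offline RL over utterance-level transitions.

import random
from collections import defaultdict

def imagine_dialogue() -> tuple[list[tuple[str, str]], float]:
    """Stand-in for the imagination engine: return a short synthetic
    travel-agent dialogue as (user_message, agent_utterance) pairs plus a
    terminal reward (1.0 if the imagined user accepts the final plan)."""
    turns = [
        ("I want to travel somewhere warm in December.",
         "Do you prefer beaches or cities, and what is your budget?"),
        ("Beaches, and I'd like to keep it cheap.",
         "How about a week on the Algarve coast with hostel lodging?"),
    ]
    reward = random.choice([0.0, 1.0])  # toy proxy for task success
    return turns, reward

# Offline, value-based learning from the synthetic data only (no on-policy
# rollouts): a tabular Q-table over (user_message, agent_utterance) pairs,
# updated toward the discounted return of each imagined dialogue.
Q: defaultdict = defaultdict(float)
ALPHA, GAMMA = 0.1, 0.95

def train_offline(num_dialogues: int = 1000) -> None:
    for _ in range(num_dialogues):
        turns, final_reward = imagine_dialogue()
        ret = final_reward
        for user_msg, agent_msg in reversed(turns):
            key = (user_msg, agent_msg)
            Q[key] += ALPHA * (ret - Q[key])  # move Q toward observed return
            ret *= GAMMA                      # discount as we step back in time

if __name__ == "__main__":
    train_offline()
    print(f"learned values for {len(Q)} (state, utterance) pairs")
```

In the paper, the value function would presumably be represented by a language model rather than a table, but the data flow is the same idea: imagine dialogues with an LLM, then learn values offline from that synthetic data.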

Nov 19, 2023

Re-Thinking The ‘When’ And ‘How’ Of Brain Death

Posted by in categories: biotech/medical, law, neuroscience, policy

In an article published yesterday in MIT Technology Review, Rachel Nuwer wrote a thought-provoking piece exploring the boundaries between life and death.


Beyond the brain and brain death itself, related efforts are attempting to develop techniques for restoring metabolic function after death in other organs, including the heart and kidneys, which could greatly expand organ donation capabilities.

While these developments are promising, researchers caution against overpromising. The path to these medical advancements is paved with years of research and ethical considerations. The exploration into the dying process will surely challenge not only scientific and medical fields but also societal, theological, and legal considerations, as it reshapes our understanding of one of life’s most profound phenomena. At some point, policy and regulations will need to follow—further adding to the complexity of the topic.


Nov 12, 2023

Humane’s AI Pin up close

Posted by in categories: habitats, policy, robotics/AI, space

We spent 90 minutes with the pin and its founders at Humane’s SF offices.

A few hours after this morning’s big unveil, Humane opened its doors to a handful of press. Located in a nondescript building in San Francisco’s SoMa neighborhood, the office is home to the startup’s hardware design teams.


Nov 10, 2023

Cannabis Use Connected with Potential Long-Term Cardiology Issues

Posted by in categories: biotech/medical, health, policy

Two recent studies, due to be presented as posters at the American Heart Association (AHA) Scientific Sessions 2023 on November 11–13, examine how frequent cannabis use could increase the risk of cardiovascular problems, including heart attack, stroke, and heart failure. The studies were conducted by an international team of researchers and could help scientists, medical professionals, and the public better understand the long-term health risks of cannabis use, particularly for cardiovascular health.

For the first study, conducted through the All of Us Research Program, researchers enrolled 156,999 participants who had not experienced heart failure at the time of enrollment in a survey-based study of their cannabis use habits, with a follow-up survey 45 months later. Heart failure developed in 2,958 (1.88 percent) of the participants during the 48-month study period, and participants who reported daily cannabis use had a 34 percent higher risk of developing heart failure than participants who did not use cannabis.
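As a quick sanity check on the quoted figures (not part of the study itself), the cumulative incidence and the meaning of a 34 percent relative increase can be replayed in a few lines of Python; the non-user baseline risk used below is an illustrative assumption, not a study result.

```python
# Replaying the arithmetic behind the figures quoted above; the baseline
# risk for non-users is an illustrative assumption, not a study result.

participants = 156_999       # enrolled without heart failure at baseline
new_cases = 2_958            # heart-failure cases during follow-up

incidence = new_cases / participants
print(f"cumulative incidence: {incidence:.2%}")  # -> 1.88%, matching the report

relative_increase = 1.34        # "34 percent increased risk" for daily users
assumed_baseline_risk = 0.017   # hypothetical non-user risk over the same period
daily_user_risk = assumed_baseline_risk * relative_increase
print(f"illustrative daily-user risk: {daily_user_risk:.2%}")  # ~2.28%
```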

“Our results should encourage more researchers to study the use of marijuana to better understand its health implications, especially on cardiovascular risk,” said Dr. Yakubu Bene-Alhasan, who is a resident physician at Medstar Health and lead author of the study. “We want to provide the population with high-quality information on marijuana use and to help inform policy decisions at the state level, to educate patients and to guide health care professionals.”

Nov 8, 2023

Nuclear Disarmament and UN Reforms

Posted by in categories: ethics, existential risks, geopolitics, military, nuclear weapons, policy, treaties

Although the United Nations is essentially making nuclear weapons illegal through new disarmament treaties, Russia has taken a different route, one of possible nuclear escalation. The Doomsday Clock now sits closer to midnight, pointing to end-of-the-world scenarios driven by Russia's escalation and the possibility of all-out global nuclear war and nuclear annihilation of the planet. Wars always seem to be going on somewhere, but a global nuclear escalation is a game no one can win, since the resulting radiation would circulate around the entire planet. The US and China appear to be bound by treaty, but Russia is still escalating, and that now holds the world ransom.


This is a summary of Policy Brief 139 which is available with full references on the Toda Peace Institute’s website.

In January 2021, a global treaty came into force outlawing the bomb. The Treaty on the Prohibition of Nuclear Weapons (TPNW or Ban Treaty) is the most significant multilateral development in nuclear arms control since the Non-Proliferation Treaty’s (NPT) entry into force in 1970. It establishes a new normative settling point on the ethics, legality and legitimacy of the bomb.


Oct 31, 2023

FSS #11 Biotech, Neurotech and AI: Opportunities and Risks

Posted by in categories: biotech/medical, life extension, nanotechnology, neuroscience, policy, robotics/AI

The convergence of Biotechnology, Neurotechnology, and Artificial Intelligence has major implications for the future of humanity. This talk explores the long-term opportunities inherent to these fields by surveying emerging breakthroughs and their potential applications. Whether we can enjoy the benefits of these technologies depends on us: Can we overcome the institutional challenges that are slowing down progress without exacerbating civilizational risks that come along with powerful technological progress?

About the speaker: Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize. She advises companies and projects such as Cosmica and The Roots of Progress Fellowship, and is on the Executive Committee of the Biomarker Consortium. She holds an MS in Philosophy & Public Policy from the London School of Economics, focusing on AI Safety.

Oct 31, 2023

Can personalized care prevent excessive screening for colorectal cancer in older adults?

Posted by in categories: biotech/medical, health, policy

Colorectal cancer screening is widely recommended for adults ages 45 to 75 with an average risk of developing the disease. However, many people don’t realize that the benefits of screening for this type of cancer aren’t always the same for older adults.

“While many clinicians simply follow guideline recommendations for colon screening in adults within this age range, this isn’t always the best approach,” said Sameer Saini, M.D., M.S., a gastroenterologist at both Michigan Medicine and the Lieutenant Colonel Charles S. Kettles VA Medical Center and a health services researcher at the University of Michigan Institute for Healthcare Policy and Innovation and the Ann Arbor VA Center for Clinical Management Research, or CCMR.

“As individuals get older, they often acquire health problems that can lead to potential harm when coupled with endoscopy. While guidelines recommend a personalized approach to screening in average risk individuals between ages 76 and 85, there are no such recommendations for older adults who are younger than age 76—individuals who we commonly see in our clinics.”

Oct 30, 2023

Three things to know about the White House’s executive order on AI

Posted by in categories: government, policy, robotics/AI, security

The goal of the order, according to the White House, is to improve “AI safety and security.” It also includes a requirement that developers share safety test results for new AI models with the US government if the tests show that the technology could pose a risk to national security. This is a surprising move that invokes the Defense Production Act, typically used during times of national emergency.

The executive order advances the voluntary requirements for AI policy that the White House set back in August, though it lacks specifics on how the rules will be enforced. Executive orders are also vulnerable to being overturned at any time by a future president, and they lack the legitimacy of congressional legislation on AI, which looks unlikely in the short term.

“The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” says Anu Bradford, a law professor at Columbia University who specializes in digital regulation.

Page 9 of 93