
CSIS will host a public event on responsible AI in a global context, featuring a moderated discussion with Julie Sweet, Chair and CEO of Accenture, and Brad Smith, President and Vice Chair of the Microsoft Corporation, on the business perspective, followed by a conversation among a panel of experts on the best way forward for AI regulation. Dr. John J. Hamre, President and CEO of CSIS, will provide welcoming remarks.

Keynote Speakers:
Brad Smith, President and Vice Chair, Microsoft Corporation.
Julie Sweet, Chair and Chief Executive Officer, Accenture.

Featured Speakers:
Gregory C. Allen, Director, Project on AI Governance and Senior Fellow, Strategic Technologies Program, CSIS.
Mignon Clyburn, Former Commissioner, U.S. Federal Communications Commission.
Karine Perset, Head of AI Unit and OECD.AI, Digital Economy Policy Division, Organisation for Economic Co-operation and Development (OECD).
Helen Toner, Director of Strategy, Center for Security and Emerging Technology, Georgetown University.

This event is made possible through general support to CSIS.

A nonpartisan institution, CSIS is the top national security think tank in the world.
Visit www.csis.org to find more of our work as we bring bipartisan solutions to the world’s greatest challenges.

Want to see more videos and virtual events? Subscribe to this channel and turn on notifications: https://cs.is/2dCfTve.

Fujitsu said it will establish an AI ethics and governance office to ensure the safe and secure deployment of AI technologies.

To be headed by Junichi Arahori, the new office will focus on implementing ethical measures related to the research, development, and implementation of AI and other machine learning applications.

“This marks the next step in Fujitsu’s ongoing efforts to strengthen and enforce comprehensive, company-wide measures to achieve robust AI ethics governance based on international best-practices, policies, and legal frameworks,” the company stated.

Another major player in the cryptocurrency world is forecasting a dismal year for Bitcoin (BTC) in 2022. Following the United States Federal Reserve’s and other central banks’ tightening of liquidity measures, Huobi Research believes that BTC will enter a bear market. On the brighter side, decentralized finance (DeFi) will continue to expand and adapt, with decentralized autonomous organization (DAO) governance eventually becoming a major driver of activity on the chain.



“Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” — Dr. Ian Malcolm in Jurassic Park

Throughout most of human history, the goal has been to build a better life for people. Whether proponents of change admit it or not, they hope to make everything perfect. Yet this very impulse to secure ourselves against every harm and eliminate every physical ill could precipitate just another kind of doom.

To borrow the words of a Jeff Goldblum character, those who have done the most to uplift humanity may have been “so preoccupied with whether they could, they didn’t stop to think if they should.”

In The Demon-Haunted World, Carl Sagan pointed out that the modern world is complicated. Everything we don’t understand becomes something to fear (unless we happen to specialize in it), and something that can be speculated about ignorantly, in a vacuum, as vaccines are by many on social media.

Rather than give up on humanity’s ability to reach sound judgments, Sagan offers the tools of critical thinking in the form of his famous Baloney Detection Kit, a set of rules you can always offer someone who believes nonsensical conspiracy theories.

In Machinia, Damon learns that the robot uprising came about because the weapons of war simply refused to wage it. As the article that follows shows, the UN is already deeply concerned about the deployment of autonomous weapons that operate without human control. #war, #UN


GENEVA — Countries taking part in UN talks on autonomous weapons stopped short of launching negotiations on an international treaty to govern their use, instead agreeing merely to continue discussions.

The International Committee of the Red Cross and several NGOs had been pushing for negotiators to begin work on an international treaty that would establish legally binding new rules on the machine-operated weapons.

Unlike existing semi-autonomous weapons such as drones, fully autonomous weapons have no human-operated “kill switch” and instead leave decisions over life and death to sensors, software, and machine processes.

While AI can provide real-time analysis of enormous amounts of data, an AI system coupled with blockchain technology can also provide a transparent data governance model, enabling quicker validation among stakeholders through smart contracts and DAOs.

Blockchain benefits can address AI’s shortcomings

Applying blockchain technology can address several of AI’s shortcomings and increase people’s trust in AI-based applications. With blockchain, AI applications gain decentralization, distributed data governance, data immutability, transparency, security, and real-time accountability. Many AI-enabled intelligent systems are criticized for weak security and low levels of trust, and blockchain can go a long way toward closing that security and trust deficit. Significant challenges remain for both blockchain and artificial intelligence, but combined they show tremendous potential, complementing each other to restore trust and improve efficiency at scale.
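To make the accountability point concrete, the short sketch below shows, in plain Python, the kind of tamper-evident record such a design relies on. It is an illustration only, built on the standard library with hypothetical names (AuditChain, record_prediction, verify); a real deployment would record these hashes on an actual distributed ledger via smart contracts rather than in a single-process log.

```python
import hashlib
import json
import time

class AuditChain:
    """Minimal hash-chained log of AI model outputs (illustrative sketch only)."""

    def __init__(self):
        self.blocks = []  # each block links to the previous one via its hash

    def record_prediction(self, model_id, inputs, output):
        # Link to the previous block (a string of zeros for the first entry).
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Hash the payload together with the previous hash, so altering any
        # earlier entry invalidates every hash that comes after it.
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append({"payload": payload, "hash": digest})
        return digest

    def verify(self):
        """Recompute every hash and check that the chain links are intact."""
        prev_hash = "0" * 64
        for block in self.blocks:
            payload = block["payload"]
            if payload["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True

chain = AuditChain()
chain.record_prediction("credit-model-v1", {"income": 52000}, {"approved": True})
print(chain.verify())  # True unless a recorded entry has been altered
```

Because each entry’s hash folds in the hash of the previous entry, altering any recorded prediction breaks every later link, which is the immutability and real-time accountability the paragraph above describes.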

✅ Instagram: https://www.instagram.com/pro_robots.

You are on the PRO Robots channel, where we bring you high-tech news. What can Google’s army of robots really do? Can time flow backwards? Catapult rockets and a rocket engine powered by plastic waste. All this and much more in this issue of high-tech news! Watch the video to the end and share your impressions of Google’s new army of robots in the comments.

0:00 In this issue.
0:23 Everyday Robots Project.
1:20 California startup Machina Labs.
2:01 Air taxis try to become part of transportation systems.
2:47 Renault decided to create its own flying car.
3:39 Startup Flytrex.
4:32 Startup SpinLaunch.
5:28 A rocket engine powered by plastic waste.
6:10 NASA launched the DART mission into space.
7:02 Parker Solar Probe.
7:48 A fitness instructor wins a flight on Virgin Galactic’s space plane.
8:24 Quantum experiment by MIT physicists.
9:28 Quantum systems can evolve in two opposite directions.
10:19 Apple to launch its augmented reality headset project.
10:58 The world’s first eye prosthesis fully printed on a 3D printer.
11:38 South Korea announced the creation of a floating city of the future.
12:30 Moscow City Council approved the list of streets available for unmanned transport.
13:15 Russian Post’s SH-350 drone, built by Aeromax, successfully made its first test flight.
14:00 Kalashnikov Concern patented its own version of a miniature electric vehicle.

#prorobots #robots #robot #futuretechnologies #robotics.

More interesting and useful content:
✅ Elon Musk Innovation https://www.youtube.com/playlist?list=PLcyYMmVvkTuQ-8LO6CwGWbSCpWI2jJqCQ
✅ Future Technologies Reviews https://www.youtube.com/playlist?list=PLcyYMmVvkTuTgL98RdT8-z-9a2CGeoBQF
✅ Technology news.

#prorobots #technology #roboticsnews.

One year ago, Google artificial intelligence researcher Timnit Gebru tweeted, “I was fired,” and ignited a controversy over the freedom of employees to question the impact of their company’s technology. On Thursday, she launched a new research institute to ask the questions about responsible use of artificial intelligence that Gebru says Google and other tech companies won’t.

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry.

Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies. Google has said she resigned and was not fired, but acknowledged that it later fired Margaret Mitchell, another researcher who with Gebru co-led a team researching ethical AI. The company placed new checks on the topics its researchers can explore. Google spokesperson Jason Freidenfelds declined to comment but directed WIRED to a recent report on the company’s work on AI governance, which said Google has published more than 500 papers on “responsible innovation” since 2018.