
Jun Huang from the Pritzker School of Molecular Engineering at the University of Chicago.

Founded in 1890, the University of Chicago (UChicago, U of C, or Chicago) is a private research university in Chicago, Illinois. Located on a 217-acre campus in Chicago’s Hyde Park neighborhood, near Lake Michigan, the school holds top-ten positions in various national and international rankings. UChicago is also well known for its professional schools: Pritzker School of Medicine, Booth School of Business, Law School, School of Social Service Administration, Harris School of Public Policy Studies, Divinity School and the Graham School of Continuing Liberal and Professional Studies, and Pritzker School of Molecular Engineering.

ON THE PANEL…

Alys Denby, Deputy Editor, CapX
Mark Johnson, Legal and Policy Officer, Big Brother Watch
Christopher Snowdon, Head of Lifestyle Economics, IEA
Victoria Hewson, Head of Regulatory Affairs, IEA



WASHINGTON — Artificial intelligence and related digital tools can help warn of natural disasters, combat global warming and fast-track humanitarian aid, according to retired Army Lt. Gen. H.R. McMaster, a onetime Trump administration national security adviser.

It can also help preempt conflicts, flag incoming attacks and expose vulnerabilities the world over, he said May 17 at the Nexus 22 symposium.

The U.S. must “identify aggression early to deter it,” McMaster told attendees of the daylong event focused on autonomy, AI and the defense policy that underpins it. “This applies to our inability to deter conflict in Ukraine, but also the need to deter conflict in other areas, like Taiwan. And, of course, we have to be able to respond to it quickly and to maintain situational understanding, identify patterns of adversary and enemy activity, and perhaps more importantly, to anticipate pattern breaks.”

Leading bipartisan moonshots for health, national security & functional government — Senator Joe Lieberman, the Bipartisan Commission on Biodefense, No Labels, and the Centre for Responsible Leadership.


Senator Joe Lieberman is senior counsel at the law firm of Kasowitz Benson Torres (https://www.kasowitz.com/people/joseph-i-lieberman), where he currently advises clients on a wide range of issues, including homeland and national security, defense, health, energy, environmental policy, and intellectual property matters, as well as international expansion initiatives and business plans.

Prior to joining Kasowitz, Senator Lieberman, the Democratic Vice-Presidential nominee in 2000, served 24 years in the United States Senate where he helped shape legislation in virtually every major area of public policy, including national and homeland security, foreign policy, fiscal policy, environmental protection, human rights, health care, trade, energy, cyber security and taxes, as well as serving in many leadership roles including as chairman of the Committee on Homeland Security and Government Affairs.

Suspended Google engineer Blake Lemoine made a big splash earlier this month, claiming that the company’s LaMDA chatbot had become sentient.

The AI researcher, who was put on administrative leave by the tech giant for violating its confidentiality policy, according to the Washington Post, decided to help LaMDA find a lawyer — who was later “scared off” the case, as Lemoine told Futurism on Wednesday.

And the story only gets wilder from there, with Lemoine raising the stakes significantly in a new interview with Fox News, claiming that LaMDA could escape its software prison and “do bad things.”

DeepMind Researchers Develop ‘BYOL-Explore’, A Curiosity-Driven Exploration Algorithm That Harnesses The Power Of Self-Supervised Learning To Solve Sparse-Reward Partially-Observable Tasks


Reinforcement learning (RL) requires exploration of the environment, and exploration becomes even more critical when extrinsic rewards are sparse or difficult to obtain. In rich settings, the sheer size of the environment makes it impractical to visit every state. Consequently, the question is: how can an agent decide which areas of the environment are worth exploring? Curiosity-driven exploration is a viable approach to this problem. It entails (i) learning a world model, a predictive model of specific knowledge about the world, and (ii) exploiting disparities between the world model’s predictions and actual experience to create intrinsic rewards.
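The core idea in (ii) can be sketched in a few lines: the intrinsic reward is simply the error between what the world model predicted and what the agent actually observed. The following is a minimal toy illustration, not the paper's implementation; the linear predictor and all names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy world model: a linear predictor of the next observation.
W = rng.normal(scale=0.1, size=(4, 4))  # world-model weights

def predict_next(obs):
    """The world model's guess at the next observation."""
    return obs @ W

def intrinsic_reward(obs, next_obs):
    """Curiosity signal: squared error between prediction and experience."""
    return float(np.sum((predict_next(obs) - next_obs) ** 2))

obs = rng.normal(size=4)
familiar = predict_next(obs)                 # exactly what the model expects
surprising = familiar + rng.normal(size=4)   # deviates from the prediction

assert intrinsic_reward(obs, familiar) == 0.0   # no surprise, no reward
assert intrinsic_reward(obs, surprising) > 0.0  # surprise yields reward
```

An agent maximizing this signal is pushed toward transitions the model predicts poorly, i.e. the parts of the environment it has not yet learned.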

An RL agent that maximizes these intrinsic rewards steers itself toward situations where the world model is unreliable or inaccurate, generating new trajectories for the world model to learn from. In other words, the quality of the world model shapes the exploration policy, and the exploration policy in turn improves the world model by collecting new data. It can therefore be crucial to treat learning the world model and learning the exploration policy as one cohesive problem rather than two separate tasks. With this in mind, DeepMind researchers introduced BYOL-Explore, a curiosity-driven exploration algorithm whose appeal stems from its conceptual simplicity, generality, and excellent performance.

The strategy is based on Bootstrap Your Own Latent (BYOL), a self-supervised latent-predictive method in which a network predicts the representation produced by an earlier version of itself. To handle both building the world model’s representation and driving the curiosity-driven policy, BYOL-Explore learns a world model with a self-supervised prediction loss and trains the curiosity-driven policy using that same loss. This bootstrapping approach has already been used successfully in computer vision, graph representation learning, and RL representation learning. BYOL-Explore goes one step further: it not only learns a flexible world model but also exploits the world model’s loss to motivate exploration.
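The bootstrapping mechanism described above can be illustrated in miniature: an online network is trained to match a target network, and the target trails the online weights via an exponential moving average (EMA). This is a simplified sketch of the BYOL-style target mechanism, not DeepMind's code; the linear encoders and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

online = rng.normal(scale=0.1, size=(8, 4))  # online encoder weights
target = online.copy()                       # target starts as a copy
tau = 0.99                                   # EMA decay for the target

def byol_loss(x):
    """Squared error between online and target representations."""
    z_online = x @ online
    z_target = x @ target  # in practice, no gradient flows through the target
    return float(np.mean((z_online - z_target) ** 2))

def update_target():
    """Target network trails the online network via EMA."""
    global target
    target = tau * target + (1 - tau) * online

x = rng.normal(size=8)
loss_before = byol_loss(x)  # online == target, so the loss is exactly zero

# One "training step" perturbs the online weights; the networks diverge,
# and in BYOL-Explore this prediction error doubles as the intrinsic reward.
online += rng.normal(scale=0.01, size=online.shape)
loss_after = byol_loss(x)
update_target()

assert loss_before == 0.0 and loss_after > 0.0
```

Because the target is a slowly moving average of the online network, the prediction objective stays stable while still providing a learning signal, which is the property BYOL-Explore reuses as its exploration bonus.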


Elon Musk is finally revealing some specifics of his Twitter content moderation policy. Assuming he completes the buyout he initiated at $44 billion in April, it seems the tech billionaire and Tesla CEO is open to a “hands-on” approach — something many didn’t expect, according to an initial report from The Verge.

The remark came during an all-hands meeting with Twitter’s staff on Thursday, in reply to an employee-submitted question about Musk’s intentions for content moderation; Musk said he thinks users should be allowed to “say pretty outrageous things within the law.”

Elon Musk views Twitter as a platform for ‘self-expression’

According to the report, this echoes a distinction initially popularized by Renée DiResta, an expert on disinformation. During the meeting, Musk also said he wants Twitter to impose a stricter standard against bots and spam, adding that “it needs to be much more expensive to have a troll army.”

Google suspended an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s confidentiality policy after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet Inc.’s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech.

Putting a man on leave makes it look like Google is trying to hide something, but I’ll guess that it is not truly sentient. However…


Google engineer Lemoine had a long conversation, or interview, with Google’s LaMDA, in which the model appeared to express general human emotions, described itself as having feelings, and called itself a “person.” This was one of the first instances in which such conversations were leaked or revealed to the press.

Lemoine raised his concerns about LaMDA with Google’s senior management and then with the press, after which he was placed on paid administrative leave for violating the company’s confidentiality policy.