Clearview AI fined €20M for collecting Italians’ biometric data

Italy's data protection authority (the Garante, GPDP) has imposed a fine of €20,000,000 on Clearview AI for operating a biometric monitoring network in Italy without acquiring people's consent.

This decision resulted from a proceeding launched in February 2021, following complaints about GDPR violations stemming directly from Clearview's operations.

More specifically, the investigation revealed that the American facial recognition software company maintains a database of 10 billion images of people's faces, including those of Italians, scraped from public website profiles and online videos.

Tiny switches give solid-state LiDAR record resolution

When Google unveiled its first autonomous cars in 2010, the spinning cylinder mounted on each roof really stood out. It was the vehicle's light detection and ranging (LiDAR) system, which worked like light-based radar. Together with cameras and radar, LiDAR mapped the environment to help these cars avoid obstacles and drive safely.

Since then, inexpensive, chip-based cameras and radar have moved into the mainstream for collision avoidance and autonomous highway driving. Yet LiDAR navigation systems remain unwieldy mechanical devices that cost thousands of dollars.

That may be about to change, thanks to a new type of high-resolution LiDAR chip developed by Ming Wu, professor of electrical engineering and computer sciences and co-director of the Berkeley Sensor and Actuator Center at the University of California, Berkeley. The new design appears Wednesday, March 9, in the journal Nature.

Biological Anchors: A Trick That Might Or Might Not Work

I’ve been trying to review and summarize Eliezer Yudkowsky’s recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra’s talking about and what’s going on.

The Open Philanthropy Project (“Open Phil”) is a big effective altruist foundation interested in funding AI safety. It’s got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce a report on when human-level AI would arrive. It says the resulting document is “informal” — but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it kind of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100.

Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but here he gives Bryan Caplan 50–50 odds on 2030, albeit not totally seriously). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works, sparking a bunch of arguments and counterarguments and even more long essays.

Microsoft Azure ‘AutoWarp’ Bug Could Have Let Attackers Access Customers’ Accounts

Details have been disclosed about a now-addressed critical vulnerability in Microsoft’s Azure Automation service that could have permitted unauthorized access to other Azure customer accounts and allowed attackers to take control of them.

“This attack could mean full control over resources and data belonging to the targeted account, depending on the permissions assigned by the customer,” Orca Security researcher Yanir Tsarimi said in a report published Monday.

The flaw potentially put several entities at risk, including an unnamed telecommunications company, two car manufacturers, a banking conglomerate, and big four accounting firms, among others, the Israeli cloud infrastructure security company added.

Harnessing AI and Robotics to Treat Spinal Cord Injuries

Summary: Researchers have successfully stabilized an enzyme that is able to degrade scar tissue as a result of spinal cord injury with the help of AI and robotics.

Source: Rutgers

By employing artificial intelligence (AI) and robotics to formulate therapeutic proteins, a team led by Rutgers researchers has successfully stabilized an enzyme able to degrade scar tissue resulting from spinal cord injuries and promote tissue regeneration.

The study, recently published in Advanced Healthcare Materials, details the team’s ground-breaking stabilization of the enzyme Chondroitinase ABC (ChABC), offering new hope for patients coping with spinal cord injuries.

What Makes an Effective Research Robot

For researchers, a robot that’s easy to build, maintain, and deploy saves time, energy, and budget for the team’s primary goals.

Researchers consider many factors when selecting a robot, but the most important factor is that the robot enables these teams to prioritize the research.
