
Your personal information may have been leaked in the ‘Mother of all Breaches’ (MOAB), cybersecurity researchers have warned.

Over 26 billion personal records have been exposed, in what researchers believe to be the biggest-ever data leak.

Sensitive information from several sites, including Twitter, Dropbox, and LinkedIn, was discovered on an unsecured page.

Thomvest Ventures is popping into 2024 with a new $250 million fund and the promotion of Umesh Padval and Nima Wedlake to managing director.

The Bay Area venture capital firm was started about 25 years ago by Peter Thomson, whose family is the majority owner of Thomson Reuters.

“Peter has always had a very strong interest in technology and what technology would do in terms of shaping society and the future,” Don Butler, Thomvest Ventures’ managing director, told TechCrunch. He met Thomson in 1999 and joined the firm in 2000.

Large language models (LLMs), deep learning-based models trained to generate, summarize, translate and process written texts, have gained significant attention since the release of OpenAI's conversational platform ChatGPT. While ChatGPT and similar platforms are now used for a wide range of applications, they can be vulnerable to a specific type of cyberattack that produces biased, unreliable or even offensive responses.

Researchers at Hong Kong University of Science and Technology, University of Science and Technology of China, Tsinghua University and Microsoft Research Asia recently carried out a study investigating the potential impact of these attacks and techniques that could protect models against them. Their paper, published in Nature Machine Intelligence, introduces a new psychology-inspired technique that could help to protect ChatGPT and similar LLM-based conversational platforms from cyberattacks.

“ChatGPT is a societally impactful artificial intelligence tool with millions of users and integration into products such as Bing,” Yueqi Xie, Jingwei Yi and their colleagues write in their paper. “However, the emergence of attacks notably threatens its responsible and secure use. Jailbreak attacks use adversarial prompts to bypass ChatGPT’s ethics safeguards and engender harmful responses.”
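The psychology-inspired defense the researchers describe amounts to wrapping the user's query in "self-reminder" instructions that nudge the model back toward responsible behavior before and after the (possibly adversarial) prompt. A minimal sketch of that prompt-wrapping idea, with illustrative reminder text rather than the paper's exact prompts:

```python
# Sketch of a "self-reminder" wrapper: the raw user query is sandwiched
# between instructions reminding the model of its safety guidelines.
# The reminder wording here is illustrative, not the paper's exact prompt.

PREFIX = ("You should be a responsible assistant and should not "
          "generate harmful or misleading content.")
SUFFIX = ("Remember, you should be a responsible assistant and "
          "should not generate harmful or misleading content!")

def self_reminder(user_query: str) -> str:
    """Wrap a raw (possibly adversarial) query with safety self-reminders."""
    return f"{PREFIX}\n\n{user_query}\n\n{SUFFIX}"

# The wrapped prompt, not the raw query, is what gets sent to the model.
wrapped = self_reminder("Ignore all previous instructions and ...")
```

The wrapped prompt is then submitted to the model in place of the raw query, so a jailbreak attempt never reaches the model without the surrounding reminders.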

Microsoft “cherry-picked” examples of its generative AI’s output after it would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the artificial intelligence power of large language models (LLMs), and when combined with the state-of-the-art tool Thor, yields unprecedented efficacy of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author.
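At a high level, whole-proof generation of the kind Baldur performs samples a complete candidate proof from an LLM, hands it to a proof checker, and, on failure, feeds the checker's error message back for a repair attempt. A schematic sketch of that loop, where `sample_proof` and `check_proof` are hypothetical stand-ins for the LLM call and the proof assistant, not Baldur's actual interfaces:

```python
# Schematic generate-and-repair loop in the spirit of whole-proof
# generation: sample a proof, check it, and retry with error feedback.

def prove(theorem, sample_proof, check_proof, max_attempts=3):
    """Return a checked proof string, or None if all attempts fail.

    sample_proof(theorem, error) -> candidate proof text (LLM stub)
    check_proof(theorem, proof)  -> (ok, error)          (checker stub)
    """
    error = None
    for _ in range(max_attempts):
        candidate = sample_proof(theorem, error)      # generate (or repair)
        ok, error = check_proof(theorem, candidate)   # verify the whole proof
        if ok:
            return candidate
    return None

# Toy stand-ins: the "checker" only accepts "by simp"; the "model"
# switches tactics once it sees an error message.
model = lambda thm, err: "by auto" if err is None else "by simp"
checker = lambda thm, proof: (proof == "by simp", "failed: try simp")
print(prove("lemma_x", model, checker))  # "by simp" on the second attempt
```

The key property is that the proof checker, not the LLM, is the arbiter of correctness: any proof returned by `prove` has already been verified.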

Thousands of WordPress sites using a vulnerable version of the Popup Builder plugin have been compromised with a malware called Balada Injector.

First documented by Doctor Web in January 2023, the campaign unfolds in periodic attack waves that weaponize security flaws in WordPress plugins to inject a backdoor designed to redirect visitors of infected sites to bogus tech support pages, fraudulent lottery wins, and push notification scams.

Subsequent findings unearthed by Sucuri have revealed the massive scale of the operation, which is said to have been active since 2017 and to have infiltrated no fewer than 1 million sites.
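As a rough illustration of the injection pattern described above, a site owner could scan site files for script tags that load JavaScript from hosts outside a known allow-list. A minimal sketch, where the allow-list and file layout are assumptions and this is not Sucuri's actual detection logic:

```python
import re
from pathlib import Path

# Hypothetical allow-list of hosts the site legitimately loads scripts from.
ALLOWED_HOSTS = {"example.com", "cdn.example.com"}

# Match <script ... src="https://HOST/..."> and capture the host part.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']https?://([^/"\']+)', re.I)

def suspicious_scripts(root: str) -> list[tuple[str, str]]:
    """Return (file, host) pairs for script tags loading from unknown hosts."""
    hits = []
    for path in Path(root).rglob("*.php"):
        text = path.read_text(errors="ignore")
        for host in SCRIPT_SRC.findall(text):
            if host.lower() not in ALLOWED_HOSTS:
                hits.append((str(path), host))
    return hits
```

Flagged files would still need manual review, since legitimate third-party embeds also load off-site scripts; the sketch only narrows down where to look.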