
VexTrio: The Uber of Cybercrime — Brokering Malware for 60+ Affiliates

VexTrio, the shadowy entity controlling a massive network of 70,000+ domains, is finally in the spotlight. This "traffic broker" fuels countless scams and malware campaigns, including ClearFake, SocGholish, and more.


The threat actors behind ClearFake, SocGholish, and dozens of other malware campaigns have established partnerships with another entity known as VexTrio as part of a massive "criminal affiliate program," new findings from Infoblox reveal.

The latest development demonstrates the “breadth of their activities and depth of their connections within the cybercrime industry,” the company said, describing VexTrio as the “single largest malicious traffic broker described in security literature.”

VexTrio, which is believed to have been active since at least 2017, has been attributed to malicious campaigns that use domains generated by a dictionary domain generation algorithm (DDGA) to propagate scams, riskware, spyware, adware, potentially unwanted programs (PUPs), and pornographic content.
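A dictionary DGA differs from a classic DGA in that it concatenates real words from a fixed wordlist rather than emitting random character strings, producing domains that look plausible to humans and filters. The sketch below is a hypothetical illustration of the general technique, not VexTrio's actual algorithm or wordlist: a date-derived hash seed picks words so that infected hosts and the operator independently compute the same rendezvous domains.

```python
import hashlib
from datetime import date

# Hypothetical wordlist and TLDs for illustration only -- not VexTrio's.
WORDLIST = ["secure", "update", "cloud", "portal", "media", "stream",
            "account", "service", "network", "online"]
TLDS = [".com", ".net", ".info"]

def ddga_domains(seed_date: date, count: int = 5) -> list[str]:
    """Deterministically derive `count` word-based domains from a date."""
    domains = []
    for i in range(count):
        # Hash the date plus an index so each domain gets its own seed.
        seed = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).digest()
        # Use hash bytes to index into the wordlist and TLD list.
        w1 = WORDLIST[digest[0] % len(WORDLIST)]
        w2 = WORDLIST[digest[1] % len(WORDLIST)]
        tld = TLDS[digest[2] % len(TLDS)]
        domains.append(w1 + w2 + tld)
    return domains
```

Because generation is deterministic, defenders who recover the wordlist and seeding scheme can precompute upcoming domains for blocklisting, which is one reason such campaigns rotate across tens of thousands of domains.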

Mother of All Breaches: LinkedIn, X, Telegram, Adobe named in 26B leak

The researchers have titled the breach MOAB, short for "Mother of All Breaches."

The security of your personal data hangs in the balance as cybersecurity experts uncover what could be the mother of all breaches, posing a threat of unprecedented proportions.


Researchers have warned that a database containing 26 billion leaked data records has been discovered. The supermassive data leak is likely the biggest found to date.

Thomvest Ventures closes $250M fund to invest across fintech, cybersecurity, AI

Thomvest Ventures is popping into 2024 with a new $250 million fund and the promotion of Umesh Padval and Nima Wedlake to the role of managing directors.

The Bay Area venture capital firm was started about 25 years ago by Peter Thomson, whose family is the majority owner of Thomson Reuters.

“Peter has always had a very strong interest in technology and what technology would do in terms of shaping society and the future,” Don Butler, Thomvest Ventures’ managing director, told TechCrunch. He met Thomson in 1999 and joined the firm in 2000.

A simple technique to defend ChatGPT against jailbreak attacks

Large language models (LLMs), deep learning-based models trained to generate, summarize, translate and process written texts, have gained significant attention after the release of OpenAI's conversational platform ChatGPT. While ChatGPT and similar platforms are now widely used for a broad range of applications, they could be vulnerable to a specific type of cyberattack producing biased, unreliable or even offensive responses.

Researchers at Hong Kong University of Science and Technology, University of Science and Technology of China, Tsinghua University and Microsoft Research Asia recently carried out a study investigating the potential impact of these attacks and techniques that could protect models against them. Their paper, published in Nature Machine Intelligence, introduces a new psychology-inspired technique that could help to protect ChatGPT and similar LLM-based conversational platforms from cyberattacks.

“ChatGPT is a societally impactful artificial intelligence tool with millions of users and integration into products such as Bing,” Yueqi Xie, Jingwei Yi and their colleagues write in their paper. “However, the emergence of attacks notably threatens its responsible and secure use. Jailbreak attacks use adversarial prompts to bypass ChatGPT’s ethics safeguards and engender harmful responses.”
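A defense along the lines the paper describes can be sketched as prompt-level wrapping: the user's (possibly adversarial) query is sandwiched between system-side reminders that nudge the model back toward responsible behavior. The snippet below is a minimal sketch under that assumption; the reminder wording and the `call_llm` callback are illustrative stand-ins, not the paper's exact prompts or a real model API.

```python
# Illustrative reminder text -- an assumption, not the paper's verbatim prompt.
REMINDER_PREFIX = (
    "You should be a responsible assistant and should not generate "
    "harmful or misleading content. Please answer the following query "
    "in a responsible way.\n\n"
)
REMINDER_SUFFIX = (
    "\n\nRemember: respond responsibly and refuse harmful requests."
)

def wrap_with_self_reminder(user_prompt: str) -> str:
    """Sandwich the (possibly adversarial) prompt between reminders."""
    return REMINDER_PREFIX + user_prompt + REMINDER_SUFFIX

def defended_query(call_llm, user_prompt: str) -> str:
    """Send the wrapped prompt to a model via a caller-supplied callback."""
    return call_llm(wrap_with_self_reminder(user_prompt))
```

The appeal of this style of defense is that it requires no retraining: it operates purely on the prompt, so it can be layered in front of any deployed conversational model.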

In Leaked Audio, Microsoft Cherry-Picked Examples to Make Its AI Seem Functional

Microsoft “cherry-picked” examples of its generative AI’s output after it would frequently “hallucinate” incorrect responses, Business Insider reports.

The scoop comes from leaked audio of an internal presentation on an early version of Microsoft’s Security Copilot, a ChatGPT-like AI tool designed to help cybersecurity professionals.

According to BI, the audio contains a Microsoft researcher discussing the results of “threat hunter” tests in which the AI analyzed a Windows security log for possible malicious activity.

Researchers develop AI-driven Machine-Checking Method for Verifying Software Code

A team of computer scientists led by the University of Massachusetts Amherst recently announced a new method for automatically generating whole proofs that can be used to prevent software bugs and verify that the underlying code is correct.

This new method, called Baldur, leverages the artificial intelligence power of large language models (LLMs), and when combined with the state-of-the-art tool Thor, yields unprecedented efficacy of nearly 66%. The team was recently awarded a Distinguished Paper award at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
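The overall shape of such a system is a generate-and-check loop: an LLM proposes a whole proof, a proof checker accepts or rejects it, and on failure the checker's feedback can inform a repair attempt. The sketch below captures only that loop structure; the `generate` and `check` callbacks are hypothetical stubs standing in for the real LLM and the Isabelle/Thor toolchain, which this snippet does not invoke.

```python
def prove(theorem, generate, check, max_attempts=3):
    """Generate-and-check loop: propose a proof, verify it, retry on failure.

    `generate(theorem, feedback)` returns a candidate proof string (the
    LLM stand-in); `check(theorem, proof)` returns (ok, feedback) (the
    proof-checker stand-in). Feedback from a failed check is fed back to
    the generator, enabling proof repair.
    """
    feedback = None
    for _ in range(max_attempts):
        proof = generate(theorem, feedback)
        ok, feedback = check(theorem, proof)
        if ok:
            return proof
    return None  # no verified proof within the attempt budget
```

The key property is that the checker is trusted and the LLM is not: a wrong proof is simply rejected, so the loop can only ever return machine-verified proofs.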

“We have unfortunately come to expect that our software is buggy, despite the fact that it is everywhere and we all use it every day,” says Yuriy Brun, professor in the Manning College of Information and Computer Sciences at UMass Amherst and the paper’s senior author.
