
One more. I hope it’s not posted yet. Even AI isn’t safe.


ChatGPT creator OpenAI has confirmed a data breach caused by a bug in an open source library, just as a cybersecurity firm noticed that a recently introduced component is affected by an actively exploited vulnerability.

OpenAI said on Friday that it had taken the chatbot offline earlier in the week while it worked with the maintainers of the Redis data platform to patch a flaw that resulted in the exposure of user information.

The issue stemmed from ChatGPT's use of redis-py, an open source Redis client library, and it was introduced by a change OpenAI made on March 20.
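As OpenAI described the incident, the bug class was a race in redis-py's asynchronous connection handling: a request cancelled after its command was written, but before its reply was read, left that reply sitting on the shared pooled connection, so the next request on the same connection could receive data belonging to another user. Below is a minimal toy sketch of that failure mode; FakeConnection, its queue-based "wire", and the user names are illustrative stand-ins, not redis-py internals.

```python
import asyncio

class FakeConnection:
    """Toy pooled connection: replies come back strictly in send order."""

    def __init__(self):
        self.replies = asyncio.Queue()

    async def send(self, command: str, user: str) -> None:
        await asyncio.sleep(0)                       # simulated network hop
        self.replies.put_nowait(f"data for {user}")  # server queues its reply

    async def recv(self) -> str:
        return await self.replies.get()              # reads the OLDEST unread reply

async def request(conn, user, cancelled_before_read=False):
    await conn.send("GET session", user)  # command written; reply now queued
    if cancelled_before_read:
        return None                       # reply is never read off the wire
    return await conn.recv()

async def main():
    conn = FakeConnection()  # one connection shared by consecutive requests
    await request(conn, "alice", cancelled_before_read=True)  # reply left behind
    print(await request(conn, "bob"))    # prints "data for alice", not bob's data

asyncio.run(main())
```

The general remedy for this class of bug is to discard a connection whose request was cancelled mid-flight instead of returning it to the pool in a desynchronized state.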

The Near–Ultrasound Invisible Trojan, or NUIT, was developed by a team of researchers from the University of Texas at San Antonio and the University of Colorado Colorado Springs as a technique to secretly convey harmful orders to voice assistants on smartphones and smart speakers.

If you watch YouTube videos on your smart TV, that television has a speaker, right? According to Guinevere Chen, associate professor and co-author of the NUIT paper, “the sound of NUIT malicious commands will [be] inaudible, and it can attack your mobile phone as well as communicate with your Google Assistant or Alexa devices,” she said. “That can also happen in Zoom during meetings. If someone unmutes themselves during the meeting, they can embed the attack signal to hack your phone, which is placed next to your computer.”

The attack works by playing sounds close to, but not quite in, the ultrasonic range, so they can still be reproduced by off-the-shelf hardware: a speaker, either one built into the target device or any nearby. If the first malicious instruction mutes the device's responses, subsequent commands, such as opening a door or disarming an alarm system, can then be carried out without any audible warning.
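The reason off-the-shelf speakers can play these signals comes down to the Nyquist limit: consumer audio commonly runs at a 44.1 kHz sample rate, which can reproduce tones up to about 22 kHz, so a carrier near 20 kHz is both playable and above most adults' hearing range. Here is a rough sketch of synthesizing such a near-ultrasound tone with an envelope amplitude-modulated onto it; the frequencies, the sine-wave stand-in for a voice command, and the file name are illustrative choices, not the paper's actual parameters.

```python
import numpy as np
from scipy.io import wavfile

fs = 44100            # common consumer sample rate; Nyquist limit ~22.05 kHz
carrier_hz = 20500    # near-ultrasound: playable by ordinary speakers,
                      # yet above most adults' hearing range
t = np.arange(0, 1.0, 1 / fs)

# Stand-in "command" envelope; an actual attack would use recorded speech.
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))

# Amplitude-modulate the envelope onto the near-ultrasound carrier.
signal = envelope * np.sin(2 * np.pi * carrier_hz * t)

wavfile.write("nuit_sketch.wav", fs, (0.8 * signal * 32767).astype(np.int16))
```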

On the third day of the Pwn2Own hacking contest, security researchers were awarded $185,000 after demonstrating five zero-day exploits targeting Windows 11, Ubuntu Desktop, and the VMware Workstation virtualization software.

The highlight of the day was Ubuntu Desktop being hacked three times by three different teams, although one entry was a collision, with the exploit having been previously known.

The three working Ubuntu zero-days were demoed by Kyle Zeng of ASU SEFCOM (a double-free bug), Mingi Cho of Theori (a use-after-free vulnerability), and Bien Pham (@bienpnn) of Qrious Security.

A nefarious use for AI. Phishing emails.


Security experts have issued a warning over dangerous phishing emails put together by artificial intelligence.

The scams are convincing and help cybercriminals connect with victims before they attack, according to security site CSO.

The AI phishing emails are said to be more convincing than human-written versions because they lack the usual telltale signs of a scam.

Amid a flurry of Google and Microsoft generative AI releases during SXSW last week, Garry Kasparov, chess grandmaster, Avast Security Ambassador, and chairman of the Human Rights Foundation, told me he is less concerned about ChatGPT hacking into home appliances than about users being duped by bad actors.

“People still have the monopoly on evil,” he warned, standing firm on thoughts he shared with me in 2019. Widely considered one of the greatest chess players of all time, Kasparov gained mythic status in the 1990s as world champion when he beat, and then was defeated by, IBM’s Deep Blue supercomputer.


Despite the rapid advancement of generative AI, chess legend Garry Kasparov, now an ambassador for the security firm Avast, explains why he doesn’t fear ChatGPT creating a virus to take down the internet, but shares the concern of Gen’s CTO that text-to-video deepfakes could warp our reality.

Society has a limited amount of time “to figure out how to react” and “regulate” AI, says Sam Altman.

OpenAI CEO Sam Altman has cautioned that his company’s artificial intelligence technology, ChatGPT, poses serious risks as it reshapes society.

He emphasized that regulators and society must be involved with the technology, according to an interview broadcast by ABC News on Thursday night.



The cryptojacking group known as TeamTNT is suspected to be behind a previously undiscovered strain of malware used to mine Monero cryptocurrency on compromised systems.

That’s according to Cado Security, which found the sample after Sysdig detailed a sophisticated attack known as SCARLETEEL aimed at containerized environments to ultimately steal proprietary data and software.

Specifically, the early phase of the attack chain involved the use of a cryptocurrency miner, which the cloud security firm suspected was deployed as a decoy to draw attention away from the data exfiltration.

As computing power and data grow, autonomous agents are becoming more capable. That makes it all the more important for humans to have some say over the policies agents learn, and to be able to check that those policies align with their goals.

Currently, users either 1) hand-design reward functions for desired behaviors or 2) provide extensive labeled data. Both strategies present difficulties and are unlikely to be practical. Designing reward functions that balance competing goals is hard, and agents are vulnerable to reward hacking: exploiting the letter of the reward while defeating its intent. Alternatively, a reward function can be learned from annotated examples, but capturing the subtleties of individual users' tastes and objectives takes enormous amounts of labeled data, which is expensive to collect. Furthermore, the reward function must be redesigned, or the dataset re-collected, for any new user population with different goals.
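As a toy illustration of reward hacking (a hypothetical example, not from the paper): a proxy reward that pays per piece of trash deposited in a bin is maximized by an agent that keeps pulling the same item back out and re-depositing it, while the true objective, an empty room, goes nowhere.

```python
# Toy reward-hacking demo (hypothetical; not from the paper). The proxy
# reward pays 1 per deposit, so the best policy recycles one item forever.
def proxy_reward(action: str) -> int:
    return 1 if action == "deposit" else 0

def true_reward(room_trash: int) -> int:
    return -room_trash  # the designer actually wants an empty room

room_trash, bin_trash, proxy_score = 5, 0, 0
for _ in range(100):
    if bin_trash > 0:  # the exploit: take trash back out of the bin...
        bin_trash -= 1; room_trash += 1; proxy_score += proxy_reward("withdraw")
    room_trash -= 1; bin_trash += 1; proxy_score += proxy_reward("deposit")

print(proxy_score, true_reward(room_trash))  # 100 -4: huge proxy score, dirty room
```

The proxy score climbs without bound while the true objective barely improves, which is exactly the failure mode the research below tries to sidestep.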

New research by Stanford University and DeepMind aims to make it simpler for users to share their preferences, with an interface more natural than writing a reward function and a cost-effective way to specify those preferences from only a handful of examples. Their work uses large language models (LLMs) that have been trained on massive amounts of internet text and have proven adept at learning in context from few or no training examples. According to the researchers, LLMs are excellent contextual learners because their training data is large enough to encode important commonsense priors about human behavior.
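A hedged sketch of that general recipe follows; the prompt template, the query_llm helper, and the example strings are hypothetical placeholders, not the paper's actual interface. A few user-labeled examples go into the prompt, and the LLM's yes/no judgment on a new outcome serves as a binary reward signal.

```python
# Hypothetical sketch of using an LLM as a few-shot reward model.
# query_llm stands in for any text-completion API; it is not a real library call.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM completion call here")

FEW_SHOT = """Does the outcome match the user's goal? Answer Yes or No.

Goal: tidy the desk. Outcome: papers stacked, pens in the drawer. Answer: Yes
Goal: tidy the desk. Outcome: papers shoved onto the floor. Answer: No
"""

def llm_reward(goal: str, outcome: str) -> float:
    """Binary reward from the LLM's in-context judgment of a new outcome."""
    prompt = FEW_SHOT + f"Goal: {goal}. Outcome: {outcome}. Answer:"
    return 1.0 if query_llm(prompt).strip().lower().startswith("yes") else 0.0
```

Swapping in a different user's handful of examples changes the encoded preferences without redesigning a reward function or re-collecting a large labeled dataset.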