Dec 16, 2021

Neural networks can hide malware, and scientists are worried

Posted in categories: cybercrime/malcode, robotics/AI

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

With their millions or even billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text—and hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.
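The general idea can be illustrated with a short steganographic sketch: stuff payload bytes into the three low-order bytes of each 32-bit float parameter, leaving the sign bit and most of the exponent intact so every value stays in a plausible range. This is an assumed, simplified illustration of byte-level parameter steganography, not the EvilModel paper's exact encoding; the function names and the three-bytes-per-weight layout are hypothetical.

```python
import struct

def embed_payload(weights, payload):
    """Hide payload bytes in the 3 low-order bytes of each float32 weight.

    The untouched high byte holds the sign bit and most of the exponent,
    so each modified weight keeps roughly its original magnitude.
    (Illustrative sketch only; not the paper's exact scheme.)
    """
    stego = []
    it = iter(payload)
    for w in weights:
        b = bytearray(struct.pack("<f", w))  # little-endian: bytes 0-2 are low-order
        for i in range(3):
            try:
                b[i] = next(it)
            except StopIteration:
                break
        stego.append(struct.unpack("<f", bytes(b))[0])
    return stego

def extract_payload(stego, n_bytes):
    """Recover n_bytes of hidden data from the modified weights."""
    out = bytearray()
    for w in stego:
        out.extend(struct.pack("<f", w)[:3])
        if len(out) >= n_bytes:
            break
    return bytes(out[:n_bytes])
```

Because the payload never appears as a contiguous byte string on disk, signature-based scanners that inspect the model file see only slightly perturbed floating-point weights.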

Their malware-hiding technique, EvilModel, sheds light on the security concerns of deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against the emerging threats it poses.
