BLOG

Dec 7, 2020

How banks use AI to catch criminals and detect bias

Posted in categories: finance, information science, robotics/AI, terrorism

Imagine an algorithm that reviews thousands of financial transactions every second and flags the fraudulent ones. Advances in artificial intelligence in recent years have made this possible, and it is an attractive value proposition for banks, which process enormous volumes of daily transactions while facing a growing challenge in fighting financial crime: money laundering, terrorism financing, and corruption.
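To make the idea concrete, here is a minimal sketch of how a single transaction might be flagged as suspicious. The field names, the z-score rule, and the thresholds are illustrative assumptions, not any bank's actual system; production systems rely on trained models over far richer features.

```python
# Minimal sketch of transaction-level fraud flagging (illustrative only).
# Field names, thresholds, and the scoring rule are assumptions, not a
# description of any real bank's pipeline.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    customer_id: str
    amount: float

def flag_suspicious(history: list[float], txn: Transaction, z_cutoff: float = 4.0) -> bool:
    """Flag a transaction whose amount is far outside the customer's usual range."""
    if len(history) < 10:            # too little history to judge statistically
        return txn.amount > 10_000   # fall back to a fixed limit (assumed value)
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return txn.amount > mu * 5
    z = (txn.amount - mu) / sigma
    return z > z_cutoff

# Usage: score an incoming payment against the customer's past amounts.
past = [42.0, 18.5, 63.0, 25.0, 80.0, 30.0, 55.0, 47.5, 22.0, 68.0]
print(flag_suspicious(past, Transaction("c-1001", 5_400.0)))  # True: extreme outlier
print(flag_suspicious(past, Transaction("c-1001", 52.0)))     # False: typical amount
```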

The benefits of artificial intelligence, however, do not come for free. Companies that use AI to detect and prevent crime also face new challenges, such as algorithmic bias, which arises when an AI system systematically disadvantages people of a particular gender, ethnicity, or religion. In recent years, poorly controlled algorithmic bias has damaged the reputations of the companies deploying it, so it is essential to remain alert to its presence.
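One common way to stay alert to such bias is to measure how a model's decisions differ across groups. The sketch below computes per-group approval rates and a disparate-impact ratio; the group labels, the synthetic decisions, and the 0.8 "four-fifths" threshold are assumptions for illustration, not a complete fairness audit.

```python
# Minimal sketch of a bias check on model decisions (illustrative only).
# Group labels, decisions, and the 0.8 threshold are assumed example values.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Usage: flag the model for review if the ratio falls below 0.8.
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(outcomes)
print(rates)                          # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates) < 0.8)  # True -> potential bias, investigate further
```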

For instance, in 2019, the algorithm behind Apple’s credit card was accused of offering women lower credit limits than men, causing a PR backlash against the company. In 2018, Amazon shut down an AI-powered hiring tool after it showed bias against women.
