
Nov 25, 2020

Artificial Intelligence Is Now Smart Enough to Know When It Can’t Be Trusted

Posted in categories: biotech/medical, military, robotics/AI

Artificial intelligence is being developed that can analyze whether its own decisions or predictions are reliable.

…An AI that can recognize and assess its own weaknesses. In practice, this should help the people relying on it, whether doctors or the passengers of an autonomous vehicle, quickly gauge the risk involved.


How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors against one another, spotting patterns in masses of data that humans don't have the capacity to analyze.

While Skynet might still be some way off, AI is already making decisions in fields that affect human lives, like autonomous driving and medical diagnosis, and that means these systems need to be as accurate as possible. Toward this goal, the newly created neural network system generates a confidence level alongside each of its predictions.
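The article doesn't spell out the researchers' exact method, but a common way to get this kind of self-reported confidence is to have the network output two values per input: the prediction itself and an estimated variance, trained with a Gaussian negative log-likelihood so that admitting uncertainty is the cheapest way to handle inputs it gets wrong. The sketch below is illustrative only; the network sizes, toy dataset, and names like `UncertaintyAwareNet` are assumptions, not the published model.

```python
# Minimal sketch (not the authors' exact method) of an uncertainty-aware
# regression network: alongside each prediction it outputs a log-variance,
# trained with a Gaussian negative log-likelihood so the network learns to
# report low confidence wherever its errors are large. Assumes PyTorch.
import torch
import torch.nn as nn

class UncertaintyAwareNet(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)     # the prediction itself
        self.log_var_head = nn.Linear(hidden, 1)  # its estimated log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of target under N(mean, exp(log_var)):
    # large errors are cheap only if the network also admits high variance.
    return (0.5 * (log_var + (target - mean) ** 2 / log_var.exp())).mean()

if __name__ == "__main__":
    # Toy data: y = sin(x) plus noise that grows with |x|, so the network
    # should learn to predict higher variance toward the edges.
    x = torch.linspace(-3, 3, 512).unsqueeze(1)
    y = torch.sin(x) + 0.05 * x.abs() * torch.randn_like(x)

    net = UncertaintyAwareNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        mean, log_var = net(x)
        loss = gaussian_nll(mean, log_var, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        mean, log_var = net(x)
        std = log_var.exp().sqrt()  # predicted standard deviation per input
    print(f"mean abs error: {(mean - y).abs().mean():.3f}")
    print(f"avg predicted std: {std.mean():.3f}")
```

A downstream system, a doctor reviewing a diagnosis or a self-driving car's planner, could then threshold the predicted standard deviation and defer to a human whenever the model flags its own output as unreliable.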
