BLOG

Mar 28, 2022

AI, the brain, and cognitive plausibility

Posted by in categories: business, robotics/AI

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton of DeepMind titled “Reward Is Enough.” The authors argue that “maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence.” However, reward is not enough. The claim is simplistic, vague, and circular, and it explains little, because it is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: if you or your team find yourselves talking about how intelligent or cognitively plausible your solution is, and I see this kind of argument quite a bit, then you are not thinking enough about a specific problem or the people affected by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because appealing to it reflects the wrong culture. Real-world problem solving means solving the problems the world presents to intelligence, and those solutions are never cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal, and you seek an optimal solution to accomplish that goal, then how “cognitively plausible” a solution is does not matter. Whether a problem is solved always matters more than how it is solved, and if you don’t care how, you can solve just about anything. The goal itself, and how optimal a solution is for the problem, count for more than how the goal is accomplished, whether the solution was self-referencing, or what a solution looked like after you failed to solve the problem.
