Researchers find LLMs are easy to manipulate into giving harmful information

A team of AI researchers at Amazon's AWS AI Labs has found that most, if not all, publicly available large language models (LLMs) can be easily tricked into revealing dangerous or unethical information.