Artificial intelligence trained to deceive humans: 'Master of deception' in strategy game
(STUDY FINDS) -- Artificial intelligence systems are becoming increasingly sophisticated, with engineers and developers working to make them as "human" as possible. Unfortunately, that can also mean lying just like a person. AI platforms are reportedly learning to deceive us in ways that can have far-reaching consequences. A new study by researchers from the Center for AI Safety in San Francisco delves into the world of AI deception, exposing the risks and offering potential solutions to this growing problem.
At its core, deception is the inducement of false beliefs in others in pursuit of some goal other than the truth. When humans engage in deception, we can usually explain it in terms of their beliefs and desires -- they want the listener to believe something false because it benefits them in some way. But can we say the same about AI systems?
The study, published in the open-access journal Patterns, argues that the philosophical debate about whether AIs truly have beliefs and desires is less important than the observable fact that they are increasingly exhibiting deceptive behaviors that would be concerning if displayed by a human.
