Swiss researchers find security flaws in AI models
Artificial intelligence (AI) models can be manipulated despite existing safeguards. With targeted attacks, scientists in Lausanne have been able to trick these systems into generating dangerous or ethically dubious content.

Today's large language models (LLMs) have remarkable capabilities that can nevertheless be misused. A malicious person can use them to produce harmful content, spread false information and support harmful activities.

Of the AI models tested...