Swiss study finds language distorts ChatGPT information on armed conflicts
New research shows that when ChatGPT is asked in Arabic about the number of civilians killed in the Middle East conflict, it gives significantly higher casualty figures than when the same prompt is written in Hebrew. Researchers say these systematic discrepancies can reinforce biases around armed conflicts and encourage information bubbles.

Every day, millions of people engage with and seek information from ChatGPT and other large language models (LLMs).