Microsoft Launches Measures to Keep Users From Tricking AI Chatbots
Microsoft has unveiled tools designed to prevent users from manipulating artificial intelligence chatbots for malicious purposes. The tech giant rolled out a series of offerings for its Azure AI system, including a tool to block so-called “prompt injection” attacks, according to a Thursday (March 28) blog post. “Prompt injection attacks have emerged as a significant challenge, where malicious actors try […]