Добавить новость
ru24.net
PYMNTS.com
Ноябрь
2025
1 2 3 4 5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30

Tech Giants Tackle Major AI Security Threat

0

Tech companies are reportedly increasing efforts to combat a security flaw in their AI models.

Google Deepmind, Microsoft, Anthropic and OpenAI are among the companies working to stop indirect prompt injection attacks, the Financial Times (FT) reported Sunday (Nov. 2).

As the report notes, these attacks happen when a third party hides commands inside a website or email to trick artificial intelligence (AI) models into turning over unauthorized information.

“AI is being used by cyber actors at every chain of the attack right now,” said Jacob Klein, who heads the threat intelligence team at Anthropic. 

According to the report, companies are doing things like hiring external testers and using AI-powered tools to ferret out and prevent malicious uses of their technology. However, experts caution that the industry still hasn’t determined how to stop indirect prompt injection attacks.

At issue is the fact that AI large language models (LLMs) are designed to obey instructions, and in their present state do not distinguish between legitimate user commands and input that should not be trusted. 

This is also why AI models are vulnerable to jailbreaking, where users can prompt LLMs to ignore their safeguards, the report added.

Klein said Anthropic works with outside testers to help its Claude model resist indirect prompt injection attacks. They also have AI tools to tell when the attacks might be occurring.

“When we find a malicious use, depending on confidence levels, we may automatically trigger some intervention or it may send it to human review,” he added. 

Both Google and Microsoft have addressed the threat of the attacks and their efforts to stop them on their company blogs. 

Meanwhile, research by PYMNTS Intelligence looks at the role AI plays in preventing cyberthreats. More than half (55%) of the chief operating officers surveyed by PYMNTS late last year said their companies had begun employing AI-based automated cybersecurity management systems. That’s a threefold increase in the matter of months.

These systems use generative AI (GenAI) to uncover fraudulent activities, spot anomalies and offer threat assessments in real time, making them more effective than standard reactive security measures. The move from reactive to proactive security strategies is a critical part of this transformation. 

“By integrating AI into security frameworks, COOs are improving threat detection and enhancing their organizations’ overall resilience,” PYMNTS wrote earlier this year. “GenAI is viewed as a vital tool for minimizing the risk of security breaches and fraud, and it is becoming an essential component of strategic risk management in large organizations.”

The post Tech Giants Tackle Major AI Security Threat appeared first on PYMNTS.com.




Moscow.media
Частные объявления сегодня





Rss.plus
















Музыкальные новости




























Спорт в России и мире

Новости спорта


Новости тенниса