AI Becomes Both Tool and Target in Cybersecurity

The Prompt Economy turned its focus to security this past week, as companies rushed to deploy the year's hottest technology during its hottest commercial season. And while many companies keep a human in the loop for security purposes, OpenAI advanced a solution last week built on the premise that agents can help secure their own software.

Here are the details. The company introduced a beta version of a solution called Aardvark. Powered by GPT-5, it acts as an autonomous “security researcher” that continuously scans source code to identify and fix vulnerabilities in real time. Unlike traditional methods such as fuzzing or static analysis, Aardvark uses large language model–based reasoning to understand how code behaves, determine where it might break, and propose targeted fixes. The system integrates directly with GitHub and OpenAI Codex, reviewing every code commit, running sandboxed validation tests to confirm exploitability, and even generating annotated patches for human approval. OpenAI describes Aardvark as a co-worker for engineers, augmenting rather than replacing human oversight by automating the tedious, error-prone parts of vulnerability discovery.

According to OpenAI, Aardvark has already been running across internal and partner codebases for several months, detecting meaningful vulnerabilities and achieving a 92% recall rate in benchmark tests. Beyond enterprise use, the system has responsibly disclosed multiple vulnerabilities in open-source software, ten of which have received CVE identifiers. OpenAI positions Aardvark as part of a broader “defender-first” approach to security, one that democratizes access to high-end expertise and enables continuous, scalable protection across modern software ecosystems. The company is offering pro bono scanning to select open-source projects and has opened a private beta to refine accuracy and reporting workflows before broader release.

Another report, this one from CSO Online, says that agentic AI is emerging as one of the most transformative forces in cybersecurity. The technology’s ability to process data continuously and react in real time enables it to detect, contain, and neutralize threats at a scale and speed that human teams cannot match. Security leaders such as Zoom CISO Sandra McLeod and Dell Technologies CSO John Scimone told CSO that autonomous detection, self-healing responses, and AI-driven orchestration are now essential for reducing the time a threat remains active. By taking over high-volume, time-sensitive monitoring tasks, agentic AI lets security teams concentrate on strategy and risk mitigation rather than routine operations.

The article outlines seven leading use cases where AI agents are already reshaping defense capabilities, from autonomous threat detection and Security Operations Center (SOC) support to automated triage, help desk automation, and real-time zero-trust enforcement. Deloitte’s Naresh Persaud highlights how AI agents can draft forensic reports and scale SOC workflows dynamically, while Radware’s Pascal Geenens notes that agentic systems close the gap between detection and response by automatically enriching and correlating data across threat feeds. The piece also underscores the technology’s human-capital benefit: AI agents, as Palo Alto Networks’ Rahul Ramachandran argues, act as a “force multiplier” for cybersecurity teams facing persistent talent shortages.

Beyond defense, AI is also improving brand protection by spotting phishing domains and scam ads before they spread. “Agentic AI will level the playing field by enabling defenders to respond with equal speed and expansive breadth,” said John Scimone, president and CSO at Dell Technologies.

Promising developments, to be sure, but no silver bullets. Another report last week detailed some of the issues agentic AI will bring to the CISOs tasked with deploying it. A BleepingComputer article warns that the rise of autonomous AI agents is upending traditional enterprise security models by creating a new category of non-human identities (NHIs).

Unlike human users, these AI agents make decisions, act across systems, and persist in environments without oversight—creating what the article calls “agent sprawl.” Many continue operating long after their intended use, holding active credentials that attackers can exploit. The piece identifies three key technical risks: shadow agents that outlive their purpose, privilege escalation through over-permissioned agents, and large-scale data exfiltration caused by poorly scoped or compromised integrations. Together, these vulnerabilities expose a governance gap that conventional identity and access management (IAM) systems are ill-equipped to handle.

The article argues for an “identity-first” approach to agentic AI security, one that treats every AI agent as a managed digital identity with tightly scoped permissions, ownership, and auditability. Legacy tools fail, it says, because they assume human intent and static interaction patterns, while AI agents spawn sub-agents, chain API calls, and operate autonomously across applications.

To counter that complexity, CISOs are urged to take immediate steps: inventory all agents, assign human owners, enforce least-privilege access, propagate identity context across multi-agent chains, and monitor anomalous behavior. Token Security concludes that the real danger lies not in a specific exploit, but in the “illusion of safety”— the assumption that trusted credentials equal trusted behavior. Without identity visibility and control, the article cautions, agentic AI could become the enterprise’s next major attack vector.

The post AI Becomes Both Tool and Target in Cybersecurity appeared first on PYMNTS.com.



