A year after launching Security Copilot, Microsoft’s generative AI-powered assistant for cybersecurity professionals, the tech giant has now released six cybersecurity-focused AI agents that are being integrated across its products.
Available for preview in April, the AI agents will autonomously assist security teams with tasks such as phishing and security alert triage, conditional access monitoring, vulnerability monitoring and prioritization, and threat intelligence curation.
Additionally, Microsoft announced that five other AI agents, from its partners OneTrust, Aviatrix, BlueVoyant, Tanium and Fletch, will also be available in Security Copilot.
Infosecurity sat down with Vasu Jakkal, Corporate Vice President of Microsoft Security, to discuss what these announcements mean for Microsoft’s broader AI security strategy.

Infosecurity Magazine: Could you explain the key factors that drove the decision to integrate AI agents into Security Copilot?
Vasu Jakkal: With generative AI, Microsoft Security has been on a journey for the past two years. We announced Security Copilot in 2023, right on the heels of the announcements of ChatGPT and other generative AI chatbots. In April 2024, we made Security Copilot generally available. This year marks the next inflection point for us, as we believe AI agents will transition Copilot from a passive AI assistant into an active, autonomous and adaptive one that can complete tasks on your behalf – of course, with human agency.
There were three driving forces for accelerating this journey and introducing AI agents.
First, we continue to see defenders being overwhelmed by the threat landscape. We now face 7,000 password attacks per second, up from 4,000 last year, and attackers are breaching systems faster than ever, averaging just 72 minutes from a phishing click to data access (with some studies suggesting even less time).
Additionally, the number of unique attackers has surged from 300 nation-state and financial crime actors last year to 1,500 this year. This data highlights the rapid, large-scale and sophisticated evolution of today's security threats.
The second challenge involves growing concerns around data security and insider risk, especially with the increased use of AI. In our survey, 80% of leaders in organizations using AI identified data security as their top risk, and 88% expressed significant concern about indirect prompt injections. Insider threats are also becoming a major factor in data breaches, accounting for approximately 20% of such incidents.
The third challenge is operational complexity. Fragmentation across security tooling continues to rise, with AI expanding the number of security vendors into the thousands, and most organizations struggle to integrate all these tools. There is also a significant talent shortage, with 4.7 million cybersecurity jobs currently unfilled, and organizations must keep up with regulatory updates, which are issued at a rate of 250 per day.
IM: AI agents are making headlines across various industries, especially in cybersecurity. How does Microsoft define an ‘AI agent’?
VJ: There are three attributes a generative AI tool must have to be considered an ‘AI agent,’ as opposed to a generic GenAI tool. These are:
- Autonomy: AI agents must have a higher level of autonomy than traditional software, whose behavior we usually refer to as ‘automation’
- Reasoning: AI agents must have the ability to ‘reason’ or ‘think’ when they process and make inferences from the data they are fed
- Context, memory and learning: AI agents must be able to learn from user inputs and the context they have been fed and, in the future, they will learn from their own patterns as well
IM: Could you walk us through the six new AI agents that will soon be available in Microsoft Security Copilot? Where can people find them and who are they for?
VJ: The six agents we are launching tackle some of the hardest challenges right now, where we believe agents can deliver value.
The first is a phishing triage agent. We saw 30 billion phishing emails between January and December 2024. This agent will work through that volume of data and sort genuine threats from false alerts. It will be available in Microsoft Defender and will typically be used by security analysts, security practitioners and experts in email security and email threat investigation who utilize Microsoft Defender.
The second and third agents, a data loss or data leakage triage agent and an insider risk alert triage agent, will be available in Microsoft Purview, our data security tool. They are designed to be used by data analysts.
The fourth agent is an identity agent. Conditional access is our policy engine; it helps ensure that users and apps have the right access under the right policies. This new AI-powered conditional access agent will monitor for individuals and apps that do not have the right access and allow the administrator to restrict it. It will be available in Entra, Microsoft's identity solution, specifically for identity administrators responsible for identity and access policies.
"People have looked at Security Copilot as a teacher. Now, I think they are going to look at AI agents as if they are their workforce."
The fifth agent is a vulnerability agent. It monitors for existing vulnerabilities and automates the patching process for Windows OS. It will be available in Intune, our device management tool, for vulnerability management program leaders and IT admins.
Finally, the sixth agent is our threat intelligence agent, which curates and customizes threat intelligence tailored to an organization's specific business context, including its industry sector and unique needs. It is meant for everyone in cybersecurity to use.
IM: Which generative AI models do the agents run on and what is the pricing model?
VJ: Security Copilot and our AI agents combine advanced GPT-4 models from OpenAI with everything that Microsoft brings to the table, including hyperscale infrastructure, cyber-specific orchestration, Microsoft Security's unique expertise, global threat intelligence and comprehensive security products.
For our AI agents, as for Security Copilot, each general-purpose AI model has been fine-tuned for security use cases using the 84 trillion signals we have at Microsoft Security. This helps minimize hallucinations and gives the agents an understanding of cyber intelligence.
Our AI agents' pricing model is the same as Security Copilot's, which is a consumptive model: if you use the phishing agent, for instance, it will spin a meter, and the unit of monetization is the Security Compute Unit (SCU), which costs $4 an hour. Microsoft will charge you depending on how much you use it.
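To put that in concrete terms: an agent whose usage over a month added up to, say, 25 compute-unit hours would cost $100 (25 × $4).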

IM: You mentioned the growing skills gap in the cybersecurity industry. What would you say to security professionals who fear that your AI agents could take away their jobs?
VJ: The skills gap right now is daunting. We don't have the personnel; it takes months to hire a cybersecurity professional. Meanwhile, attacks are happening by the second, in their thousands. I have spoken to many customers, and one of their biggest challenges is attracting early-career professionals. They often have to reskill someone instead – but again, there aren't enough people to train them.
That’s why I'm hoping that people look at these agents and think: “Great, I now have a solution to help me.” People have looked at Security Copilot as a teacher. Now, I think they are going to look at AI agents as if they are their workforce so that every single defender has a team of their own to tackle the work.
I believe our AI agents help address the talent gap rather than exacerbate it, and they empower defenders to allocate their energy to creative and strategic tasks instead of merely processing alerts and triaging them.
Additionally, we have a key performance indicator (KPI) dashboard, which displays the actions taken by the agents and how they were executed, as well as audit logs that allow users to review past actions. We want to show users how the reasoning works and explain it, because that is what builds confidence. We'll continue to evolve it based on user feedback, determining how much more, or how little, they want to see.
IM: There have been concerns about the potential for AI-powered security tools to exacerbate cyber risks. How can you reassure future users that the AI agents won’t bring more risks?
VJ: First, we do believe that AI-powered security tools need to be built with discipline from the ground up. Our tools follow responsible AI principles and the guardrails of the Microsoft Secure Future Initiative framework.
We launched the Secure Future Initiative in November 2023, with one goal in mind: How do we secure systems in this new world of AI? The three principles guiding the initiative are secure by default, secure by design and secure operations.
"There is no way we can keep up with the threat landscape without embracing AI."
For our AI agents, this translates to a checklist encompassing concerns around safety, privacy, security, fairness and inclusion. We conduct quality and responsibility checks before launching any tools.
Additionally, we constantly monitor specific controls and regularly conduct red teaming exercises.
There is no way we can keep up with the threat landscape and the challenges I shared with you without embracing AI.
We know attackers will use it because they utilize every tool at their disposal. If we don't advance AI for security, then defenders are going to be ten steps behind.
So, we've got to learn to use this tool. We've got to innovate on it and we need the cybersecurity ecosystem with us.
IM: Given the recent security concerns and subsequent rollback of the Recall feature, which aimed to capture comprehensive user activity, how is Microsoft addressing user privacy and data security in the development and implementation of future features that involve capturing or storing user data?
VJ: I'm not the expert on Recall, but you’re right, there were some challenges when the product was launched. We went back, adapted and relaunched it based on the feedback from our customers.
More broadly, we launched the Secure Future Initiative in November 2023 with the mission to declare that security is the number one priority at Microsoft, above all else.
In July 2023, an attack by a China-based nation-state actor, Storm-0558, targeted identity access. Then, we had the Russian nation-state actor Midnight Blizzard launch an attack as well. We recognized the need to change our security approach and to make it a priority.
For us, it was a cultural transformation as much as a technological one because there is no way the security team can secure everything. You need every single employee at Microsoft to follow best practices, ensuring that we design code that is secure and so on. We've revised our company incentives and made security a co-priority for every employee.
Microsoft's Secure Future Initiative is built on six core engineering pillars: protecting identities and secrets, protecting tenants and isolating production systems, protecting networks, protecting engineering systems, monitoring and detecting threats, and accelerating response and remediation, all underpinned by strong governance.
We've hired 14 deputy CISOs to work with engineering leaders. We report to our CEO on a weekly basis, and we hold a meeting every other week to review these reports.
Finally, we have launched a cybersecurity skilling academy.