Microsoft has announced the preview of Security Copilot, an AI-powered security analysis tool designed to help cybersecurity analysts detect, process, and respond to threats quickly.
The tool is powered by OpenAI’s GPT-4 generative AI and Microsoft’s proprietary security-specific model, which collates insights and data from products such as Microsoft Sentinel, Defender, and Intune to help security teams better understand their environment.
Security Copilot accepts files, URLs, and code snippets for analysis, and it can summarize incidents and provide remediation instructions.
The tool is aimed at addressing the asymmetric battle that cybersecurity professionals often face against sophisticated attackers. Vasu Jakkal, Microsoft’s corporate vice president of Security, Compliance, Identity, and Management, noted that too often, defenders fight against prolific and relentless attackers who are hidden among the noise.
Security Copilot is the latest step in Microsoft’s push to integrate generative AI features into its software offerings. In the past two months, the company has added AI capabilities to products such as Bing, the Edge browser, GitHub, LinkedIn, and Skype.
Additionally, Microsoft emphasized that the tool is privacy-compliant and that customer data is not used to train the foundation AI models. Redmond said the proprietary security-specific model behind Security Copilot is informed by more than 65 trillion daily signals.
The tool enables users to create a PowerPoint presentation outlining an incident and its attack chain, and it can also answer questions about suspicious user logins over a specific time period.
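For context, that kind of sign-in question is one an analyst would otherwise have to assemble by hand, for example against the Azure AD sign-in logs exposed through Microsoft Graph. The short Python sketch below is an illustration of that manual alternative only, not Security Copilot's interface; the access token, user, and date range are placeholder assumptions.

# Illustrative sketch: manually querying sign-in events for one user in a
# specific time window via the Microsoft Graph sign-in logs -- the kind of
# lookup Security Copilot lets analysts request in natural language instead.
# The token, user principal name, and dates below are placeholders.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/auditLogs/signIns"
ACCESS_TOKEN = "<token with AuditLog.Read.All permission>"  # assumed to be acquired separately

params = {
    # OData filter: sign-ins for a single user within a one-week window
    "$filter": (
        "userPrincipalName eq 'alice@contoso.com' "
        "and createdDateTime ge 2023-03-01T00:00:00Z "
        "and createdDateTime le 2023-03-07T23:59:59Z"
    ),
    "$top": "50",
}

resp = requests.get(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params=params,
    timeout=30,
)
resp.raise_for_status()

# Print timestamp, source IP, and result code for each sign-in returned
for entry in resp.json().get("value", []):
    print(entry["createdDateTime"], entry["ipAddress"], entry["status"]["errorCode"])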