Microsoft on Tuesday unveiled a chatbot called Security Copilot that is designed to help cybersecurity professionals understand critical issues and find ways to fix them.

Since OpenAI’s ChatGPT bot captured the public’s attention with its November launch, Microsoft has been hard at work augmenting its software with artificial intelligence models from the startup.
The resulting generative AI software, as Microsoft put it earlier this month while touting new features in Word and other productivity apps, can occasionally be “usefully wrong.” Microsoft is still working to expand its cybersecurity business, which generated more than $20 billion in revenue in 2022.
Microsoft Security Copilot draws on OpenAI’s GPT-4 large language model, in which Microsoft has invested billions of dollars, as well as a security-specific model that Microsoft built using day-to-day activity data it gathers. The system is also aware of a given customer’s security environment, but the models won’t be trained on that information.
In response to a text prompt that a user types in, the chatbot can generate PowerPoint slides summarizing security incidents, describe exposure to an active vulnerability, or specify the accounts involved in an exploit.
A user can select an “off-target” button to flag a mistake or press a different button to confirm a correct answer. That kind of feedback will help the service learn, according to Vasu Jakkal, corporate vice president of security, compliance, identity, management, and privacy at Microsoft.
Microsoft engineers have been using Security Copilot in their own work. It can quickly surface the two important incidents among 1,000 alerts, Jakkal said. The tool also reverse-engineered a piece of malicious code for an analyst who didn’t know how to do so, she said.
Such support can be crucial for businesses that struggle to find experts and wind up hiring employees with limited knowledge in some areas. It takes time, and there is a learning curve, Jakkal said, but Security Copilot’s built-in skills can augment analysts, allowing them to do more with less.
Microsoft has not said how much Security Copilot will cost when it becomes more widely available.
Jakkal said the hope is that many employees within a given organization will use the tool, not just a handful of executives, which means Microsoft plans to eventually broaden the range of topics it can handle.
The service will work with Microsoft security products such as Sentinel for tracking threats. Microsoft will determine if it should add support for third-party tools such as Splunk based on input from early users in the next few months, Jakkal said.
If Microsoft were to require customers to use Sentinel or other Microsoft products to turn on Security Copilot, that could very well influence purchasing decisions, said Frank Dickson, group vice president for security and trust at technology industry researcher IDC.
“For me, I was like, ‘Wow, this may be the single biggest announcement in security this calendar year,’” he said.
Nothing is stopping Microsoft’s security rivals, such as Palo Alto Networks, from releasing chatbots of their own. But being first out means Microsoft will have a head start, Dickson said.
Security Copilot will be available to a small set of Microsoft clients in a private preview ahead of a broader release later on.