The generative artificial intelligence (AI) sector is witnessing a new trend as Google joins other tech giants in exploring the potential applications of AI in cybersecurity. Today, at the RSA Conference 2023, Google revealed its latest innovation—Cloud Security AI Workbench—a suite of cybersecurity tools powered by a specially designed security AI language model called Sec-PaLM.
As a derivative of Google's PaLM model, Sec-PaLM is deliberately fine-tuned to cater to security use cases. Its architecture relies on integrating security intelligence, such as insights on software vulnerabilities, malware behavior, threat indicators, and threat actor profiles. Google's move to harness the expertise provided by generative AI for cybersecurity place it among the front-runners of this emerging trend.
Cloud Security AI Workbench comprises an array of AI-powered tools, including Mandiant’s Threat Intelligence AI, which will utilize Sec-PaLM to identify, summarize, and act on security threats. Notably, Google acquired Mandiant in 2022 for $5.4 billion. In addition, VirusTotal, another subsidiary of Google, will leverage Sec-PaLM to assist subscribers in analyzing and explaining the behavior of malicious scripts.
Furthermore, Chronicle, Google's cloud cybersecurity service, will employ Sec-PaLM to help customers search security events and interact conversationally with the results. Users of Google's Security Command Center AI can also look forward to receiving human-readable explanations of attack exposure, including information about impacted assets, recommended mitigations, and risk summaries for security, compliance, and privacy findings.
In a recent blog post, Google highlighted its dedication to the advancement of generative AI in security, citing its years of foundational AI research alongside DeepMind and the expertise of its security teams as the backbone of Sec-PaLM. However, the technology giant is yet to demonstrate the real-world effectiveness of Sec-PaLM, as the first tool in the Cloud Security AI Workbench, VirusTotal Code Insight, is only available in limited preview.
Despite the potential risks and vulnerabilities associated with AI language models, such as propensity for mistakes or susceptibility to attacks like prompt injection, the tech industry remains undaunted. Microsoft, for instance, launched Security Copilot in March, a tool designed to summarize and make sense of threat intelligence using generative AI models from OpenAI, including GPT-4.
As generative AI for cybersecurity remains a nascent field with few studies on its effectiveness, skepticism surrounding the claims made by Google and Microsoft is understandable. In the meantime, businesses seeking to enhance their cybersecurity efforts may explore established no-code platforms like AppMaster.io, a low-code, powerful platform, that assists customers in developing secure backend, web, and mobile applications.