A summer of security: empowering cyber defenders with AI

AI provides an unprecedented opportunity for building a new era of American innovation. We can use these new tools to grow the U.S. economy, create jobs, accelerate scientific advances and give the advantage back to security defenders.
And when it comes to security opportunities — we’re thrilled to be driving progress in three key areas ahead of the summer’s biggest cybersecurity conferences like Black Hat USA and DEF CON 33: agentic capabilities, next-gen security model and platform advances, and public-private partnerships focused on putting these tools to work.
1. Giving defenders an edge with agentic capabilities
Last year, we announced Big Sleep, an AI agent developed by Google DeepMind and Google Project Zero that actively searches for and finds unknown security vulnerabilities in software. By November 2024, Big Sleep had found its first real-world security vulnerability, showing the immense potential of AI to plug security holes before they impact users.
Since then, Big Sleep has continued to discover real-world vulnerabilities, exceeding our expectations and accelerating AI-powered vulnerability research. Most recently, based on intel from Google Threat Intelligence, the Big Sleep agent discovered an SQLite vulnerability (CVE-2025-6965) — a critical security flaw that was known only to threat actors and at risk of being exploited. Through the combination of threat intelligence and Big Sleep, Google was able to predict that the vulnerability was about to be used and cut it off beforehand. We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.
These AI advances don’t just help secure Google's products. Big Sleep is also being deployed to help improve the security of widely used open-source projects — a major win for faster, more effective security across the broader internet. These cybersecurity agents are a game changer, freeing security teams to focus on high-complexity threats and dramatically scaling their impact and reach.
But of course this work needs to be done safely and responsibly. In our latest white paper, we outline our approach to building AI agents in ways that safeguard privacy, mitigate the risks of rogue actions, and ensure the agents operate with the benefit of human oversight and transparency. When deployed according to secure-by-design principles, agents can give defenders an edge like no other tool that came before them.
We will continue to share our agentic AI insights and report findings through our industry-standard disclosure process. You can keep tabs on all publicly disclosed vulnerabilities from Big Sleep on our issue tracker page.
2. Announcing new AI security capabilities
Agentic tools are just one way that AI can help alleviate the pressures put on today’s cybersecurity defenders — particularly when it comes to the grueling task of sifting through large amounts of data to identify incidents. That’s why this summer, we’ll be demoing AI capabilities that give defenders the upper hand.
- Timesketch: We are extending Timesketch, Google’s open-source collaborative digital forensics platform, with agentic capabilities. Powered by Sec-Gemini, Timesketch will accelerate incident response by using AI to perform the initial forensic investigation automatically. This lets analysts focus their efforts on other tasks while drastically cutting investigation time. At Black Hat USA (booth #2240—come on by!), we’ll demo Timesketch’s new agentic log analysis capabilities and showcase concrete use cases.
- FACADE: At Black Hat, we’ll also provide the first live, behind-the-scenes look at FACADE (Fast and Accurate Contextual Anomaly Detection) — an important AI-based system, which has been performing insider threat detection at Google since 2018. Attendees will learn how FACADE processes billions of daily security events across Google to identify internal threats. And thanks to its unique contrastive learning approach, it doesn’t require data from past attacks to do its job.
- DEF CON GENSEC Capture the Flag (CTF): At DEF CON 33, we’re partnering with Airbus for a CTF event to show how AI can advance cybersecurity professionals’ capabilities. Participants will team up with an AI assistant to complete challenges designed to engage all skill levels.
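FACADE's internals are not public, but the contrastive idea mentioned above can be illustrated with a toy sketch: learn embeddings so that observed (context, action) pairs score higher than random mismatched pairs, then flag low-scoring events as anomalous. No labeled attack data is needed — only a stream of normal activity. All names, dimensions, and parameters below are illustrative assumptions, not FACADE's actual design.

```python
# Toy contrastive anomaly detector: trained only on normal activity,
# it scores never-before-seen (context, action) pairings as anomalous.
import numpy as np

rng = np.random.default_rng(0)

def train_contrastive(events, dim=8, steps=2000, lr=0.1):
    """events: list of (context, action) pairs drawn from normal activity."""
    contexts = sorted({c for c, _ in events})
    actions = sorted({a for _, a in events})
    C = {c: rng.normal(scale=0.1, size=dim) for c in contexts}
    A = {a: rng.normal(scale=0.1, size=dim) for a in actions}
    for _ in range(steps):
        c, a_pos = events[rng.integers(len(events))]
        a_neg = actions[rng.integers(len(actions))]  # random negative sample
        for a, label in ((a_pos, 1.0), (a_neg, 0.0)):
            p = 1.0 / (1.0 + np.exp(-C[c] @ A[a]))   # sigmoid of similarity
            g = p - label                             # logistic-loss gradient
            C[c], A[a] = C[c] - lr * g * A[a], A[a] - lr * g * C[c]
    return C, A

def score(C, A, context, action):
    """Higher = more typical for that context; low scores flag anomalies."""
    return float(C[context] @ A[action])

# Normal activity: each principal has a habitual set of actions.
normal = ([("alice", "read_doc")] * 30 + [("alice", "edit_doc")] * 20 +
          [("bob", "deploy")] * 25 + [("dba", "wipe_db")] * 25)
C, A = train_contrastive(normal)
```

After training, `score(C, A, "alice", "wipe_db")` comes out lower than `score(C, A, "alice", "read_doc")`: the model has never seen that pairing as a positive, so the mismatch surfaces as an anomaly even though "wipe_db" is routine for the "dba" context. A production system would use far richer context features and learned encoders, but the labeling-free training signal is the same.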
3. Putting these tools to work with public and private partners
Collaboration across industry and with public sector partners is essential to cybersecurity success. That’s why we worked with industry partners to launch the Coalition for Secure AI (CoSAI), an initiative to ensure the safe implementation of AI systems. To further this work, today we’re announcing Google will donate data from our Secure AI Framework (SAIF) to help accelerate CoSAI’s agentic AI, cyber defense and software supply chain security workstreams.
Additionally, next month at DEF CON 33, our two-year AI Cyber Challenge (AIxCC) with DARPA will come to a close. In the final round, competitors will unveil new AI tools that find and fix vulnerabilities, helping secure major open-source projects. Be on the lookout for the winners' announcement at DEF CON 33.
We have always believed in AI’s potential to make the world safer, but over the last year we have seen real leaps in its capabilities, with new tools redefining what durable cybersecurity can look like.
This summer’s advances in AI have the potential to be game-changing, but what we do next matters. By building these tools the right way, applying them in new ways and working together with industry and governments to deploy them at scale, we can usher in a digital future that’s not only more prosperous, but also more secure.