Google's AI Bug Hunter 'Big Sleep' Uncovers 20 Security Flaws in Open Source Software
August 4, 2025
Google’s experimental AI-powered vulnerability scanner, codenamed Big Sleep, has made its first mark on the cybersecurity landscape. The tool, developed jointly by DeepMind and Google’s elite hacking team Project Zero, has identified 20 security vulnerabilities in widely used open-source projects, including media library FFmpeg and image-editing toolkit ImageMagick.
Announced by Heather Adkins, Google’s VP of security, the discovery marks a milestone in automated security research. While specific details about the vulnerabilities remain under wraps until patches are issued — standard practice in responsible disclosure — the success of Big Sleep suggests that AI is beginning to meaningfully contribute to the complex task of bug hunting.
According to Google spokesperson Kimberly Samra, each flaw was found and reproduced autonomously by Big Sleep, with human experts stepping in only to verify and refine the final reports. This human-in-the-loop model aims to balance AI efficiency with human judgment, ensuring accurate and actionable disclosures.
Royal Hansen, VP of engineering at Google, praised the findings as signaling “a new frontier in automated vulnerability discovery.”
Big Sleep isn’t the only AI tool being trained for this task. Others like RunSybil and XBOW are also making waves — the latter recently topping a HackerOne bug bounty leaderboard. These tools are part of a growing trend where large language models (LLMs) are repurposed from language generation to security analysis.
Still, the rise of AI in bug hunting isn’t without its challenges. Software maintainers have expressed concerns over false positives and “hallucinated” vulnerabilities that waste time and effort. Some have likened these to the “AI slop” of the security world — plausible-sounding reports that don’t hold up under scrutiny.
Vlad Ionescu, CTO and co-founder of RunSybil, acknowledged both the promise and the pitfalls of these tools, but called Big Sleep a “legit” project backed by strong design and seasoned experts. “Project Zero knows how to find bugs. DeepMind has the compute. That’s a powerful combo,” he said.
While the future of AI-driven vulnerability research looks promising, the community remains cautious. The real test will be how consistently and accurately these systems can perform — and whether they can move from experimental success to industry standard.