DARPA Challenges AI Pros to Safeguard US Infrastructure

“I’m going to do something I’ve never done before.” Those were Jeff Moss’s words during his August 9 opening speech at Black Hat USA 2023 in Las Vegas, before he handed the floor to Perri Adams, program manager at the US Defense Advanced Research Projects Agency (DARPA).

In a surprise announcement, Adams kick-started the AI Cyber Challenge (AIxCC), a two-year competition for computer scientists, AI experts, software developers and other cybersecurity specialists to create a new generation of AI-powered cybersecurity tools for securing US critical infrastructure and government services.

“Cybersecurity is a race between offense and defense – and there is no silver bullet. However, recent technological advances do offer promising new ways of ensuring that we can keep defense one step ahead,” she said.

“Defenders are tasked with defending critical infrastructure but can’t do it at the scale it’s being attacked. In the past decade, we’ve seen the development of promising new AI-enabled capabilities. When used responsibly, we see significant potential for this technology to be applied to key cybersecurity issues. By automatically defending critical software at scale, we can have the greatest impact for cybersecurity across the country, and the world,” she added.

Backing from Anthropic, Google, Microsoft and OpenAI

The contest will be led by DARPA in collaboration with four generative AI pioneers – Anthropic, Google, Microsoft and OpenAI – which will make their technologies available and lend their expertise to help the competitors.

Additionally, the Open Source Security Foundation (OpenSSF), a project from the Linux Foundation, will serve as a challenge advisor to guide teams in creating AI systems capable of addressing vital cybersecurity issues, such as the security of our critical infrastructure and software supply chains.

AIxCC will feature two tracks for participation:

  • The Funded Track, in which up to seven small businesses will be selected from proposals submitted to a Small Business Innovation Research solicitation
  • The Open Track, in which competitors will register with DARPA via the competition website and will proceed without DARPA funding

Each small business selected for the Funded Track will be awarded $1m.

In Spring 2024, teams from both tracks will compete in a series of trials to be selected for the semi-finals – up to 20 teams will make it this far.

The semi-finals will take place at next year’s DEFCON conference. The top five teams in that competition will each receive a $2m prize. They will also compete in the final at DEFCON 2025 for a $4m top prize, with $3m for second place and $1.5m for third.

Up to $25.5m in total funding ($18.5m in prizes and up to $7m for the Funded Track contestants) will be provided by DARPA.

Registration for the Funded Track is open until September 19, while Open Track contestants have until December 15 to apply.

To be eligible, each team must include at least one US citizen or permanent resident.

Adams said: “If successful, AIxCC will not only produce the next generation of cybersecurity tools but will show how AI can be used to better society by defending its critical underpinnings.”


Rob McHenry, DARPA’s deputy director, said during a press call on August 9 that the AIxCC is modeled after the defense agency’s Grand Challenge for autonomous vehicles.

“The AI Cyber Challenge is an exciting new effort that uses DARPA’s challenge authority and our convening power to both push forward emerging AI capabilities and address the known risk to our critical infrastructure,” he said. “In the AI Cyber Challenge, our goal is to again create this kind of new ecosystem with a diverse set of creative cyber competitors, empowered by the country’s top AI firms, all pointed at new ways to secure the software infrastructure that underlies our society.”

White House and AI Pioneers Hand-in-Hand

This announcement is part of a broader commitment by the Biden-Harris administration to ensure that AI is developed safely and responsibly to protect Americans from harm and discrimination.

Earlier this year, the administration announced a commitment from several AI companies to participate in an independent, public evaluation of large language models at DEFCON 2023 – which will be attended by Arati Prabhakar, director of the White House Office of Science and Technology Policy.

In July, it announced it had secured voluntary commitments from seven leading AI companies to manage the risks posed by the technology.

The White House has also published the Blueprint for an AI Bill of Rights and is currently developing an executive order on AI.

The AIxCC is also a way for these generative AI companies to demonstrate their intention to use their technology for the greater good – a pledge that was among their commitments to the White House.

In July, four of them also launched the Frontier Model Forum, a new industry body whose mission is to support the safe and responsible development of future frontier models.
