Google has called for a balanced regulatory approach to AI in order to avoid a future where attackers can innovate but defenders are stifled by law.
Google’s call follows the launch of a new whitepaper, “Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma,” at the Munich Security Conference.
The whitepaper details a policy agenda designed to reverse the “defender’s dilemma” – a concept describing the inherent advantage cyber-attackers hold over defenders. It argues that international collaboration can shape AI to benefit defenders rather than attackers.
The EU is currently progressing AI regulation with the development of the AI Act, which is set to be the first comprehensive AI law globally.
The Google paper also calls for the prioritization of secure-by-design practices and guardrails on autonomous cyber defenses.
Finally, Google also highlighted the need for advanced AI research cooperation to enable scientific breakthroughs, with a particular focus on research that builds defenses with or against AI.
Google Launches AI Cyber Defense Initiative to Transform Online Security
Google has also announced a series of programs and investments that will form part of a new AI Cyber Defense Initiative.
The investments, skills training and tools aim to use AI to transform online security.
Google’s President of Global Affairs, Kent Walker, said: “AI gives defenders an edge – removing complexity, adapting to new attacks, and reacting to threats seamlessly and at scale.”
“Our AI Cyber Defense Initiative reverses the Defender’s Dilemma, where defenders have to be right all the time and attackers have to be right only once. But to keep up the momentum, we need policies that both mitigate the risks and seize the opportunities of AI,” he said.
The initiative includes:
- Google for Startups: AI for Cybersecurity. A three-month program that strengthens the transatlantic cybersecurity ecosystem by supporting the next wave of cyber companies. The program provides 17 startups from Europe, the US and UK with Google’s tools, practices and connections.
- $2m in research grants and strategic partnerships to advance AI-based cybersecurity research, including enhancing code verification, improving understanding of how AI can be used for cyber offense and what countermeasures are needed for defense, and developing large language models that are more resilient to threats. The funding supports researchers at institutions including The University of Chicago, Carnegie Mellon and Stanford.
- Expansion of the Google.org Cybersecurity Seminars Program to cover all of Europe and include AI-focused modules – representing a $15m total investment. The program, which initially launched at the opening of the Google Safety Engineering Centre (GSEC) Malaga, supports universities in training the next generation of cybersecurity experts from underserved communities.
Google’s Open-Source AI-Powered Tool
Google also confirmed it will open source a new AI-powered tool, Magika, which aids cyber defenders through file type identification – an essential part of detecting malware.
Google has already been using Magika to help protect products including Gmail, Drive and Safe Browsing – and has now made the tool available for free to others to use and integrate into their own tools.
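For defenders who want to try the tool, the sketch below shows how Magika’s Python package can be called to identify file types. It assumes the API published with the initial open-source release (a Magika class exposing identify_bytes and identify_path); the file name is hypothetical and result field names may differ in later versions.

```python
# Minimal sketch: file type identification with the open-source Magika
# package (install with `pip install magika`). Field names on the result
# object follow Magika's documentation at release and may change.
from pathlib import Path

from magika import Magika

magika = Magika()

# Identify the content type of an in-memory byte buffer,
# e.g. an attachment scanned before delivery.
result = magika.identify_bytes(b"#!/usr/bin/env python3\nprint('hello')\n")
print(result.output.ct_label)  # predicted content type, e.g. "python"

# Identify a file on disk (hypothetical path, guarded so the sketch runs as-is).
sample = Path("suspicious_attachment.bin")
if sample.exists():
    result = magika.identify_path(sample)
    print(result.output.ct_label)
```

The project also ships a command-line client, which is the quickest way to test the model against local file samples before integrating the Python API into a detection pipeline.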
Finally, Google is also investing in its secure “AI-ready” network of global data centers.
This will help make new AI innovations available to public sector organizations and businesses of all sizes.
Between 2019 and the end of 2024, Google said it will have invested over $5bn in data centers in Europe – helping support access to a range of digital services, including broad generative AI capabilities like the Vertex AI platform.