Secure AI framework (SAIF)
Google has created the Secure AI Framework as a guide to follow when securing your AI systems. It is made up of six core elements:
- Strengthen and extend robust cybersecurity foundations across the artificial intelligence ecosystem. Use established secure-by-default infrastructure safeguards to protect AI systems, their applications, and their users. The same safeguards you apply to DevOps infrastructure-as-code (IaC), such as static application security testing (SAST), dynamic application security testing (DAST), and OWASP-based testing, should be extended to AI code.
- Ensure your AI models and code are vulnerability-scanned and monitored in production in the same way as any other software or cloud asset. This includes monitoring the inputs into your AI system and covering them in your penetration tests.
- Include your AI in your incident response plans and red-team exercises, and ensure AI assets and environments are in scope for your annual penetration tests.
- Establish platform-level controls...
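The input-monitoring idea above can be sketched in code. The following is a minimal, hypothetical example, not part of SAIF itself: it logs every prompt so inputs are visible to the same monitoring pipeline as other assets, and blocks prompts matching a small illustrative deny-list. The pattern list and the `model_fn` stub are assumptions; a real deployment would use a maintained ruleset or classifier and your actual model client.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-input-monitor")

# Illustrative deny-list of prompt-injection phrases (assumption, not a
# production ruleset; real systems use maintained rules or a classifier).
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]

def screen_input(prompt: str) -> bool:
    """Log every input and return False if it matches a known pattern."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            log.warning("Blocked suspicious input: %r", prompt[:80])
            return False
    log.info("Accepted input: %r", prompt[:80])
    return True

def guarded_call(prompt: str, model_fn) -> str:
    """Screen the prompt before forwarding it to the model client.

    model_fn is a placeholder for your actual model-invocation function.
    """
    if not screen_input(prompt):
        return "Request rejected by input screening."
    return model_fn(prompt)
```

Because the screening layer emits standard log records, it can feed the same alerting and monitoring stack used for the rest of your cloud assets, which is the point of treating AI inputs like any other monitored surface.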