Scan your AI models for common attack techniques, attack objectives, and harmful or sensitive content in inputs and outputs.

Important
This is a "Pre-release" feature and is not considered an official release. Please review the Pre-release disclaimer before using the feature.
Note
This feature is not available in all regions.
AI Application Security enables you to intercept malicious inputs and block potentially harmful outputs from your AI models, helping you prevent exploitative usage and maintain regulatory compliance.
Go to Cloud Security > Security for AI Stack > AI Application Security and click Get started under either AI Scanner or AI Guard, depending on your use case:
  • AI Scanner: Scan your AI models for common attack techniques and objectives to prevent malicious use and ensure regulatory compliance.
    For more information on configuring a scan, see Configure scan settings.
  • AI Guard: Configure settings to scan your AI usage for harmful content generation, sensitive information leakage, and prompt injections.
    For more information on integrating AI Guard with your application, see Integrate AI Guard.
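The interception pattern described above can be sketched in application code: screen the prompt before it reaches the model, and screen the model's output before it reaches the user. This is an illustrative sketch only, not the AI Guard API — the function names (`guard_evaluate`, `guarded_completion`), verdict strings, and detection patterns below are hypothetical stand-ins for the checks a real guard service would perform.

```python
import re

ALLOW = "ALLOW"
DENY = "DENY"

# Hypothetical stand-ins for guard checks: simple patterns that flag likely
# prompt injections in inputs and sensitive data in outputs. A real guard
# service performs far more sophisticated detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # crude credit-card-like number
]

def guard_evaluate(text: str, patterns) -> str:
    """Return DENY if any pattern matches the text, else ALLOW."""
    return DENY if any(p.search(text) for p in patterns) else ALLOW

def guarded_completion(prompt: str, model_call) -> str:
    """Screen the input before calling the model, and the output after."""
    if guard_evaluate(prompt, INJECTION_PATTERNS) == DENY:
        return "[blocked: potential prompt injection]"
    output = model_call(prompt)
    if guard_evaluate(output, SECRET_PATTERNS) == DENY:
        return "[blocked: potential sensitive data in output]"
    return output
```

Wrapping the model call this way means both directions are covered from one place in the application, which mirrors the intercept-inputs/block-outputs model described above.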