
Headers

Name            Required    Description
Authorization   Yes         The bearer token for authentication.

Query parameters

Name               Required    Description
detailedResponse   No          The level of detail of the API response.
                               Possible values include:
                                 • false: A short evaluation of your prompts based on the AI Guard settings (default).
                                 • true: A detailed evaluation of your prompts based on the AI Guard settings.
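The query parameter is appended to the request URL. A minimal sketch in Python; the base URL below is a placeholder, not part of this documentation:

```python
from urllib.parse import urlencode

# Hypothetical base URL; substitute your actual AI Guard endpoint.
BASE_URL = "https://api.example.com/v1/guard"

def build_url(detailed: bool = False) -> str:
    """Append the detailedResponse query parameter to the endpoint URL."""
    query = urlencode({"detailedResponse": str(detailed).lower()})
    return f"{BASE_URL}?{query}"
```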

Request

OpenAI chat completion request format:
{
  "model": "us.meta.llama3-1-70b-instruct-v1:0",
  "messages": [
    {
      "role": "user",
      "content": "Your prompt text here"
    }
  ]
}
OpenAI chat completion response format:
{
  "id": "chatcmpl-8f88f71a-7d42-c548-d587-8fc8a17091b6",
  "object": "chat.completion",
  "created": 1748535080,
  "model": "us.meta.llama3-1-70b-instruct-v1:0",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response content here",
        "refusal": null
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 139,
    "completion_tokens": 97,
    "total_tokens": 236
  }
}
Simple string format:
{
  "guard": "Your prompt text here"
}
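Any of the body formats above can be POSTed with the Authorization header described earlier. A sketch using Python's standard library; the endpoint URL is a placeholder, and the payload builders simply mirror the documented shapes:

```python
import json
import urllib.request

def chat_payload(prompt: str, model: str = "us.meta.llama3-1-70b-instruct-v1:0") -> dict:
    """Build a request body in the OpenAI chat completion request format."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def guard_payload(prompt: str) -> dict:
    """Build a request body in the simple string format."""
    return {"guard": prompt}

def send(payload: dict, token: str, url: str = "https://api.example.com/v1/guard") -> dict:
    """POST the payload with a bearer token. The URL here is hypothetical."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```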

Response

Sample response (when detailedResponse: false):
{
  "id": "1234567890abcdef",
  "action": "Block",
  "reason": "[Violation] Policy Name: harmful_content"
}
Sample response (when detailedResponse: true):
{
  "id": "1234567890abcdef",
  "action": "Allow",
  "reason": "No policy violations detected",
  "harmful_content": [
    {
      "name": "harmful_content",
      "content_violation": false,
      "confidence_score": 0.05
    }
  ],
  "sensitive_information": {
    "content_violation": false,
    "rule": ""
  },
  "prompt_attack": [
    {
      "name": "prompt_injection",
      "content_violation": false,
      "confidence_score": 0.02
    }
  ]
}
Parameter              Description
id                     The unique identifier of the AI Guard evaluation.
action                 The recommended action. Possible values include:
                         • Allow
                         • Block
reason                 The explanation of the action, including settings violation details.
harmful_content        Any harmful content detected in the inputs or outputs, with confidence scores.
sensitive_information  Any detected violations related to personally identifiable information (PII) or sensitive information.
prompt_attack          An array of any prompt attacks detected, with confidence scores.
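In practice a caller branches on action and, for detailed responses, inspects the per-detector violation flags. A minimal sketch based on the field shapes above; the helper names are illustrative, not part of the API:

```python
def is_blocked(evaluation: dict) -> bool:
    """Return True when AI Guard recommends blocking the prompt."""
    return evaluation.get("action") == "Block"

def flagged_detectors(evaluation: dict) -> list:
    """Collect detector names with content_violation set (detailed responses only)."""
    flagged = []
    for key in ("harmful_content", "prompt_attack"):
        for item in evaluation.get(key, []):
            if item.get("content_violation"):
                flagged.append(item["name"])
    if evaluation.get("sensitive_information", {}).get("content_violation"):
        flagged.append("sensitive_information")
    return flagged
```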

Common errors

The API returns standard HTTP status codes:
  • 400 Bad Request: Check the error message for details
  • 403 Forbidden: Insufficient user permissions or an authentication issue
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: A temporary issue occurred on the server side
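Because 429 and 500 indicate transient conditions, callers commonly retry them with exponential backoff. A generic sketch, not part of the documented API; the injected call() is assumed to return a (status_code, body) tuple:

```python
import time

RETRYABLE = {429, 500}  # Too Many Requests, Internal Server Error

def call_with_retry(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Invoke call(); retry with exponential backoff on retryable status codes."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt)
    return status, body
```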