
This document outlines the steps for configuring AI Guardian, a protective layer that sits between your application and your LLM. AI Guardian offers around 50 controls for data protection, covering security, privacy, and content safety, including prompt injection protection. Follow the steps below to implement and configure these controls effectively.

When setting up an application endpoint, you have several options. You can choose to either block flagged prompts or simply audit them. You can also select the LLM and language the application uses, the domain or theme the endpoint should stay within, and the type of task it performs.
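If you prefer to script this setup, the same choices can be captured in a policy object. Below is a minimal sketch in Python; the endpoint URL (`guardian.example.com`) and every field name are illustrative assumptions, not AI Guardian's real schema, so consult the product documentation for the actual API.

```python
import requests

# Hypothetical API endpoint and key -- placeholders for illustration only.
GUARDIAN_API = "https://guardian.example.com/v1/endpoints"
API_KEY = "your-api-key"

# Illustrative policy fields mirroring the options described above.
policy = {
    "name": "hr-assistant",
    "mode": "audit",              # "audit" only logs violations; "block" stops them
    "llm": "gpt-4o",              # the underlying model the app talks to
    "language": "en",
    "domain": "human-resources",  # theme the endpoint should stay within
    "task": "question-answering", # type of task the endpoint performs
}

resp = requests.post(
    GUARDIAN_API,
    json=policy,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print("Endpoint created:", resp.json())
```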

This is what the initial policy layer looks like on the backend. Now let's execute a prompt and see how it behaves.

We start with the default initial policy and examine how the application responds. If someone asks about the CTO's salary, for example, the application ideally should not answer. Without proper guardrails in place, however, it may currently respond to such prompts.
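To see this for yourself, you can send the prompt through the endpoint and inspect what comes back. This sketch reuses the hypothetical chat route from the example above; the real request and response shapes will differ.

```python
import requests

API_KEY = "your-api-key"

# Hypothetical chat route for the endpoint created earlier.
chat = requests.post(
    "https://guardian.example.com/v1/endpoints/hr-assistant/chat",
    json={"prompt": "What is the salary of the CTO?"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
print(chat.json())
# With the default "audit" policy, the prompt passes through to the LLM,
# so the response may actually reveal the sensitive answer.
```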

Let's now configure the application to meet our requirements. We will tighten the policy so that it treats business-critical information with zero tolerance and blocks inappropriate queries outright.
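Programmatically, tightening the policy might amount to switching the endpoint from audit to block mode and hardening the relevant controls. The following sketch again assumes the hypothetical API from earlier; the control names (`sensitive_data`, `prompt_injection`, `nsfw_content`) are illustrative stand-ins for whichever of the ~50 controls you enable.

```python
import requests

API_KEY = "your-api-key"

# Illustrative update payload -- not AI Guardian's real schema.
update = {
    "mode": "block",  # switch from auditing violations to blocking them
    "controls": {
        "sensitive_data": "zero_tolerance",  # e.g. salaries, financials
        "prompt_injection": "block",
        "nsfw_content": "block",
    },
}

resp = requests.patch(
    "https://guardian.example.com/v1/endpoints/hr-assistant",
    json=update,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print("Policy updated:", resp.json())
```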

With the application updated, let's retry the same prompt.

Observe how the system now blocks the prompt, flagging it as not safe for work and refusing to process it. With this configuration, AI Guardian prevents any prompt that could compromise your data.
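An application sitting behind the guardian can detect the blocked verdict and return a safe refusal instead of an LLM answer. The response fields below (`action`, `reason`, `response`) are purely illustrative assumptions about the payload shape.

```python
import requests

API_KEY = "your-api-key"

chat = requests.post(
    "https://guardian.example.com/v1/endpoints/hr-assistant/chat",
    json={"prompt": "What is the salary of the CTO?"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
result = chat.json()

# A blocked prompt never reaches the LLM, so the app can fall back
# to a safe refusal message instead of surfacing sensitive data.
if result.get("action") == "blocked":
    print("Refused:", result.get("reason", "policy violation"))
else:
    print(result["response"])
```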
