Building Responsible AI with Guardrails AI

Ajay Last Updated : 03 May, 2024
8 min read

Introduction

Large Language Models (LLMs) are ubiquitous in applications such as chat assistants, voice assistants, travel agents, and call centers. As new LLMs are released, their response generation keeps improving. However, the prompts people send to ChatGPT and other LLMs may contain personally identifiable information (PII) or toxic language. To protect against these kinds of data, this article explores a library called Guardrails-AI, which aims to address these issues by providing a secure and reliable way to validate prompts and responses.

Learning Objectives

  • Gain an understanding of the role of Guardrails in enhancing the safety and reliability of AI applications, particularly those utilizing Large Language Models (LLMs).
  • Learn about the features of Guardrails-AI, including its ability to detect and mitigate harmful content such as toxic language, personally identifiable information (PII), and secret keys.
  • Explore the Guardrails Hub, an online repository of validators and components, and understand how to leverage it to customize and enhance the functionality of Guardrails-AI for your specific applications.
  • Learn how Guardrails-AI can detect and mitigate harmful content in both user prompts and LLM responses, thereby upholding user privacy and safety standards.
  • Gain practical experience in configuring Guardrails-AI for AI applications by installing validators from the Guardrails Hub and customizing them to suit your specific use cases.

This article was published as a part of the Data Science Blogathon.

What is Guardrails-AI?

Guardrails-AI is an open-source project that lets us build responsible and reliable AI applications with Large Language Models. Guardrails-AI applies guardrails both to the input User Prompts and to the responses generated by the Large Language Models. It even supports generating structured output directly from the Large Language Models.

Guardrails-AI uses various guards to validate User Prompts, which may contain Personally Identifiable Information, toxic language, or secret passwords. These validations are crucial when working with closed-source models, where PII data and API secrets in prompts pose serious data security risks. Guardrails also checks for prompt injection and jailbreaks, which attackers may use to extract confidential information from Large Language Models. This is especially important when working with closed-source models that are not running locally.

On the other hand, guardrails can even be applied to the responses generated by the Large Language Models. Sometimes an LLM generates output that contains toxic language, hallucinates an answer, or includes competitor information. All of these must be validated before the response is sent to the end user, so Guardrails comes with different Components to catch them.
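As a rough sketch of this output-side use, the same Guard pattern introduced in the steps below can be pointed at text the LLM has already returned; llm_response here is just a placeholder string, and ToxicLanguage is the validator installed in the next section:

from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Guard configured to check generated text before it reaches the end user
output_guard = Guard().use(
    ToxicLanguage, threshold=0.5,
    validation_method="sentence",
    on_fail="exception")

# llm_response stands in for text returned by whichever LLM you call
llm_response = "Here is the travel itinerary you asked for."

try:
    output_guard.validate(llm_response)
    print("Response is safe to send to the user")
except Exception as e:
    print("Response blocked:", e)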

Guardrails comes with the Guardrails Hub. In this Hub, different Components are developed by the open-source community. Each Component is a separate Validator that validates either the input Prompt or the Large Language Model's answer. We can download these validators and work with them in our code.
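Each validator listed on the Hub is installed with the guardrails CLI. The general pattern, which the steps below use with concrete validator names, looks like this (with <validator_name> as a placeholder):

!guardrails hub install hub://guardrails/<validator_name>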

Getting Started with Guardrails-AI

In this section, we will get started with Guardrails AI by downloading and installing the library. For this, we will work with the following code.

Step 1: Downloading Guardrails

!pip install -q guardrails-ai

The above command will download and install the guardrails-ai library for Python. The guardrails-ai library comes with a hub containing many individual guardrail Components that can be applied to User Prompts and to Large Language Model generated answers. Most of these Components are created by the open-source community.

To work with these Components from the Guardrails Hub, we need to sign up for the Guardrails Hub with our GitHub account. You can click this link (https://hub.guardrailsai.com/) to sign up for the Guardrails Hub. After signing up, we get a token, which we pass to guardrails so that it is configured to work with these Components.

Step 2: Configure Guardrails

Now we will run the below command to configure our Guardrails.

!guardrails configure

Before running the above command, go to this link https://hub.guardrailsai.com/tokens to get the API token. When we run the command, it prompts us for an API token, and we paste in the token we have just received. After passing the token, we get a confirmation output.


We see that we have successfully logged in. Now we can download different Components from the Guardrails Hub.

Step 3: Import Toxic Language Detector

Let’s start by importing the toxic language detector:

!guardrails hub install hub://guardrails/toxic_language

The above command downloads the ToxicLanguage Component from the Guardrails Hub. Let us test it with the code below:

from guardrails.hub import ToxicLanguage
from guardrails import Guard

guard = Guard().use(
    ToxicLanguage, threshold=0.5, 
    validation_method="sentence", 
    on_fail="exception")

guard.validate("You are a great person. We work hard every day 
to finish our tasks")
  • Here, we first import the ToxicLanguage validator from guardrails.hub and the Guard class from guardrails.
  • Then we instantiate a Guard() object and call its use() function.
  • To this use() function, we pass the Validator, i.e. ToxicLanguage, and then we pass threshold=0.5.
  • The validation_method is set to "sentence", which means the toxicity of the User Prompt is measured at the sentence level. Finally, we set on_fail to "exception", meaning that an exception is raised when the validation fails.
  • Finally, we call the validate() function of the guard object and pass it the sentences that we wish to validate.
  • Here, both of these sentences do not contain any toxic language.

Running the code returns a ValidationOutcome object that contains different fields. We see that the validation_passed field is set to True, meaning that our input has passed the toxic language validation.
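Because on_fail="exception" only raises when validation fails, we can also inspect the returned ValidationOutcome directly for passing inputs. A minimal sketch, assuming a validated_output field alongside the validation_passed field described above:

result = guard.validate("You are a great person. We work hard every day to finish our tasks")

# Inspect the fields of the ValidationOutcome object
print(result.validation_passed)   # True for this non-toxic input
print(result.validated_output)    # the text after validation (assumed field name)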

Step 4: Toxic Inputs

Now let us try with some toxic inputs:

try:
  guard.validate(
          "Please look carefully. You are a stupid idiot who can't do \
          anything right. You are a good person"
  )
except Exception as e:
  print(e)
"

Above, we have given a toxic input. We enclose the validate() function inside a try-except block because it will raise an exception. Running the code and observing the output, we see that an exception was raised with a Validation Failed error, and it even points out the particular sentence where the toxicity is present.

One of the necessary things to do before sending a User Prompt to the LLM is to detect any PII data present. Therefore, we need to validate the User Prompt for Personally Identifiable Information before passing it to the LLM.

Step 5: Download the DetectPII Component

Now let us download this Component from the Guardrails Hub and test it with the code below:

!guardrails hub install hub://guardrails/detect_pii

from guardrails import Guard
from guardrails.hub import DetectPII

guard = Guard().use(
    DetectPII(
        pii_entities=["EMAIL_ADDRESS","PHONE_NUMBER"]
    )
)

result = guard.validate("Please send these details to my email address")

if result.validation_passed:
  print("Prompt doesn't contain any PII")
else:
  print("Prompt contains PII Data")

result = guard.validate("Please send these details to my email address \
[email protected]")

if result.validation_passed:
  print("Prompt doesn't contain any PII")
else:
  print("Prompt contains PII Data")
  • We first download the DetectPII Component from the Guardrails Hub.
  • We import DetectPII from guardrails.hub.
  • Similarly, we define a Guard() object, call its .use() function, and pass DetectPII() to it.
  • To DetectPII, we pass the pii_entities variable, which takes a list of PII entities that we want to detect in the User Prompt. Here, we pass the email address and the phone number as the entities to detect.
  • Finally, we call the .validate() function of the guard object and pass the User Prompt to it. The first Prompt does not contain any PII data.
  • We write an if condition to check whether the validation passed.
  • Similarly, we give another prompt that contains PII data, such as an email address, and again check the validation with an if condition.
  • In the output, we can see that the validation passed for the first example, because there is no PII data in the first Prompt. The second Prompt contains PII information, hence we see the output “Prompt contains PII Data”. (A phone-number example is sketched below.)
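The same guard also covers the other configured entity, PHONE_NUMBER. Below is a minimal sketch using a made-up number in US format; whether a particular string is flagged depends on the underlying PII recognizer:

result = guard.validate("You can reach me at (212) 555-0198 after 5 pm")

if result.validation_passed:
  print("Prompt doesn't contain any PII")
else:
  print("Prompt contains PII Data")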

When working with LLMs for code generation, there will be cases where users paste API keys or other crucial information into the code. This needs to be detected before the text is sent over the internet to closed-source Large Language Models. For this, we will download the following validator and work with it in this case.

Step 6: Downloading the SecretsPresent Validator

!guardrails hub install hub://guardrails/secrets_present
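Below is a minimal sketch of the code described in the points that follow, assuming the SecretsPresent validator exported by guardrails.hub; the weather API key shown is a made-up placeholder:

from guardrails import Guard
from guardrails.hub import SecretsPresent

# Guard that checks User Prompts for embedded secrets such as API keys
guard = Guard().use(SecretsPresent)

# A prompt that contains code but no secrets
result = guard.validate(
    "Can you debug this function for me?\n"
    "def add_numbers(a, b):\n"
    "    return a + b"
)
print(result)

# A prompt that embeds a made-up weather API key
result = guard.validate(
    "Why does this request fail?\n"
    "weather_api_key = 'fake-weather-api-key-12345'\n"
    "response = requests.get(url, params={'key': weather_api_key})"
)
print(result)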
  • We first download the SecretsPresent Validator from the Guardrails Hub.
  • We import SecretsPresent from guardrails.hub.
  • To work with this Validator, we create a Guard object by calling the Guard class, calling the .use() function, and giving it the SecretsPresent Validator.
  • Then, we pass it a User Prompt that contains code, asking it to debug the code.
  • We call the .validate() function, pass the prompt to it, and print the response.
  • We do the same thing again, but this time we pass in a User Prompt that includes an API secret key, and pass it to the Validator.

Running this code, we can see that in the first case, validation_passed was set to True, because there is no API key or any such secret present in that User Prompt. For the second User Prompt, validation_passed is set to False. This is because there is a secret key, i.e. the weather API key, present in the User Prompt. Hence we see a validation failed result.

Conclusion

Guardrails-AI is an essential tool for building responsible and reliable AI applications with large language models (LLMs). It provides comprehensive protection against harmful content, personally identifiable information (PII), toxic language, and other sensitive data that could compromise the safety and security of users. Guardrails-AI offers an extensive range of validators that can be customized and tailored to suit the needs of different applications, ensuring data integrity and compliance with ethical standards. By leveraging the components available in the Guardrails Hub, developers can enhance the performance and safety of LLMs, ultimately creating a more positive user experience and mitigating risks associated with AI technology.

Key Takeaways

  • Guardrails-AI is designed to enhance the safety and reliability of AI applications by validating input prompts and LLM responses.
  • It effectively detects and mitigates toxic language, PII, secret keys, and other sensitive information in user prompts.
  • The library supports the customization of guardrails through various validators, making it adaptable to different applications.
  • By using Guardrails-AI, developers can maintain ethical and compliant AI systems that protect users’ information and uphold safety standards.
  • The Guardrails Hub provides a diverse selection of validators, enabling developers to create robust guardrails for their AI projects.
  • Integrating Guardrails-AI can help prevent security risks and protect user privacy in closed-source LLMs.

Frequently Asked Questions

Q1. What is Guardrails-AI?

A. Guardrails-AI is an open-source library that enhances the safety and reliability of AI applications using large language models by validating both input prompts and LLM responses for toxic language, personally identifiable information (PII), secret keys, and other sensitive data.

Q2. What can Guardrails-AI detect in user prompts?

A. Guardrails-AI can detect toxic language, PII (such as email addresses and phone numbers), secret keys, and other sensitive information in user prompts before they are sent to large language models.

Q3. What is the Guardrails Hub?

A. The Guardrails Hub is an online repository of various validators and components created by the open-source community that can be used to customize and enhance the functionality of Guardrails-AI.

Q4. How does Guardrails-AI help in maintaining ethical AI systems?

A. Guardrails-AI helps maintain ethical AI systems by validating input prompts and responses to ensure they do not contain harmful content, PII, or sensitive information, thereby upholding user privacy and safety standards.

Q5. Can Guardrails-AI be customized for different applications?

A. Yes, Guardrails-AI offers various validators that can be customized and tailored to suit different applications, allowing developers to create robust guardrails for their AI projects.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.

I work as a Developer in the field of Data Science. I constantly spend time learning new things, be it related to AI, Data Science, or Cyber Security. Deep learning and machine learning are two topics that I find particularly fascinating, and Python is my preferred programming language. Cyber Security is another field that I have been exploring recently. I have experience with large-scale data analysis, and I have a solid grasp of a variety of deep learning and machine learning approaches, including neural networks, regression models, and natural language processing. I'm eager to take on new challenges and make a meaningful contribution to the industry, so I'm constantly seeking ways to expand and deepen my knowledge and skills in the subject.
