Code lab 5.3 – Blue team defend!
This code can be found in the CHAPTER5-3_SECURING_YOUR_KEYS.ipynb
file in the CHAPTER5
directory of the GitHub repository.
There are a number of solutions we can implement to prevent this attack from revealing our prompt. We are going to address this with a second LLM that acts as the guardian of the response. Using a second LLM to check the original response or to format and understand the input is a common solution for many RAG-related applications. We will show how to use it to better secure the code.
It is important to note up front, though, that this is just one example of a solution. The great security battle against potential adversaries is always shifting and changing. You must continuously stay diligent and come up with new and better solutions to prevent security breaches.
Add this line to your imports:
from langchain_core.prompts import PromptTemplate
This imports the PromptTemplate
class from the langchain_core.prompts...