ChatGPT for Cybersecurity Cookbook
Learn practical generative AI recipes to supercharge your cybersecurity skills

Product type: Paperback
Published in: Mar 2024
Publisher: Packt
ISBN-13: 9781805124047
Length: 372 pages
Edition: 1st Edition
Author: Clint Bodungen
Table of Contents (13)

Preface
1. Chapter 1: Getting Started: ChatGPT, the OpenAI API, and Prompt Engineering
2. Chapter 2: Vulnerability Assessment
3. Chapter 3: Code Analysis and Secure Development
4. Chapter 4: Governance, Risk, and Compliance (GRC)
5. Chapter 5: Security Awareness and Training
6. Chapter 6: Red Teaming and Penetration Testing
7. Chapter 7: Threat Monitoring and Detection
8. Chapter 8: Incident Response
9. Chapter 9: Using Local Models and Other Frameworks
10. Chapter 10: The Latest OpenAI Features
11. Index
12. Other Books You May Enjoy

Using Prompt Variables (Application: Manual Page Generator)

In this recipe, we’ll create a Linux-style manual page generator that accepts a tool’s name as user input and generates the corresponding manual page output, similar to running the man command in a Linux terminal. In doing so, we will learn how to use variables in a text file to create a standard prompt template that can be easily modified by changing certain aspects of it. This approach is particularly useful when you want to use user input or other dynamic content as part of the prompt while maintaining a consistent structure.

Getting ready

Ensure you have access to the ChatGPT API by logging in to your OpenAI account and have Python and the openai module installed.

How to do it…

Using a text file that contains the prompt and placeholder variables, we can create a Python script that will replace the placeholder with user input. In this example, we will use this technique to create a Linux-style manual page generator. Here are the steps:

  1. Create a Python script and import the necessary modules:
    from openai import OpenAI
  2. Define a function to open and read a file:
    def open_file(filepath):
        with open(filepath, 'r', encoding='UTF-8') as infile:
            return infile.read()
  3. Set up your API key by reading it from a file and passing it to the client:
    client = OpenAI(api_key=open_file('openai-key.txt'))
  4. Create the openai-key.txt file in the same manner as the previous recipe.
  5. Define the get_chat_gpt_response() function to send the prompt to ChatGPT and obtain a response:
    def get_chat_gpt_response(prompt):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=600,
            temperature=0.7
        )
        return response.choices[0].message.content.strip()
  6. Receive user input for the name of the tool:
    feed = input("ManPageGPT> $ Enter the name of a tool: ")
  7. Replace the <<INPUT>> variable in the prompt.txt file with the user’s input:
    prompt = open_file("prompt.txt").replace('<<INPUT>>', feed)
  8. Create the prompt.txt file with the following text:
    Provide the manual-page output for the following tool. Provide the output exactly as it would appear in an actual Linux terminal and nothing else before or after the manual-page output.
    <<INPUT>>
  9. Send the modified prompt to the get_chat_gpt_response() function and print the result:
    analysis = get_chat_gpt_response(prompt)
    print(analysis)

    Here’s an example of how the complete script should look:

    from openai import OpenAI

    def open_file(filepath):
        with open(filepath, 'r', encoding='UTF-8') as infile:
            return infile.read()

    client = OpenAI(api_key=open_file('openai-key.txt'))

    def get_chat_gpt_response(prompt):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=600,
            temperature=0.7
        )
        return response.choices[0].message.content.strip()

    feed = input("ManPageGPT> $ Enter the name of a tool: ")
    prompt = open_file("prompt.txt").replace('<<INPUT>>', feed)
    analysis = get_chat_gpt_response(prompt)
    print(analysis)

How it works…

In this example, we created a Python script that utilizes a text file as a prompt template. The text file contains a variable called <<INPUT>> that can be replaced with any content, allowing for dynamic modification of the prompt without the need to change the overall structure. Specifically for this case, we are replacing it with user input:

  1. The OpenAI class is imported from the openai module so that we can create a client and access the ChatGPT API.
  2. The open_file() function is defined to open and read a file. It takes a file path as an argument, opens the file with read access and UTF-8 encoding, reads the content, and then returns the content.
  3. The API key for accessing ChatGPT is read from a file using the open_file() function and passed to the OpenAI client when it is created.
  4. The get_chat_gpt_response() function is defined to send a prompt to ChatGPT and return the response. It takes the prompt as an argument, configures the API request with the desired settings, sends the request to the ChatGPT API, extracts the response text, removes leading and trailing whitespace, and returns it.
  5. The script receives user input for the name of the tool. This input will be used to replace the placeholder in the prompt template.
  6. The <<INPUT>> variable in the prompt.txt file is replaced with the user’s input. This is done using Python’s string replace() method, which searches for the specified placeholder and replaces it with the desired content.
  7. Prompt explanation: For this particular prompt, we tell ChatGPT exactly what type of output and formatting we are expecting since it has access to just about every manual page entry that can be found on the internet. By instructing it to provide nothing before or after the Linux-specific output, ChatGPT will not provide any additional details or narrative, and the output will resemble actual Linux output when using the man command.
  8. The modified prompt, with the <<INPUT>> placeholder replaced, is sent to the get_chat_gpt_response() function. The function sends the prompt to ChatGPT, which retrieves the response, and the script prints the analysis result. This demonstrates how to use a prompt template with a variable that can be replaced to create customized prompts for different inputs.
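The substitution mechanism described in steps 5 and 6 can be sketched in isolation; the shortened template string below is a stand-in for the full contents of prompt.txt:

```python
# A shortened stand-in for the contents of prompt.txt
template = ("Provide the manual-page output for the following tool. "
            "Provide the output exactly as it would appear in an actual "
            "Linux terminal.\n<<INPUT>>")

# str.replace() swaps every occurrence of the placeholder for the user input
prompt = template.replace('<<INPUT>>', 'nmap')
print(prompt.splitlines()[-1])  # prints "nmap"
```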

This approach is particularly useful in a cybersecurity context as it allows you to create standard prompt templates for different types of analysis or queries and easily modify the input data as needed.

There’s more...

  1. Use multiple variables in your prompt template: You can use more than one variable in your prompt template to make it even more versatile. For example, you can create a template with placeholders for different components of a cybersecurity analysis, such as IP addresses, domain names, and user agents. Just make sure you replace all the necessary variables before sending the prompt to ChatGPT.
  2. Customize the variable format: Instead of using the <<INPUT>> format, you can customize your variable format to better suit your needs or preferences. For example, you can use curly braces (for example, {input}) or any other format that you find more readable and manageable.
  3. Use environment variables for sensitive data: When working with sensitive data such as API keys, it’s recommended to use environment variables to store them securely. You can modify the open_file() function to read an environment variable instead of a file, ensuring that sensitive data is not accidentally leaked or exposed.
  4. Error handling and input validation: To make your script more robust, you can add error handling and input validation. This can help you catch common issues, such as missing or improperly formatted files, and provide clear error messages to guide the user in correcting the problem.
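As a sketch of the first two ideas above, a template can carry several placeholders in a custom format; the curly-brace style, field names, and sample values below are illustrative choices, not part of the recipe:

```python
# Hypothetical multi-variable template using curly-brace placeholders
template = (
    "Analyze the following indicators of compromise:\n"
    "IP address: {ip}\n"
    "Domain: {domain}\n"
    "User agent: {user_agent}\n"
)

def fill_template(template, **fields):
    """Replace each {name} placeholder with the matching keyword argument."""
    for name, value in fields.items():
        template = template.replace('{' + name + '}', value)
    return template

prompt = fill_template(template,
                       ip='203.0.113.7',
                       domain='example.com',
                       user_agent='Mozilla/5.0')
```

Remember to replace every placeholder before sending the prompt; an unfilled `{name}` left in the text will be sent to ChatGPT verbatim.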

By exploring these additional techniques, you can create more powerful, flexible, and secure prompt templates for use with ChatGPT in your cybersecurity projects.
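A minimal sketch of the environment-variable and error-handling suggestions, assuming the key is stored in an environment variable named OPENAI_API_KEY (the variable name and error messages are illustrative):

```python
import os
import sys

def get_api_key():
    """Read the API key from an environment variable instead of a file."""
    key = os.environ.get('OPENAI_API_KEY')
    if not key:
        sys.exit("Error: the OPENAI_API_KEY environment variable is not set.")
    return key

def open_file(filepath):
    """Read a file, exiting with a clear message if it is missing."""
    try:
        with open(filepath, 'r', encoding='UTF-8') as infile:
            return infile.read()
    except FileNotFoundError:
        sys.exit(f"Error: could not find {filepath}.")
```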
