Unlocking the Power of Auto-GPT and Its Plugins

Introducing Auto-GPT

In the Preface, I wrote about what Auto-GPT is and where it came from, but I was asking myself, “Why would anyone read this book?”

I mean, it is what it is – an automated form of artificial intelligence (AI) that may or may not help you do some tasks or be a fun toy that can be very spooky sometimes, right?

I want you to have a clear understanding of what you can or cannot do with it.

Of course, the more creative you get, the more it can do, but sometimes the boundaries appear to be more or less random. For example, let’s say you just built a house-building robot that, for no apparent reason, refuses to make the front door blue, even though you really want a blue door; it keeps going off-topic or even starts explaining what doors are.

Auto-GPT can be very frustrating when it comes to these limitations, as they come from a combination of OpenAI’s restrictions (built into their GPT models) and the humans who write and edit Auto-GPT (along with you – the user who gives it instructions). What first appears to be a clear instruction can result in a very different outcome just by changing one single character.

For me, this is what makes it fascinating – it can behave like a living being with a mind of its own, one that may randomly choose to do something else entirely.

Note

Always keep in mind that this is a fast-moving project, so code can and will be changed until this book is released. It may also be the case that you bought this book much later and Auto-GPT is completely different. Most of the content in this book focuses on version 0.4.1, but changes have been made and considered regarding version 0.5.0 as well.

For example, once I finished the draft of this book, the “Forge” (an idea we had at a team meeting) had already been implemented. This was an experiment that allowed other developers to build their own Auto-GPT variation.

The Auto-GPT project is a framework that contains Auto-GPT, which we’ll be working with in this book, and can start other agents made by other developers. Those agents are in the repositories of the programmers who added them, so we won’t dive into them here.

In this chapter, we aim to introduce you to Auto-GPT, including its history and development, as well as LangChain. This chapter will help you understand what Auto-GPT is, its significance, and how it has evolved. By the end of this chapter, you will have a solid foundation to build upon as we explore more advanced topics in the subsequent chapters.

We will cover the following main topics in this chapter:

  • Overview of Auto-GPT
  • History and development of Auto-GPT
  • Introduction to LangChain

Overview of Auto-GPT

Auto-GPT is, more or less, an instance of the category its name already describes:

“An automated generative pretrained transformer”

This means it automates GPT or ChatGPT. In this book, however, the main focus is on the project named Auto-GPT. If you haven’t heard of it and just grabbed this book out of curiosity, then you’re in the right place!

Auto-GPT started as an experimental self-prompting AI application that is an attempt to create an autonomous system capable of creating “agents” to perform various specialized tasks to achieve larger objectives with minimal human input. It is based on OpenAI’s GPT and was developed by Toran Bruce Richards, who is better known by his GitHub handle Significant Gravitas.

Now, how does Auto-GPT think? Auto-GPT creates prompts that are fed to large language models (LLMs) and allows AI models to generate original content and execute command actions such as browsing, coding, and more. It represents a significant step forward in the development of autonomous AI, making it the fastest-growing open source project in GitHub’s history (at the time of writing).

Auto-GPT strings together multiple instances of OpenAI’s language model – GPT – and by doing so creates so-called “agents” that are each given simpler subtasks. These agents work together to accomplish complex goals, such as writing a blog, with minimal human intervention.

Now, let’s talk about how it rose to fame.

From an experiment to one of the fastest-growing GitHub projects

Auto-GPT was initially named Entrepreneur-GPT and was released on March 16, 2023. The initial goal of the project was to give GPT-4 autonomy to see if it could thrive in the business world and test its capability to make real-world decisions.

For some time, the development of Auto-GPT remained mostly unnoticed until late March 2023. However, on March 30, 2023, Significant Gravitas tweeted about the latest demo of Auto-GPT and posted a demo video, which began to gain traction. The real surge in interest came on April 2, 2023, when computer scientist Andrej Karpathy quoted one of Significant Gravitas’ tweets, saying that the next frontier of prompt engineering was Auto-GPT.

This tweet went viral, and Auto-GPT became a subject of discussion on social media. One of the agents that was created by Auto-GPT, known as ChaosGPT, became particularly famous when it was humorously assigned the task of “destroying humanity,” which contributed to the viral nature of Auto-GPT (https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity).

Of course, we don’t want to destroy humanity; for a reference on what Entrepreneur-GPT can do, take a look at the old logs of Entrepreneur-GPT here:

https://github.com/Significant-Gravitas/Auto-GPT/blob/c6f61db06cde7bd766e521bf7df1dc0c2285ef73/.

The more creative you are with your prompts and configuration, the more creative Auto-GPT will be. This will be covered in Chapter 2 when we run our first Auto-GPT instance together.

LLMs – the core of AI

Although Auto-GPT can be used with other LLMs, it best leverages the power of GPT-4, a state-of-the-art language model by OpenAI.

It offers a huge advantage for users who don’t own a graphics card large enough to run models comparable to GPT-4. Although there are many 7B and 13B LLMs (B stands for billion parameters) that do compete with ChatGPT, they cannot hold enough context in each prompt to be useful or are simply not stable enough.

At the time of writing, GPT-4 and GPT-3.5-turbo are both used with Auto-GPT by default. Depending on the complexity of the situation, Auto-GPT distinguishes between two types of models:

  • Smart model
  • Fast model

When does Auto-GPT use GPT-3.5-turbo and not GPT-4 all the time?

As Auto-GPT loops through its thought process, it uses the configured fast model, but when it summarizes the content of a website or writes code, it decides to use the smart model.

The default for the fast model is GPT-3.5-turbo. Although it isn’t as precise as GPT-4, its response time is much better, leading to a more fluent interaction; GPT-4 can seem stuck if it thinks for too long.
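The fast/smart split can be pictured as a simple dispatch on task type. The following is a hypothetical sketch: the task categories and the rule for choosing between them are illustrative assumptions, not Auto-GPT’s actual configuration logic.

```python
# Hypothetical sketch of a fast/smart model split.
# Task categories and selection rule are illustrative, not Auto-GPT's exact logic.
FAST_MODEL = "gpt-3.5-turbo"   # cheap and quick: used for the thought loop
SMART_MODEL = "gpt-4"          # precise but slower: used for heavy lifting

# Tasks assumed (for this sketch) to warrant the smart model
HEAVY_TASKS = {"write_code", "summarize_website"}

def pick_model(task: str) -> str:
    """Return the configured model name for a given task type."""
    return SMART_MODEL if task in HEAVY_TASKS else FAST_MODEL
```

In Auto-GPT itself, both model names are configurable, so "fast" and "smart" are roles rather than fixed models.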

OpenAI has also added new functionality to assist applications such as Auto-GPT. One example is the ability to call functions. Before this feature, Auto-GPT had to explain to GPT in plain text what a command is and how to formulate it correctly. This resulted in many errors, as GPT would sometimes change the syntax of the expected output. Function calling was a huge step forward: it reduces the complexity of how commands are communicated and executed, and it helps GPT better understand the context of each task.
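With function calling, a command is declared to the model as a JSON schema rather than described in prose. The following sketch shows the shape of such a declaration; the `browse_website` command name and its parameters are illustrative, not Auto-GPT’s exact command set.

```python
# Sketch of declaring a command as an OpenAI-style function schema.
# The command name and parameters here are illustrative assumptions.
browse_function = {
    "name": "browse_website",
    "description": "Open a URL and return a summary of the page",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {"type": "string", "description": "The URL to open"},
            "question": {"type": "string", "description": "What to look for"},
        },
        "required": ["url"],
    },
}

# A dict like this would be passed in the functions/tools list of a chat
# completion request; the model then replies with structured arguments
# instead of free text that has to be parsed and validated by hand.
```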

So, why don’t we use an LLM directly? Because LLMs are only responsive:

  • They cannot fulfill any tasks
  • Their knowledge is fixed, and they cannot update it themselves
  • They don’t remember anything; only frameworks that run them can do it
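The last point is the crux of why a framework is needed at all: the model is stateless, so the caller must resend the full context with every request. A minimal sketch of that pattern, with a stubbed-in function standing in for the real chat completion call (nothing here reflects Auto-GPT’s internals):

```python
# Minimal sketch: the LLM is stateless, so the caller keeps the history
# and resends it on every request. `fake_llm` stands in for a real
# chat-completion call and just reports how much context it received.
def fake_llm(messages: list[dict]) -> str:
    return f"reply (saw {len(messages)} messages)"

history = [{"role": "system", "content": "You are a helpful agent."}]

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)          # the full history goes with every call
    history.append({"role": "assistant", "content": reply})
    return reply
```

Without the caller-side `history` list, the second call would have no idea the first ever happened.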

How does Auto-GPT make use of LLMs?

Auto-GPT is structured in a way that it takes in an initial prompt from the user via the terminal:

Figure 1.1 – Letting Auto-GPT define its role

Here, you can either define a main task or enter --manual to then answer questions, as shown here:

Figure 1.2 – Setting Auto-GPT’s main goals

The main prompt is then saved as an ai_settings.yaml file that may look like this:

ai_goals:
- Conduct a thorough analysis of the current state of the book and identify areas for improvement.
- Develop a comprehensive plan for creating task lists that will help you structure research, a detailed outline per chapter and individual parts.
- Be sure to ask the user for feedback and improvements.
- Continuously assess the current state of the work and use the speak property to give the user positive affirmations.
ai_name: AuthorGPT
ai_role: An AI-powered author and researcher specializing in creating comprehensive, well-structured, and engaging content on Auto-GPT and its plugins, while maintaining an open line of communication with the user for feedback and guidance.
api_budget: 120.0

Let’s look at some of the AI components in the preceding file:

  • First, we have ai_goals, which specifies the main tasks that Auto-GPT must undertake. It will use those to decide which individual steps to take. Each iteration will decide to follow one of the goals.
  • Then, we have ai_name, which is also taken as a reference and defines parts of the behavior or character of the bot. This means that if you call it AuthorGPT, it will play the role of a GPT-based author, while if you call it Author, it will try to behave like a person. It is generally hard to tell how it will behave because GPT mostly decides what it puts out on its own.
  • Finally, we have ai_role, which can be viewed as a more detailed role description. However, in my experience, it only nudges the thoughts slightly. Goals are more potent here.
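The way these fields feed the model can be sketched as simple prompt assembly. The exact wording Auto-GPT uses is different; this hypothetical snippet only shows the principle that name, role, and goals become part of the system prompt.

```python
# Hypothetical sketch of turning ai_settings fields into a system prompt.
# The real prompt Auto-GPT builds is longer and differently worded.
settings = {
    "ai_name": "AuthorGPT",
    "ai_role": "An AI-powered author and researcher",
    "ai_goals": [
        "Analyze the current state of the book",
        "Ask the user for feedback",
    ],
}

def build_system_prompt(cfg: dict) -> str:
    """Assemble name, role, and numbered goals into one prompt string."""
    goals = "\n".join(f"{i}. {g}" for i, g in enumerate(cfg["ai_goals"], 1))
    return f"You are {cfg['ai_name']}, {cfg['ai_role']}.\n\nGOALS:\n{goals}"
```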

Once this is done, it summarizes what it’s going to do and starts thinking:

Figure 1.3 – Example of Auto-GPT’s thought process

Thinking generally means that it is sending a chat completion request to the LLM.

This process can be slow – the more tokens that are used, the more processing that’s needed. In the Understanding tokens in LLMs section, we will take a look at what this means.

Once Auto-GPT has started “thinking,” it initiates a sequence of AI “conversations.” During these conversations, it forms a query, sends it to the LLM, and then processes the response. This process repeats until it finds a satisfactory solution or reaches the end of its thinking time.

This entire process produces thoughts. These fall into the following categories:

  • Reasoning
  • Planning
  • Criticism
  • Speak
  • Command

These individual thoughts are then displayed in the terminal and the user is asked whether they want to approve the command or not – it’s that simple.

Of course, a lot more goes on here, including a prompt being built to create that response.

Simply put, Auto-GPT passes the name, role, goals, and some background information. You can see an example here: https://github.com/PacktPublishing/Unlocking-the-Power-of-Auto-GPT-and-Its-Plugins/blob/main/Auto-GPT_thoughts_example.md.

Auto-GPT’s thought process – understanding the one-shot action

Let’s understand the thought process behind this one-shot action:

  • Overview of the thought process: Auto-GPT operates on a one-shot action basis. This approach involves processing each data block that’s sent to OpenAI as a single chat completion action. The outcome of this process is that a response text from GPT is generated that’s crafted based on a specified structure.
  • Structure and task definition for GPT: The structure that’s provided to GPT encompasses both the task at hand and the format for the response. This dual-component structure ensures that GPT’s responses are not only relevant but also adhere to the expected conversational format.
  • Role assignment in Auto-GPT: There are two role assignments here:
    • System role: The “system” role is crucial in providing context. It functions as a vessel for information delivery and maintains the historical thread of the conversation with the LLM.
    • User role: Toward the end of the process, a “user” role is assigned. This role is pivotal in guiding GPT to determine the subsequent command to execute. It adheres to a predefined format, ensuring consistency in interactions.
  • Command options and decision-making: GPT is equipped with various command options, including the following:
    • Ask the user (ask_user)
    • Sending messages (send_message)
    • Browsing (browse)
    • Executing code (execute_code)

In some instances, Auto-GPT may opt not to select any command. This typically occurs in situations of confusion, such as when the provided task is unclear or when Auto-GPT completes a task and requires user feedback for further action.

Either way, each response is a single piece of autocompleted text – the LLM responds exactly once per request.

In the following example, I have the planner plugin activated; more on plugins later:

{
  "thoughts": {
    "text": "I need to start the planning cycle to create a plan for the book.",
    "reasoning": "Starting the planning cycle will help me outline the steps needed to achieve my goals.",
    "plan": "- run_planning_cycle\n- research Auto-GPT and its plugins\n- collaborate with user\n- create book structure\n- write content\n- refine content based on feedback",
    "criticism": "I should have started the planning cycle earlier to ensure a smooth start.",
    "speak": "I'm going to start the planning cycle to create a plan for the book."
  },
  "command": {
    "name": "run_planning_cycle",
    "args": {}
  }
}
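Because the whole reply arrives as one JSON text, the framework’s job reduces to parsing it and dispatching the command. A sketch (assuming the reply is already valid JSON; real code would also handle malformed replies):

```python
import json

# A one-shot reply arrives as a single JSON string (shortened here).
raw = """
{
  "thoughts": {
    "text": "I need to start the planning cycle.",
    "speak": "I'm going to start the planning cycle."
  },
  "command": {"name": "run_planning_cycle", "args": {}}
}
"""

reply = json.loads(raw)
command_name = reply["command"]["name"]   # what to execute next
command_args = reply["command"]["args"]   # its arguments
spoken_line = reply["thoughts"]["speak"]  # read aloud if TTS is enabled
```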

Each thought property is then displayed to the user and the “speak” output is read aloud if text-to-speech is enabled:

"I am going to start the planning cycle to create a plan for the book. I want to run planning cycle."

The user can now respond in one of the following ways:

  • y: To accept the execution.
  • n: To decline the execution and close Auto-GPT.
  • s: To let Auto-GPT re-evaluate its decisions.
  • y -n: To tell Auto-GPT to just keep going for the number of steps (for example, enter y -5 to allow it to run on its own for 5 steps). Here, n is always a number.
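Interpreting these answers boils down to a small parser. The following is a sketch under stated assumptions: the return convention and the exact string handling are mine, not Auto-GPT’s source.

```python
# Sketch of interpreting the user's authorization input (y / n / s / y -N).
# Return value: ("run", steps) to execute, ("exit", 0) to quit,
# ("rethink", 0) to re-evaluate. The parsing details are an assumption.
def parse_authorization(answer: str) -> tuple[str, int]:
    answer = answer.strip().lower()
    if answer == "y":
        return ("run", 1)
    if answer == "n":
        return ("exit", 0)
    if answer == "s":
        return ("rethink", 0)
    if answer.startswith("y -"):
        return ("run", int(answer[3:]))   # e.g. "y -5" -> run 5 steps
    return ("invalid", 0)
```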

If the user confirms, the command is executed and the result of that command is added as system content:

# If there is a result from the command, append it to the message
# history
if result is not None:
    self.history.add("system", result, "action_result")

At this point, you’re probably wondering what history is in this context and why self?

Auto-GPT uses agents, and each agent instance has its own history that acts as a short-term memory. It contains the context of the previous messages and results.

The history is trimmed down on every run cycle of the agent to make sure it doesn’t reach its token limit.
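Trimming can be sketched as keeping the most recent messages that fit a token budget. This is a simplification: the ~4-characters-per-token estimate is a rough heuristic, not the model’s actual tokenizer, and Auto-GPT’s real trimming logic is more involved.

```python
# Sketch of trimming short-term memory to a token budget.
# The ~4 chars/token estimate is a rough heuristic, not a real tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose total estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```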

So, why not directly ask the LLM for a solution? There are several reasons for this:

  • While LLMs are incredibly sophisticated, they cannot solve complex, multi-step problems in a single query. Instead, they need to be asked a series of interconnected questions that guide them toward a final solution. This is where Auto-GPT shines – it can strategically ask these questions and digest the responses.
  • LLMs can’t maintain their context. They don’t remember previous queries or answers, which means they cannot build on past knowledge to answer future questions. Auto-GPT compensates for this by maintaining a history of the conversation, allowing it to understand the context of previous queries and responses and use that information to craft new queries.
  • While LLMs are powerful tools for generating human-like text, they cannot take initiative. They respond to prompts but don’t actively seek out new tasks or knowledge. Auto-GPT, on the other hand, is designed to be more proactive. It not only responds to the tasks that have been assigned to it but also proactively explores diverse ways to accomplish those tasks, making it a true autonomous agent.

Before we delve deeper into how Auto-GPT utilizes LLMs, it’s important to understand a key component of how these models process information: tokens.

Understanding tokens in LLMs

Tokens are the fundamental building blocks of LLMs such as GPT-3 and GPT-4. They are the units of text that a model reads and writes, and a token can represent a word, a symbol, or even a fragment of a word.

Tokenization in language processing

When training LLMs, text data is broken down into smaller units, or tokens. For instance, the sentence “ChatGPT is great!” would be divided into tokens such as ["ChatGPT", "is", "great", "!"]. The nature of a token can differ significantly across languages and coding paradigms:

  • In English, a token typically signifies a word or part of a word
  • In other languages, a token may represent a syllable or a character
  • In programming languages, tokens can include keywords, operators, or variables

Let’s look at some examples of tokenization:

  • Natural language: The sentence “ChatGPT is great!” tokenizes into ["ChatGPT", "is", "great", "!"].
  • Programming language: A Python code line such as print("Hello, World!") is tokenized as ["print", "(", "\"Hello, World!\"", ")"].
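A toy tokenizer makes these examples concrete. This is an illustration only: real GPT models use byte-pair encoding (BPE), which splits text into sub-word units and handles whitespace and punctuation differently than this simple word/punctuation split.

```python
import re

# Toy word/punctuation tokenizer, for illustration only.
# Real GPT models use byte-pair encoding (BPE), which produces different,
# often sub-word, tokens.
def toy_tokenize(text: str) -> list[str]:
    # \w+ matches runs of word characters; [^\w\s] matches single
    # punctuation characters; whitespace is skipped.
    return re.findall(r"\w+|[^\w\s]", text)
```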

Balancing detail and computational resources

Tokenization strategies aim to balance detail and computational efficiency. More tokens provide greater detail but require more resources for processing. This balance is crucial for the model’s ability to understand and generate text at a granular level.

Token limits in LLMs

The token limit signifies the maximum number of tokens that a model such as GPT-3 or GPT-4 can handle in a single interaction. This limit is in place due to the computational resources needed to process large numbers of tokens.

The token limit also influences the model’s “attention” capability – its ability to prioritize different parts of the input during output generation.

Implications of token limits

A model with a token limit may not fully process inputs that exceed this limit. For example, with a 20-token limit, a 30-token text would need to be broken into smaller segments for the model to process them effectively.
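The segmentation just described can be sketched as follows. For simplicity, token counting is reduced to one token per whitespace-separated word, which is only an approximation of how real tokenizers count.

```python
# Sketch of splitting an over-long input into segments under a token limit.
# Token counting is simplified to one token per whitespace-separated word.
def split_into_chunks(text: str, limit: int) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + limit]) for i in range(0, len(words), limit)]
```

Each chunk can then be processed in its own request, with a summary carried forward if the segments need to be related to each other.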

In programming, tokenization aids in understanding code structure and syntax, which is vital for tasks such as code generation or interpretation.

In summary, tokenization is a critical component in natural language processing (NLP), enabling LLMs to interpret and generate text in a meaningful and contextually accurate manner.

For instance, if you’re using the model to generate Python code and you input ["print", "("] as tokens, you’d expect the model to generate tokens that form a valid argument to the print function – for example, ["\"Hello, World!\"", ")"].

In the following chapters, we will delve deeper into how Auto-GPT works, its capabilities, and how you can use it to solve complex problems or automate tasks. We will also cover its plugins, which extend its functionality and allow it to interact with external systems so that it can order a pizza, for instance.

In a nutshell, Auto-GPT is like a very smart, very persistent assistant that leverages the power of the most advanced AI to accomplish the goals you set for it. Whether you’re an AI researcher, a developer, or simply someone who is fascinated by the potential of AI, I hope this book will provide you with the knowledge and inspiration you need to make the most of Auto-GPT.

At the time of writing (June 1, 2023), Auto-GPT can give you feedback not only through the terminal. There are a variety of text-to-speech engines that are currently built into Auto-GPT. Depending on what you prefer, you can either use the default, which is Google’s text-to-speech option, ElevenLabs, macOS’ say command (a low-quality Siri voice pack), or Silero TTS.

When it comes to plugins, Auto-GPT becomes even more powerful. Currently, there is an official repository for plugins that contains a list of awesome plugins such as Planner Plugin, Discord, Telegram, Text Generation for local or different LLMs, and more.

This modularity makes Auto-GPT the most exciting thing I’ve ever laid my hands on.

Launching and advancing Auto-GPT – a story of innovation and community

Auto-GPT’s development began with a bold vision to make the sophisticated technology of GPT-4 accessible and user-friendly. This initiative marked the start of an ongoing journey, with the project continually evolving through the integration of new features and improvements. At its core, Auto-GPT is a collaborative effort, continuously shaped by the input of a dedicated community of developers and researchers.

The genesis of Auto-GPT can be traced back to the discovery of GPT-4’s potential for autonomous task completion. This breakthrough was the catalyst for creating a platform that could fully utilize GPT-4’s capabilities, offering users extensive control and customization options.

The project gained initial popularity with an early version known as Entrepreneur-GPT, a key milestone that showcased Auto-GPT’s capabilities at the time. This phase of the project (documented here:

https://github.com/PacktPublishing/Unlocking-the-Power-of-Auto-GPT-and-Its-Plugins/blob/main/Entrepreneur-GPT.md) indicates the differences in prompts and functionalities compared to later stages. A review of the git history reveals Auto-GPT’s early abilities, including online research and using a local database for long-term memory.

The ascent of Auto-GPT was swift, attracting contributors – including myself – early in its development. My experience with this open source project was transformative, offering an addictive blend of passion and excitement for innovation. The dedication of the contributors brought a sense of pride, especially when seeing our work recognized by a wider audience, including popular YouTubers.

As an open source project, Auto-GPT thrived on voluntary contributions, leading to the formation of a team that significantly enhanced its structure. This team played a crucial role in managing incoming pull requests and guiding the development paths, thereby continually improving Auto-GPT’s core.

As its popularity grew, each new release of Auto-GPT brought enhanced power and functionality. These releases are stable versions that are meticulously tested by the community to ensure they are bug-free and ready for public use.

A critical component of Auto-GPT’s evolution is its plugins. These play a major role in the customization of the platform, allowing users to tailor it to their specific needs. Future discussions will delve deeper into these plugins and will explore their installation, usage, and impact on enhancing Auto-GPT’s capabilities. This exploration is vital as most customization happens through plugins unless significant contributions are made directly to the core platform through pull requests.

Introduction to LangChain

Although LangChain itself is not part of Auto-GPT, it is crucial to understanding Auto-GPT’s development, as LangChain focuses on controlling the process. This is in contrast to Auto-GPT’s emphasis on results without control.

LangChain is a powerful tool that enables users to build implementations of their own Auto-GPT using LLM primitives. It allows for explicit reasoning and the potential for Auto-GPT to become an autonomous agent.

With multiple alternatives to Auto-GPT arising, LangChain has become a part of many of them. One such example is AgentGPT.

LangChain’s unique approach to language processing and control makes it an essential part of AgentGPT’s functionality. By combining the strengths of LangChain and Auto-GPT, users can create powerful, customized solutions that leverage the full potential of GPT.

The intersection of LangChain and Auto-GPT

LangChain and Auto-GPT may have different areas of focus, but their shared goal of enhancing the capabilities of LLMs creates a natural synergy between them. LangChain’s ability to provide a structured, controllable process pairs well with Auto-GPT’s focus on autonomous task completion. Together, they provide an integrated solution that both controls the method and achieves the goal, striking a balance between the process and the result.

LangChain enables the explicit reasoning potential within Auto-GPT. It provides a pathway to transition the model from being a tool for human-directed tasks to a self-governing agent capable of making informed, reasoned decisions.

In addition, LangChain’s control over language processing enhances Auto-GPT’s ability to communicate user-friendly information in JSON format, making it an even more accessible platform for users. By optimizing language processing and control, LangChain significantly improves Auto-GPT’s interaction with users.

You can read more about it at https://docs.langchain.com/docs/.

Summary

In this chapter, we embarked on the exciting journey of exploring Auto-GPT, an innovative AI application that leverages the power of GPT-4 to autonomously solve tasks and operate in a browser environment. We delved into the history of Auto-GPT, understanding how it evolved from an ambitious experiment to a powerful tool that’s transforming the way we interact with AI.

We also explored the concept of tokens, which play a crucial role in how LLMs such as GPT-4 process information. Understanding this fundamental concept will help us better comprehend how Auto-GPT interacts with LLMs to generate meaningful and contextually relevant responses.

Furthermore, we touched on the role of LangChain, a tool that complements Auto-GPT by providing structured control over language processing. The intersection of LangChain and Auto-GPT creates a powerful synergy, enhancing the capabilities of Auto-GPT and paving the way for more advanced AI applications.

As we move forward, we will dive deeper into the workings of Auto-GPT, exploring its plugins, installation process, and how to craft effective prompts. We will also delve into more advanced topics, such as integrating your own LLM with Auto-GPT, setting up Docker, and safely and effectively using continuous mode.

Whether you’re an AI enthusiast, a developer, or simply someone curious about the potential of AI, this journey promises to be a fascinating one. So, buckle up, and let’s continue to unravel the immense potential of Auto-GPT together!

Description

Unlocking the Power of Auto-GPT and Its Plugins reveals how Auto-GPT is transforming the way we work and live, by breaking down complex goals into manageable subtasks and intelligently utilizing the internet and other tools. With a background as a self-taught full stack developer and key contributor to Auto-GPT’s Inner Team, the author blends unconventional thinking with practical expertise to make Auto-GPT and its plugins accessible to developers at all levels. This book explores the potential of Auto-GPT and its associated plugins through practical applications. Beginning with an introduction to Auto-GPT, it guides you through setup, utilization, and the art of prompt generation. You'll gain a deep understanding of the various plugin types and how to create them. The book also offers expert guidance on developing AI applications such as chat assistants, research aides, and speech companions, while covering advanced topics such as Docker configuration, continuous mode operation, and integrating your own LLM with Auto-GPT. By the end of this book, you'll be equipped with the knowledge and skills needed for AI application development, plugin creation, setup procedures, and advanced Auto-GPT features to fuel your AI journey.

Who is this book for?

This book is for developers, data scientists, and AI enthusiasts interested in leveraging the power of Auto-GPT and its plugins to create powerful AI applications. Basic programming knowledge and an understanding of artificial intelligence concepts are required to make the most of this book. Familiarity with the terminal will also be helpful.

What you will learn

  • Develop a solid understanding of Auto-GPT's fundamental principles
  • Hone your skills in creating engaging and effective prompts
  • Effectively harness the potential of Auto-GPT's versatile plugins
  • Tailor and personalize AI applications to meet specific requirements
  • Proficiently manage Docker configurations for advanced setup
  • Ensure the safe and efficient use of continuous mode
  • Integrate your own LLM with Auto-GPT for enhanced performance
Product Details

Publication date: Sep 13, 2024
Length: 142 pages
Edition: 1st
Language: English
ISBN-13: 9781805128281


Packt Subscriptions

See our plans and pricing

$19.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

$199.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
  • Exclusive print discounts

$279.99 billed in 18 months
  • Same features as the annual plan

Frequently bought together

  • Generative AI Application Integration Patterns: $49.99
  • Unlocking the Power of Auto-GPT and Its Plugins: $34.99
  • Building LLM Powered Applications: $49.99

Total: $134.97

Table of Contents (9 Chapters)

Chapter 1: Introducing Auto-GPT
Chapter 2: From Installation to Your First AI-Generated Text
Chapter 3: Mastering Prompt Generation and Understanding How Auto-GPT Generates Prompts
Chapter 4: Short Introduction to Plugins
Chapter 5: Use Cases and Customization through Applying Auto-GPT to Your Projects
Chapter 6: Scaling Auto-GPT for Enterprise-Level Projects with Docker and Advanced Setup
Chapter 7: Using Your Own LLM and Prompts as Guidelines
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.3 out of 5 (3 ratings)
5 star: 33.3%
4 star: 66.7%
3 star: 0%
2 star: 0%
1 star: 0%

Paul Pollock, Oct 04, 2024 (5 stars)
If you've been curious about building your own AI, like a virtual assistant that can handle tasks for you, this book is a fantastic guide. Wladislav Cugunov takes you through everything step by step, from setting up Python and Docker to scaling Auto-GPT for larger projects. His clear explanations and practical examples make even advanced topics accessible. Whether you're a beginner or an experienced developer, you'll find tons of useful tips, from prompt engineering to hands-on projects. Highly recommended for anyone who loves experimenting with tech or wants to dive deeper into AI.
Amazon Verified review

ivan, Oct 05, 2024 (4 stars)
The book on Auto-GPT and its plugins by Wladislav Cugunov is written in a way that makes it easy to grasp the positive impact Auto-GPT can have on generative AI, such as customer support, content automation, and better interactions between users and AI agents. The author simplifies the complexity of installation, prompt generation, and plugin development for Auto-GPT. What's more, the book provides practical examples of Auto-GPT's use, showcasing its ability to interact with apps, software, and services both online and locally. If you are interested in learning how Auto-GPT expands on generative AI tools by handling follow-ups to an initial prompt until the task is complete, and you want to develop your programming skills in this field, then this book is an exceptional starting point.
Amazon Verified review

Om S, Sep 27, 2024 (4 stars)
This book, "Unlocking the Power of Auto-GPT and Its Plugins," is a simple guide to learning Auto-GPT. It walks you through the setup process and explains how to use it step by step. The book teaches you how to create AI applications like chat assistants and speech tools. It also covers how to make good prompts and use plugins to extend the features. There are advanced sections on Docker and using your own language models. Everything is explained in clear, easy-to-understand language, making it great for anyone looking to learn AI development with Auto-GPT.
Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, a customs duty or localized taxes may be applicable. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect). Otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an item (eBook, Video, or Print Book) incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e., during download), you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple-item order, we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

In the unlikely event that your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal