
Create an AI-Powered Coding Project Generator.

  • 8 min read
  • 22 Jun 2023


Overview

Making a smart coding project generator can be a game-changer for developers. With the help of large language models (LLMs), we can generate entire code projects from a user-provided prompt.

In this article, we develop a Python program that uses OpenAI's GPT-3.5 to generate code projects and slide presentations from user-provided prompts. The program is designed as a command-line interface (CLI) tool, which makes it easy to use and integrate into various workflows.

 

Image 1: Weather App

Features

 Our project generator will have the following features:

  • Generates entire code projects based on user-provided prompts
  • Generates entire slide presentations based on user-provided prompts (watch a demo here)
  • Uses OpenAI's GPT-3.5 for code generation
  • Outputs to a local project directory

Example Usage

Our tool will be able to generate a code project from a user-provided prompt. For example, this command creates a snake game:

maiker "a snake game using just html and js"

 We can then open the generated project in our browser:

 open maiker-generated-project/index.html

Image 2: Generated Project

Implementation

 To ensure a comprehensive understanding of the project, let's break down the process of creating the AI-powered coding project generator step by step: 

1. Load environment variables: We use the `dotenv` package to load environment variables from a `.env` file. This file should contain your OpenAI API key.

from dotenv import load_dotenv

load_dotenv()
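
For reference, a minimal `.env` file placed alongside the script only needs the API key; the value below is a placeholder:

OPENAI_API_KEY=sk-your-api-key-here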

2. Set up OpenAI API client: We set up the OpenAI API client using the API key loaded from the environment variables.

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

3. Define the `generate_project` function: This function is responsible for generating code projects or slide presentations based on the user-provided prompt. Let's break down the function in more detail.

from typing import Dict

def generate_project(prompt: str, previous_response: str = "", type: str = "code") -> Dict[str, str]:

 The function takes three arguments:

  • prompt: The user-provided prompt describing the project to be generated.
  • previous_response: A string containing the previously generated files, if any. This is used to avoid regenerating the same files when the generation loop runs more than once.
  • type: The type of project to generate, either "code" or "presentation".

 Inside the function, we first create the system and user prompts based on the input type (code or presentation).

if type == "presentation":
    # ... (presentation-related prompts)
else:
    # ... (code-related prompts)

 For code projects, we create a system prompt that describes the role of the API as a code generator and a user prompt that includes the project description and any previously generated files.

 For presentations, we create a system prompt that describes the role of the API as a reveal.js presentation generator and a user prompt that includes the presentation description.
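
The snippets above elide the exact prompt text, but a simplified sketch of how the two branches could assemble their prompts might look like this. The wording is illustrative only; the repository's actual prompts may differ:

if type == "presentation":
    # Illustrative wording only, not the repository's exact prompt.
    system_prompt = (
        "You are a reveal.js presentation generator. Return a JSON object "
        "mapping file names to file contents for a complete presentation."
    )
    user_prompt = f"Presentation description: {prompt}"
else:
    # Illustrative wording only, not the repository's exact prompt.
    system_prompt = (
        "You are a code generator. Return a JSON object mapping file names "
        "to file contents for a complete, working project."
    )
    user_prompt = (
        f"Project description: {prompt}\n"
        f"Files already generated (do not repeat them): {previous_response}"
    )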


 Next, we call the OpenAI API to generate the code or presentation using the created system and user prompts.

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": system_prompt,
        },
        {
            "role": "user",
            "content": user_prompt,
        },
    ],
    temperature=0,
)
 

We use the `openai.ChatCompletion.create` method to send a request to the GPT-3.5 model. The `messages` parameter contains an array of two messages: the system message and the user message. The `temperature` parameter is set to 0 to encourage deterministic output.

Once we receive the response from the API, we extract the generated code from it.

 generated_code = completion.choices[0].message.content

Parsing the generated code: We then attempt to parse the generated code as a JSON object. If the parsing is successful, we return the parsed JSON object, which is a dictionary containing the generated files and their content. If the parsing fails, we raise an exception with an error message.

try:
    if generated_code:
        generated_code = json.loads(generated_code)
except json.JSONDecodeError as e:
    raise click.ClickException(
        f"Code generation failed. Please check your prompt and try again. Error: {str(e)}, generated_code: {generated_code}"
    )
return generated_code
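
For illustration, a successful run of the snake-game example might yield a dictionary along these lines; the actual file names and contents depend entirely on the model's output:

generated_code = {
    "index.html": "<!DOCTYPE html>...",
    "style.css": "body { margin: 0; }...",
    "game.js": "// snake game logic...",
}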

This dictionary is then used by the `main` function to save the generated files to the specified output directory.

4. Define the `main` function: This function is the entry point of our CLI tool. It takes a project prompt, an output directory, and the type of project (code or presentation) as input. It then calls the `generate_project` function to generate the project and saves the generated files to the specified output directory.

def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)

 Inside the main function, we ensure the output directory exists, generate the project, and save the generated files.

# ... (inside main function)

os.makedirs(output_dir, exist_ok=True)

for _loop in range(max_loops):
    generated_code = generate_project(prompt, ",".join(generated_files), type)

    for filename, contents in generated_code.items():
        # ... (rest of the code)
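
The elided body of that inner loop just needs to write each file to disk and remember its name; a minimal sketch (not the repository's exact code) might look like this:

for filename, contents in generated_code.items():
    # Write each generated file into the output directory, creating any
    # intermediate folders in case the model returned a nested path.
    file_path = os.path.join(output_dir, filename)
    if os.path.dirname(file_path):
        os.makedirs(os.path.dirname(file_path), exist_ok=True)
    with open(file_path, "w") as f:
        f.write(contents)
    generated_files.append(filename)  # tracked so the next loop can skip these files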

 

5. Create a Click command: We use the `click` package to create a command-line interface for our tool. We define the command, its arguments, and options using the `click.command`, `click.argument`, and `click.option` decorators.

import click

@click.command()
@click.argument("prompt")
@click.option(
    "--output-dir",
    "-o",
    default="./maiker-generated-project",
    help="The directory where the generated code files will be saved.",
)
@click.option('-t', '--type', required=False, type=click.Choice(['code', 'presentation']), default='code')
def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)
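
With these options in place, generating a presentation into a custom directory could look like this; the prompt is just an example, and `maiker` is the entry point shown earlier:

maiker "a short talk about the snake game" -t presentation -o ./my-slides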

6. Run the CLI tool: Finally, we run the CLI tool by calling the `main` function when the script is executed.

if __name__ == "__main__":
    main()

In this article, we have used `... (rest of the code)` as a placeholder to keep the explanations concise and focused on specific parts of the code. The complete code for the AI-powered coding project generator can be found in the GitHub repository at the following link: https://github.com/lusob/maiker-cli

By visiting the repository, you can access the full source code, which includes all the necessary components and functions to create the CLI tool. You can clone or download the repository to your local machine, install the required dependencies, and start using the tool to generate code projects and slide presentations based on user-provided prompts.   
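
As a rough sketch of getting started, a typical sequence could look like the commands below. The exact install and dependency steps are defined in the repository's README, so treat these as assumptions rather than documented instructions:

git clone https://github.com/lusob/maiker-cli
cd maiker-cli
pip install .   # or the install command the README specifies
echo "OPENAI_API_KEY=sk-your-api-key-here" > .env
maiker "a snake game using just html and js"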

Conclusion

With the current AI-powered coding project generator, you can quickly generate code projects and slide presentations based on user-provided prompts. By leveraging the power of OpenAI's GPT-3.5, you can save time and effort in creating projects and focus on other important aspects of your work. However, it is important to note that the complexity of the generated projects is currently limited due to the model's token limitations. GPT-3.5 has a maximum token limit, which restricts the amount of information it can process and generate in a single API call. As a result, the generated projects might not be as comprehensive or sophisticated as desired for more complex applications.

 The good news is that with the continuous advancements in AI research and the development of new models with larger context windows (e.g., models with more than 100k context tokens), we can expect significant improvements in the capabilities of AI-powered code generators. These advancements will enable the generation of more complex and sophisticated projects, opening up new possibilities for developers and businesses alike.

Author Bio

Luis Sobrecueva is a software engineer with many years of experience working with a wide range of technologies across various operating systems, databases, and frameworks. He began his professional career developing software as a research fellow in the engineering projects area at the University of Oviedo. He went on to develop low-level (C/C++) database engines and visual development environments at a private company before jumping into the world of web development, where he met Python and discovered his passion for machine learning, applying it to various large-scale projects such as creating and deploying a recommender for a job board with several million users. It was also at that time that he began contributing to open source deep learning projects, participating in machine learning competitions, and taking several ML courses, earning certifications including a MicroMasters Program in Statistics and Data Science from MIT and a Udacity Deep Learning Nanodegree. He currently works as a Data Engineer at a ride-hailing company called Cabify, but continues to develop his career as an ML engineer by consulting and contributing to open-source projects such as OpenAI and AutoKeras.

Author of the book: Automated Machine Learning with AutoKeras