
How-To Tutorials - AI Tools


OpenChatKit Could be Your Open-Source Alternative to ChatGPT

Julian Melanson
20 Jun 2023
5 min read
It's no surprise that LLMs like ChatGPT and Bard possess impressive capabilities. However, they are proprietary software, subject to restrictions on access and usage due to licensing constraints. This limitation has generated significant interest within the open-source community, leading to the development of alternative solutions that emphasize freedom, transparency, and community-driven collaboration.

In recent months, Together, an open-source AI organization, has introduced OpenChatKit, an alternative to ChatGPT aimed at providing developers with a versatile chatbot solution. OpenChatKit utilizes the GPT-NeoX language model developed by EleutherAI, which consists of an impressive 20 billion parameters, fine-tuned specifically for chat use with 43 million instructions. OpenChatKit's performance surpasses the base model on the industry-standard HELM benchmark, making it a promising tool for various applications.

OpenChatKit Components and Capabilities

OpenChatKit is accompanied by a comprehensive toolkit available on GitHub under the Apache 2.0 license. The toolkit includes several key components designed to enhance customization and performance:

- Customization recipes: These recipes allow developers to fine-tune the language model for their specific tasks, resulting in higher accuracy and improved performance.
- Extensible retrieval system: OpenChatKit enables developers to augment bot responses with information from external sources such as document repositories, APIs, or live-updating data at inference time. This capability enhances the bot's ability to provide contextually relevant and accurate responses.
- Moderation model: The toolkit incorporates a moderation model derived from GPT-JT-6B, fine-tuned to filter and determine which questions the bot should respond to. This feature helps ensure appropriate and safe interactions.
- Feedback mechanisms: OpenChatKit provides built-in tools for users to give feedback on the chatbot's responses and contribute new datasets. This iterative feedback loop allows for continuous improvement and refinement of the chatbot's performance.
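If you want to experiment with the underlying model directly, it is published on the Hugging Face Hub. Below is a minimal sketch of local inference with the transformers library; the checkpoint name and the <human>/<bot> prompt format are assumptions based on Together's published model card, and the 20-billion-parameter model requires a GPU with substantial memory.

```python
# A minimal sketch of querying OpenChatKit's chat model locally with
# Hugging Face transformers (device_map="auto" requires the accelerate package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-NeoXT-Chat-Base-20B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# OpenChatKit-style conversational prompt format (assumed from the model card)
prompt = "<human>: Summarize the benefits of open-source chatbots.\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```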
OpenChatKit's Strengths and Limitations

Developers highlight OpenChatKit's strengths in specific tasks such as summarization, contextual question answering, information extraction, and text classification. It excels in these areas, offering accurate and relevant responses. However, OpenChatKit's performance is comparatively weaker in tasks that require handling questions without context, coding, and creative writing, areas where ChatGPT has gained popularity. OpenChatKit may occasionally generate erroneous or misleading responses (hallucinations), a challenge also encountered with ChatGPT. Furthermore, OpenChatKit sometimes struggles to transition smoothly between conversation topics and may occasionally repeat answers.

Fine-Tuning and Specialized Use Cases

OpenChatKit's performance can be significantly improved by fine-tuning it for specific use cases. Together's developers are actively working on their own chatbots designed for learning, financial advice, and support requests. By tailoring OpenChatKit to these specific domains, they aim to enhance its capabilities and deliver more accurate and contextually appropriate responses.

Comparing OpenChatKit with ChatGPT

In a brief evaluation, OpenChatKit did not demonstrate the same level of eloquence as ChatGPT. This discrepancy can partly be attributed to OpenChatKit's response limit of 256 tokens, which is less than the approximately 500 tokens in ChatGPT. As a result, OpenChatKit generates shorter responses. However, OpenChatKit outperforms ChatGPT in response speed, generating replies at a faster rate. Switching between different languages does not appear to pose any challenge for OpenChatKit, and it supports formatting options like lists and tables.

The Role of User Feedback in Improvement

Together recognizes the importance of user feedback in enhancing OpenChatKit's performance and plans to leverage it for further improvement. Actively involving users in providing feedback and suggestions ensures that OpenChatKit evolves to meet user expectations and becomes increasingly useful across a wide range of applications.

The Future of Decentralized Training for AI Models

The decentralized training approach employed in OpenChatKit, as previously seen with GPT-JT, represents a potential future for large-scale open-source projects. By distributing the computational load of training across numerous machines instead of relying solely on a central data center, developers can leverage the combined power of multiple systems. This decentralized approach not only accelerates training but also promotes collaboration and accessibility within the open-source community.

Anticipating Future Developments

OpenChatKit is the pioneer among open-source alternatives to ChatGPT, but other similar projects are likely to emerge. Notably, Meta's leaked LLaMA models boast more than three times as many parameters as GPT-NeoX-20B. With these advancements, it is only a matter of time before chatbots based on these models enter the scene.

Getting Started with OpenChatKit

Getting started with the program is simple:

1. Head to https://openchatkit.net/#demo in your browser.
2. Read through the guidelines and agree to the Terms & Conditions.
3. Type your prompt into the chatbox.

Now you can test-drive the program and see how you like it compared to other LLMs.

Summary

In summary, OpenChatKit presents an exciting alternative to ChatGPT. Leveraging the powerful GPT-NeoX language model and extensive fine-tuning for chat-based interactions, OpenChatKit demonstrates promising capabilities in tasks such as summarization, contextual question answering, information extraction, and text classification. While some limitations exist, such as occasional hallucinations and difficulty transitioning between topics, fine-tuning OpenChatKit for specific use cases significantly improves its performance. With the provided toolkit, developers can customize the chatbot to suit their needs, and user feedback plays a crucial role in its continuous refinement. As decentralized training becomes an increasingly prominent approach in open-source projects, OpenChatKit sets the stage for further innovation in the field, while also foreshadowing the emergence of more advanced chatbot models in the future.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities. No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


Build a Project that Automates your Code Review

Luis Sobrecueva
19 Jun 2023
15 min read
Developers understand the importance of a solid code review process but often find it time-consuming. Large language models (LLMs) offer a solution: they can provide insights into code flows and help enforce best practices. This project aims to automate code reviews using LLMs. An intelligent assistant will swiftly analyze code differences and generate feedback in seconds. Imagine an AI-powered reviewer guiding you to write cleaner, more efficient code. The focus is to streamline the code review workflow, empowering developers to produce high-quality code while saving time. The system offers comprehensive insights through automated analysis, highlighting areas that need attention and suggesting improvements. Join the journey to explore LLMs' impact on code review and enhance the development experience.

Project Overview

In this article, we develop a Python program that harnesses the power of OpenAI's ChatGPT for code review. The program reads diff changes from standard input and generates comprehensive code review comments. The generated comments are compiled into an HTML file, which includes AI-generated feedback for each diff section, presented alongside the diff sections themselves as code blocks with syntax highlighting. To simplify the review process, the program automatically opens the HTML file in the user's default web browser.

Image 1: Project Page

Build your Project

Let's walk through the steps to build this code review program from scratch. By following these steps, you'll be able to create your own implementation tailored to your specific needs. Let's get started:

1. Set up your development environment. Ensure you have Python installed on your machine. You can download and install the latest version of Python from the official Python website.

2. Install the required libraries. To interact with OpenAI's ChatGPT and handle diff changes, you'll need to install the necessary Python libraries. Use pip, the package installer for Python, by running the following command in your terminal:

```
pip install openai numpy tqdm
```

3. To implement the next steps, create a new Python file named `chatgpt_code_reviewer.py`.

4. Import the necessary modules:

```python
import argparse
import os
import random
import string
import sys
import webbrowser

import openai
from tqdm import tqdm
```

5. Set up the OpenAI API key (you'll need to get a key at https://openai.com if you don't have one yet):

```python
openai.api_key = os.environ["OPENAI_KEY"]
```
6. Define a function to format code snippets within the code review comments, ensuring they are easily distinguishable and readable within the generated HTML report:

```python
import re


def add_code_tags(text):
    # Find all the occurrences of text surrounded by backticks
    matches = re.finditer(r"`(.+?)`", text)

    # Collect the updated text chunks
    updated_chunks = []
    last_end = 0
    for match in matches:
        # Add the text before the current match
        updated_chunks.append(text[last_end : match.start()])

        # Add the matched text wrapped in <b> tags
        updated_chunks.append("<b>`{}`</b>".format(match.group(1)))

        # Move last_end to the end of the current match
        last_end = match.end()

    # Add the remaining text after the last match
    updated_chunks.append(text[last_end:])

    # Join the updated chunks and return the resulting HTML string
    return "".join(updated_chunks)
```

7. Define a function to generate a comment using ChatGPT:

```python
def generate_comment(diff, chatbot_context):
    # Ask ChatGPT to review the file changes in this diff
    chatbot_context.append(
        {
            "role": "user",
            "content": f"Make a code review of the changes made in this diff: {diff}",
        }
    )

    # Retry the API call up to three times
    retries = 3
    for attempt in range(retries):
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=chatbot_context,
                n=1,
                stop=None,
                temperature=0.3,
            )
            break  # Success, stop retrying
        except Exception as e:
            if attempt == retries - 1:
                print(f"attempt: {attempt}, retries: {retries}")
                raise e  # Raise the error if the maximum retries are reached
            print("OpenAI error occurred. Retrying...")

    comment = response.choices[0].message.content

    # Reset the chatbot context to the latest request/response pair
    chatbot_context = [
        {
            "role": "user",
            "content": f"Make a code review of the changes made in this diff: {diff}",
        },
        {
            "role": "assistant",
            "content": comment,
        },
    ]
    return comment, chatbot_context
```

The `generate_comment` function uses OpenAI's ChatGPT to generate a code review comment based on the provided `diff` and the existing `chatbot_context`. It appends the user's request to review the changes to the `chatbot_context`, then retries the API call up to three times to handle transient errors. It calls the `openai.ChatCompletion.create()` method with the appropriate model, messages, and other parameters to generate a response. The generated comment is extracted from the response, and the chatbot context is reset to the latest user request and assistant response. Finally, the function returns the comment and the updated chatbot context. This function is a crucial part of the code review program, as it uses ChatGPT to generate insightful comments on the provided code diffs.
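A quick way to confirm that `add_code_tags` behaves as intended is to run it on a sample sentence; as the code above shows, it wraps any backtick-quoted span in `<b>` tags for the HTML report:

```python
# Sanity check for add_code_tags
sample = "Consider renaming `get_data` to `fetch_records` for clarity."
print(add_code_tags(sample))
# Output: Consider renaming <b>`get_data`</b> to <b>`fetch_records`</b> for clarity.
```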
8. Define a function to create the HTML output:

```python
def create_html_output(title, description, changes, prompt):
    random_string = "".join(random.choices(string.ascii_letters, k=5))
    output_file_name = random_string + "-output.html"

    title_text = f"\nTitle: {title}" if title else ""
    description_text = f"\nDescription: {description}" if description else ""
    chatbot_context = [
        {
            "role": "user",
            "content": f"{prompt}{title_text}{description_text}",
        }
    ]

    # Generate the HTML skeleton and styling
    html_output = "<html>\n<head>\n<style>\n"
    html_output += (
        "body {\n    font-family: Roboto, Ubuntu, Cantarell, Helvetica Neue, sans-serif;\n"
        "    margin: 0;\n    padding: 0;\n}\n"
    )
    html_output += (
        "pre {\n    white-space: pre-wrap;\n    background-color: #f6f8fa;\n"
        "    border-radius: 3px;\n    font-size: 85%;\n    line-height: 1.45;\n"
        "    overflow: auto;\n    padding: 16px;\n}\n"
    )
    html_output += "</style>\n"
    html_output += (
        '<link rel="stylesheet" '
        'href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.7.0/styles/default.min.css">\n'
        '<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.7.0/highlight.min.js">'
        "</script>\n"
    )
    html_output += "<script>hljs.highlightAll();</script>\n"
    html_output += "</head>\n<body>\n"
    html_output += "<div style='background-color: #333; color: #fff; padding: 20px;'>"
    html_output += "<h1 style='margin: 0;'>AI code review</h1>"
    html_output += f"<h3>Diff to review: {title}</h3>" if title else ""
    html_output += "</div>"

    # Generate comments for each diff with a progress bar
    with tqdm(total=len(changes), desc="Making code review", unit="diff") as pbar:
        for change in changes:
            diff = change["diff"]
            comment, chatbot_context = generate_comment(diff, chatbot_context)
            pbar.update(1)

            # Write the diff and comment to the HTML
            html_output += f"<h3>Diff</h3>\n<pre><code>{diff}</code></pre>\n"
            html_output += f"<h3>Comment</h3>\n<pre>{add_code_tags(comment)}</pre>\n"

    html_output += "</body>\n</html>"

    # Write the HTML output to a file
    with open(output_file_name, "w") as f:
        f.write(html_output)

    return output_file_name
```

The `create_html_output` function takes `title`, `description`, `changes`, and `prompt` as inputs. It creates an HTML output file that contains the code review comments for each diff, along with the corresponding diff sections as code blocks with syntax highlighting. In more detail: the function first builds a random string for the output file name, creates the title and description text based on the provided inputs, and sets up the initial `chatbot_context`. Next, it generates the HTML structure and styling, including the CSS and JavaScript libraries needed for syntax highlighting, plus a header section for the AI code review. Using a progress bar, the function then iterates over each change in the `changes` list; for each change, it retrieves the `diff`, generates a comment with `generate_comment`, and updates the progress bar. The `diff` and the corresponding comment are written to the HTML output: the `diff` is displayed within a `<pre><code>` block for better formatting, and the comment is wrapped in `<pre>` tags, with `add_code_tags` highlighting any code snippets.
After processing all the changes, the function completes the HTML structure by closing the `<body>` and `<html>` tags. Finally, the HTML output is written to a file with a randomly generated name, and that file name is returned. This function produces the final HTML report that presents the code review comments alongside the corresponding diff sections.

9. Define a function to get diff changes from the pipeline:

```python
def get_diff_changes_from_pipeline():
    # Read the piped input
    piped_input = sys.stdin.read()

    # Split the input into a list of diff sections
    diffs = piped_input.split("diff --git")

    # Build a list of dictionaries, each containing a single diff section
    diff_list = [{"diff": diff} for diff in diffs if diff]
    return diff_list
```

The `get_diff_changes_from_pipeline` function retrieves the input from the pipeline, which is typically the output of a command like `git diff`. It reads the piped input with `sys.stdin.read()`, then splits it on the "diff --git" string, which is commonly used to separate individual diff sections. By dividing the diff into separate sections, this function enables code reviews of very large projects: it sidesteps the context-length limitation of LLMs by processing each diff section independently. This approach allows for efficient and scalable code reviews, ensuring the review process can handle projects of any size. The function returns the list of diff sections, which is then processed by the rest of the code review pipeline.

10. Define the main function:

```python
def main():
    title, description, prompt = None, None, None
    changes = get_diff_changes_from_pipeline()

    # Parse command line arguments
    parser = argparse.ArgumentParser(description="AI code review script")
    parser.add_argument("--title", type=str, help="Title of the diff")
    parser.add_argument("--description", type=str, help="Description of the diff")
    parser.add_argument("--prompt", type=str, help="Custom prompt for the AI")
    args = parser.parse_args()

    title = args.title if args.title else title
    description = args.description if args.description else description
    prompt = args.prompt if args.prompt else PROMPT_TEMPLATE

    output_file = create_html_output(title, description, changes, prompt)
    try:
        webbrowser.open(output_file)
    except Exception:
        print(
            "Error running the web browser, you can try to open "
            f"the output file: {output_file} manually"
        )


if __name__ == "__main__":
    main()
```

The `main` function serves as the entry point of the code review script. It begins by initializing the `title`, `description`, and `prompt` variables to `None`, then calls `get_diff_changes_from_pipeline` to retrieve the diff changes to review. The script parses the command line arguments using the `argparse` module, allowing optional arguments such as `--title`, `--description`, and `--prompt` to customize the review; values provided on the command line override the defaults. After parsing the arguments, `create_html_output` is called with `title`, `description`, `changes`, and `prompt` to generate the HTML report, and the returned file name is stored in `output_file`. Finally, the script attempts to open the generated HTML file in the default web browser using the `webbrowser` module; if an error occurs, a message is printed suggesting that you open the output file manually.
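Note that `main` falls back to a `PROMPT_TEMPLATE` constant that is not shown in the listings above. The actual template ships with the project's repository; the following is only an illustrative stand-in you could define near the top of the file:

```python
# Hypothetical stand-in for the PROMPT_TEMPLATE constant referenced in main();
# see the project's repository for the real wording.
PROMPT_TEMPLATE = (
    "You are an experienced software engineer performing a code review. "
    "For each diff you receive, point out bugs, style issues, and possible "
    "improvements, quoting identifiers in backticks."
)
```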
11. Save the file and run it using the following command:

```
git diff master..branch | python3 chatgpt_code_reviewer.py
```

A progress bar will be displayed, and after a while the browser will open an HTML file with the output of the command. Depending on the number of files to review, it may take a few seconds or a few minutes; in this video, you can see the process.

Congratulations! You have now created your own ChatGPT code reviewer project from scratch. Remember to adapt and customize the prompt[1] based on your specific requirements and preferences. You can find the complete code on the ChatGPTCodeReviewer GitHub repository. Happy coding!

Article Reference

[1] The `prompt` is an essential component of the code review process using ChatGPT. It is typically a text that sets the context and provides instructions to the AI model about what is expected from its response: what it needs to review, specific questions, or guidelines to focus the AI's attention on particular aspects of the code. In the code, a default `PROMPT_TEMPLATE` is used if no custom prompt is provided. You can modify the `PROMPT_TEMPLATE` variable or pass your own prompt using the `--prompt` argument to tailor the AI's behavior to your specific requirements. By carefully crafting the prompt, you can steer the AI's responses to align with your code review expectations, ensuring the generated comments are relevant, constructive, and consistent with your desired code quality standards.

Author Bio

Luis Sobrecueva is a software engineer with many years of experience working with a wide range of technologies across various operating systems, databases, and frameworks. He began his professional career developing software as a research fellow in the engineering projects area at the University of Oviedo. He went on to develop low-level (C/C++) database engines and visual development environments at a private company before jumping into the world of web development, where he met Python and discovered his passion for machine learning, applying it to various large-scale projects, such as creating and deploying a recommender for a job board with several million users. Around the same time, he began contributing to open-source deep learning projects, participating in machine learning competitions, and taking several ML courses, earning certifications including a MicroMasters Program in Statistics and Data Science at MIT and a Udacity Deep Learning Nanodegree. He currently works as a Data Engineer at Cabify, a ride-hailing company, while continuing to develop his career as an ML engineer by consulting and contributing to open-source projects such as OpenAI and AutoKeras.

Author of the book: Automated Machine Learning with AutoKeras


Making the best out of Hugging Face Hub using LangChain

Ms. Valentina Alto
17 Jun 2023
6 min read
Since the launch of ChatGPT in November 2022, everyone has been talking about GPT models and OpenAI. There is no doubt that the Generative Pre-trained Transformer (GPT) architecture developed by OpenAI has demonstrated incredible results, thanks in part to the investment in training (almost 500 billion tokens) and the complexity of the model (175 billion parameters for GPT-3). Nevertheless, an impressive number of open-source Large Language Models (LLMs) have been released in recent months. Below are some examples:

- Dolly: a 12-billion-parameter LLM developed by Databricks and trained on their ML platform. Source code: https://github.com/databrickslabs/dolly
- StableLM: a series of LLMs developed by StabilityAI, the company behind the popular image generation model Stable Diffusion. The series encompasses a variety of LLMs, some of which are fine-tuned for specific use cases. Source code: https://github.com/Stability-AI/StableLM
- Falcon LLM: a 40-billion-parameter LLM developed by the Technology Innovation Institute and trained on a particularly high-quality dataset called RefinedWeb. As of June 2023, it ranks first globally in the latest Hugging Face independent verification of open-source AI models. Source code: https://huggingface.co/tiiuae
- GPT-NeoX and GPT-J: open-source reproductions of OpenAI's GPT series developed by EleutherAI, with 20 and 6 billion parameters respectively. Source code: https://huggingface.co/EleutherAI/gpt-neox-20b and https://huggingface.co/EleutherAI/gpt-j-6b
- OpenLLaMA: like the previous models, an open-source reproduction of Meta AI's LLaMA, available in 3 and 7 billion-parameter versions. Source code: https://github.com/openlm-research/open_llama

If you are interested in digging deeper into these models and their performance, you can consult the Hugging Face leaderboard.

Image 1: Hugging Face Leaderboard

Now, LLMs are great, yet to unlock their real power we need to position them within an applicative logic. In other words, we want our LLMs to infuse intelligence into our applications. For this purpose, we will be using LangChain, a powerful, lightweight SDK that makes it easier to integrate and orchestrate LLMs within applications. LangChain is one of the most popular LLM orchestrators; if you want to explore further packages, I encourage you to read about Semantic Kernel and Jarvis. One of the nice things about LangChain is its integration with external tools: these might be OpenAI (and other LLM vendors), data sources, search APIs, and so on. In this article, we are going to explore how LangChain makes it easier to leverage open-source LLMs through its integration with the Hugging Face Hub.

Welcome to the realm of open-source LLMs

The Hugging Face Hub serves as a comprehensive platform comprising more than 120k models, 20k datasets, and 50k demo apps (Spaces), all openly accessible and shared as open-source projects. It provides an online environment where developers can effortlessly collaborate and collectively develop machine learning solutions. Thanks to LangChain, it is much easier to start interacting with open-source LLMs. Plus, you can surround those models with all the building blocks LangChain provides for prompt design, memory retention, chain management, and so on. Let's see an implementation with Python.
To reproduce the code, make sure you have:

- Python 3.7.1 or higher. You can check your Python version by running `python --version` in your terminal.
- LangChain installed: `pip install langchain`
- The huggingface_hub Python package installed: `pip install huggingface_hub`
- A Hugging Face Hub API key: register on the portal and then generate your secret key.

For this example, I'm going to use the lightest version of Dolly, developed by Databricks and available in three sizes: 3, 7, and 12 billion parameters.

```python
import os

from langchain import HuggingFaceHub

HUGGINGFACEHUB_API_TOKEN = "your-api-key"
os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN

repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})
```

As you can see from the above code, the only information we need is our Hugging Face Hub API key and the model's repo ID; LangChain then takes care of initializing the model thanks to its direct integration with the Hugging Face Hub. Now that we have initialized our model, it is time to define the structure of the prompt:

```python
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
```

Finally, we can feed our model a first question:

```python
question = "In the first movie of Harry Potter, what is the name of the three-headed dog?"
print(llm_chain.run(question))
```

Output:

```
The name of the three-headed dog in Harry Potter and the Philosopher Stone is Fuffy.
```

Even though I tested the light version of Dolly with "only" 3 billion parameters, it produced fairly accurate results. Of course, for more complex tasks or real-world projects, heavier models might be considered, like the top performers on the Hugging Face leaderboard mentioned at the beginning of this article.
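Swapping in a different Hub-hosted model only requires changing the repo ID; the sketch below reuses the prompt template defined above. The specific model choice (`google/flan-t5-large`) and its parameters are illustrative assumptions; check the Hub for current availability.

```python
from langchain import HuggingFaceHub, LLMChain

# Reuse the `prompt` template defined above; only the repo_id changes.
repo_id = "google/flan-t5-large"  # illustrative model choice
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0.1, "max_length": 64})
llm_chain = LLMChain(prompt=prompt, llm=llm)

print(llm_chain.run("Which planet in our solar system is closest to the sun?"))
```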
Conclusion

The realm of open-source LLMs is growing exponentially, creating a vibrant environment of experimentation and tuning from which anyone can benefit. Some interesting trends are also emerging, like reducing the number of model parameters in favor of higher-quality training data. Indeed, we saw that the current top performer among open-source models is Falcon LLM, with "only" 40 billion parameters, which draws its strength from a high-quality training dataset. Finally, with the development of orchestration frameworks like LangChain and similar tools, it is getting easier and easier to leverage open-source LLMs and integrate them into our applications.

References

- https://huggingface.co/docs/hub/index
- Open LLM Leaderboard — a Hugging Face Space by HuggingFaceH4
- Hugging Face Hub — 🦜🔗 LangChain 0.0.189
- Overview (huggingface.co)
- stabilityai (Stability AI) (huggingface.co)
- Stability-AI/StableLM: StableLM: Stability AI Language Models (github.com)

Author Bio

Valentina Alto graduated in data science in 2021. Since 2020, she has been working at Microsoft as an Azure solution specialist, and since 2022, she has been focusing on data and AI workloads within the manufacturing and pharmaceutical industries. She has been working closely with system integrators on customer projects to deploy cloud architecture with a focus on modern data platforms, data mesh frameworks, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. Since commencing her academic journey, she has been writing tech articles on statistics, machine learning, deep learning, and AI for various publications, and she has authored a book on the fundamentals of machine learning with Python.

Author of the book: Modern Generative AI with ChatGPT and OpenAI Models

Links: Medium | LinkedIn


Microsoft Fabric: Revolutionizing Data Analytics and AI Integration

Julian Melanson
15 Jun 2023
7 min read
In today's data-driven world, organizations are constantly seeking ways to unlock the full potential of their data and seamlessly integrate artificial intelligence into their operations. Addressing this need, Microsoft introduced Microsoft Fabric at the Build 2023 developer conference. This cutting-edge data analytics platform brings together essential data and analytics tools into a unified software-as-a-service (SaaS) product. In this article, we will explore the features of Microsoft Fabric, uncovering how it simplifies AI integration and revolutionizes data analytics, empowering organizations to harness the power of their data.

A Holistic Approach to Data Analytics

Microsoft Fabric offers a holistic solution by integrating platforms such as Data Factory, Synapse, and Power BI into a cohesive and comprehensive platform. By replacing disparate systems with a simplified, cost-effective platform, Microsoft Fabric provides organizations with a unified environment for AI integration. It consolidates essential components like data integration, data engineering, data warehousing, data science, real-time analytics, applied observability, and business intelligence into a single ecosystem, equipping data professionals with a comprehensive suite of tools.

Core Workloads in Microsoft Fabric

Microsoft Fabric supports seven core workloads that cater to various aspects of data analytics and AI integration, ensuring a seamless end-to-end experience:

- Data Factory: facilitates seamless data integration by offering over 150 connectors to popular cloud and on-premises data sources. With its user-friendly drag-and-drop functionality, Data Factory simplifies data integration tasks.
- Synapse Data Engineering: promotes collaboration among data professionals, streamlining the data engineering workflow and ensuring efficient data processing.
- Synapse Data Science: enables data scientists to develop, train, deploy, and manage AI models, offering an end-to-end workflow and infrastructure support for streamlined AI development.
- Synapse Data Warehousing: provides a converged lakehouse and data warehousing experience, optimizing SQL performance on open data formats for efficient and scalable data warehousing.
- Synapse Real-Time Analytics: tailored for high volumes of streaming data, it enables developers to analyze semi-structured data in real time with minimal latency, catering to IoT devices, telemetry, and log data.
- Power BI Integration: Microsoft Fabric seamlessly integrates the powerful visualization capabilities and AI-driven analytics of Power BI, empowering business analysts and users to derive valuable insights through visually appealing representations.
- Data Activator: offers a no-code experience for real-time data detection and monitoring. By triggering notifications and actions based on predefined data patterns, Data Activator enables proactive data management.

Simplifying AI Development

Microsoft Fabric addresses the challenges organizations face in AI development by providing a unified platform that encompasses all the capabilities needed to extract insights from data and make them accessible to AI models or end users. To enhance usability, Fabric integrates its own copilot tool, allowing users to interact with the platform using natural language commands and a chat-like interface.
This feature streamlines code generation, query creation, AI plugin development, custom Q&A, visualization creation, and more.

The Power of OneLake and Integration with Microsoft 365

Microsoft Fabric is built upon the OneLake open data lake platform, which serves as a central repository for data. OneLake eliminates the need for data extraction, replication, or movement, acting as a single source of truth. This streamlined approach enhances data governance while offering a scalable pricing model that aligns with usage. Furthermore, Fabric seamlessly integrates with Microsoft 365 applications, enabling users to discover and analyze data from OneLake directly within tools such as Microsoft Excel. This integration empowers users to generate Power BI reports with a single click, making data analysis more intuitive and efficient. Additionally, Fabric enables Microsoft Teams users to bring data directly into their chats, channels, meetings, and presentations, expanding the reach of data-driven insights. Sales professionals using Dynamics 365 can likewise leverage Fabric and OneLake to unlock valuable insights into customer relationships and business processes.

Getting Started with Microsoft Fabric

Follow these steps to start your Fabric (Preview) trial:

1. Open the Fabric homepage and select the Account Manager.
2. In the Account Manager, select Start Trial.
3. If prompted, agree to the terms and then select Start Trial.
4. Once your trial capacity is ready, you will receive a confirmation message. Select Got it to begin working in Fabric.
5. Open your Account Manager again. Notice that you now have a heading for Trial status. Your Account Manager keeps track of the number of days remaining in your trial, and the countdown also appears in the Fabric menu bar when you work in a product experience.

You now have a Fabric (Preview) trial that includes a Power BI individual trial (if you didn't already have a paid Power BI license) and a Fabric (Preview) trial capacity.

Below is a list of end-to-end tutorials available in Microsoft Fabric, with links on Microsoft's website.

Multi-experience tutorials

The following tutorials span multiple Fabric experiences:

- Lakehouse: ingest, transform, and load the data of a fictional retail company, Wide World Importers, into the lakehouse, and analyze sales data across various dimensions.
- Data Science: explore, clean, and transform a taxicab trip dataset, and build a machine learning model to predict trip duration at scale on a large dataset.
- Real-Time Analytics: use the streaming and query capabilities of Real-Time Analytics to analyze the New York Yellow Taxi trip dataset, uncovering essential insights into trip statistics, taxi demand across the boroughs of New York, and more.
- Data Warehouse: build an end-to-end data warehouse for the fictional Wide World Importers company. You ingest data into a data warehouse, transform it using T-SQL and pipelines, run queries, and build reports.

Experience-specific tutorials

The following tutorials walk you through scenarios within specific Fabric experiences:

- Power BI: build a dataflow and pipeline to bring data into a lakehouse, create a dimensional model, and generate a compelling report.
- Data Factory: ingest data with data pipelines and transform data with dataflows, then use automation and notifications to create a complete data integration scenario.
- Data Science end-to-end AI samples: learn about the different Data Science experience capabilities, with examples of how ML models can address common business problems.
- Data Science - Price prediction with R: build a machine learning model to analyze and visualize avocado prices in the US and predict future prices.

Summary

Microsoft Fabric revolutionizes data analytics and AI integration by providing organizations with a unified platform that simplifies the complex tasks involved in AI development. By consolidating data-related functionality and integrating with Microsoft 365 applications, Fabric empowers users to harness the power of their data and make informed decisions. With its support for diverse workloads and its integration with OneLake, Microsoft Fabric paves the way for organizations to unlock the full potential of data-driven insights and AI innovation. As the AI landscape continues to evolve, Microsoft Fabric stands as a game-changing solution for organizations seeking to thrive in the era of data-driven decision-making.

Author Bio

Julian Melanson is a full-time teacher and best-selling instructor dedicated to helping students realize their full potential. Having taught over 150,000 students from 130 countries, he has honed his teaching skills to become an expert in the field. He focuses on unlocking the potential of creativity and productivity with AI tools and filmmaking techniques learned over the years, creating content for clients across many industries.


BabyAGI: Empowering Task Automation through Advanced AI Technologies

Rohan Chikorde
15 Jun 2023
8 min read
In today's rapidly evolving technological landscape, the demand for efficient task automation has grown exponentially. Businesses and individuals alike are seeking innovative solutions to streamline their workflows, optimize resource allocation, and enhance productivity. In response to this demand, a groundbreaking autonomous AI agent called BabyAGI has emerged as a powerful tool for task management. BabyAGI stands out from traditional AI agents by leveraging cutting-edge technologies from OpenAI, Pinecone, LangChain, and Chroma. These technologies work in synergy to provide a comprehensive and versatile solution for automating tasks and achieving specific objectives.

This blog post delves into the world of BabyAGI, exploring its unique features, functionalities, and potential applications. It also provides a step-by-step guide to setting up and running BabyAGI effectively, allowing users to harness its full potential for task automation. By the end of this post, you will have a solid understanding of BabyAGI and the tools required to use it successfully. Whether you're a project manager looking to optimize workflows, a content creator seeking innovative ideas, or an individual aiming to streamline personal tasks, BabyAGI can revolutionize your approach to task management.

Understanding BabyAGI

BabyAGI is an autonomous AI agent that brings together several advanced technologies to revolutionize task automation. By leveraging the capabilities of OpenAI's GPT-3.5 or GPT-4 models, BabyAGI possesses a remarkable ability to generate creative ideas and insights. One distinguishing feature of BabyAGI is its focus on brainstorming and ideation. Unlike traditional AI agents that browse external resources, BabyAGI draws upon its existing knowledge base, built from the vast amount of information available up to 2021. By avoiding interruptions caused by web searches, BabyAGI can maintain its train of thought and provide uninterrupted suggestions and solutions.

The integration of Pinecone's vector database service is another key component of BabyAGI's capabilities. Pinecone enables BabyAGI to store and manage information by converting it into vectors, which capture the essential details of the data and make it easier for BabyAGI to understand and work with. LangChain enhances BabyAGI's integration potential: BabyAGI can be seamlessly integrated into existing systems, letting users combine its functionality with other tools and resources. Chroma, the final technology in BabyAGI's arsenal, focuses on efficient task management, helping users prioritize and optimize their workflows. By automating repetitive and time-consuming tasks, BabyAGI frees up valuable resources and allows users to focus on more strategic, value-added activities.

The combination of these technologies makes BabyAGI a versatile and powerful tool for various industries and individuals. Project managers can leverage BabyAGI's brainstorming capabilities to generate new ideas and innovative approaches. Content creators can tap into BabyAGI's creativity to enhance their creative process and produce engaging content. Individuals can rely on BabyAGI to automate mundane tasks, freeing up time for more fulfilling activities. BabyAGI represents a significant leap forward in task automation.
By combining the capabilities of OpenAI's models, Pinecone's vector database, LangChain's integration potential, and Chroma's efficiency, BabyAGI offers a unique and powerful solution for brainstorming, ideation, and task management. Its ability to generate creative ideas, prioritize tasks, and optimize workflows sets it apart as a revolutionary tool in the realm of AI-driven automation.

How It Works

The script operates in an endless loop, executing the following sequence:

1. Retrieves the first task from the list of tasks.
2. Dispatches the task to the execution agent, which leverages OpenAI's API to accomplish the task within the given context.
3. Enriches the result and archives it in Chroma/Weaviate. Chroma is an open-source embedding database that makes it easy to build LLM apps by making knowledge, facts, and skills pluggable for LLMs. Weaviate is an open-source vector database that lets you store data objects and vector embeddings from your favorite ML models and scale seamlessly into billions of data objects.
4. Generates fresh tasks and reprioritizes the task list based on the objective and the outcome of the preceding task.

Image 1: Working. Pic credits: https://github.com/yoheinakajima/babyagi

The function `execution_agent()` uses the OpenAI API and takes two parameters: the objective and the task. It sends a prompt to the OpenAI API, which returns the task result. The prompt consists of a description of the AI system's task, the objective, and the task itself. The returned result is a string.

In the function `task_creation_agent()`, OpenAI's API is used to generate new tasks based on the objective and the result of the previous task. The function takes four parameters: the objective, the previous task's result, the task description, and the current task list. It sends a prompt to the OpenAI API, which responds with a list of new tasks as strings. The function returns the new tasks as a list of dictionaries, where each dictionary contains the task name.

The `prioritization_agent()` function uses the OpenAI API to rearrange the task list by priority. It accepts the ID of the current task as a parameter, sends a prompt to the OpenAI API, and receives the reprioritized task list as a numbered list.

Lastly, the script uses Chroma/Weaviate to store and retrieve task results for context. It creates a Chroma/Weaviate collection using the table name specified in the TABLE_NAME variable and stores task results in that collection, including the task name and any additional metadata.
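To make the loop concrete, here is a minimal sketch of what `execution_agent()` might look like; the real implementation lives in the babyagi repository, and the exact prompt wording and model choice below are illustrative assumptions.

```python
# A minimal sketch of the execution agent described above; the prompt wording
# and model are assumptions, not the repository's exact implementation.
import openai

def execution_agent(objective: str, task: str) -> str:
    prompt = (
        "You are an AI who performs one task based on the following objective: "
        f"{objective}.\nYour task: {task}\nResponse:"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.5,
    )
    return response.choices[0].message.content.strip()
```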
Setting Up BabyAGI

To unleash the power of BabyAGI, the first step is to ensure that Python and Git are installed on your system. Python serves as the programming language for BabyAGI, while Git provides easy access to the necessary files and updates. If you don't have them installed, you can obtain them from their respective sources. For Python, head over to python.org and download the latest version compatible with your operating system, then follow the installer's instructions; it is recommended to add Python to your system's PATH during installation for easier accessibility. For Windows users, Git can be obtained from the official Git website: download the appropriate installer, execute it, follow the guided installation, and select the option to add Git to your system's PATH for convenient usage.

Once Python and Git are installed, the next crucial step is to acquire an OpenAI API key. This key grants access to OpenAI's powerful GPT-3.5 or GPT-4 models, which are essential for BabyAGI's operation. Obtaining an API key requires a paid OpenAI account: visit the OpenAI website, log in, navigate to the API section, and follow the instructions to generate your key. Keep this key secure, as it provides access to powerful AI models and should not be shared with unauthorized individuals.

Image 2: OpenAI API Key

With Python, Git, and the OpenAI API key in place, you are ready to set up BabyAGI:

1. Clone the repository and move into it:

```
git clone https://github.com/yoheinakajima/babyagi.git
cd babyagi
```

2. Install the required packages:

```
pip install -r requirements.txt
```

3. Copy the .env.example file to .env; this is where you will set the following variables:

```
cp .env.example .env
```

4. Set your OpenAI API key in the OPENAI_API_KEY and OPENAI_API_MODEL variables. To use Weaviate, you will also need to set up the additional variables detailed here.
5. Set the name of the table where the task results will be stored in the TABLE_NAME variable.
6. (Optional) Set the name of the BabyAGI instance in the BABY_NAME variable.
7. (Optional) Set the objective of the task management system in the OBJECTIVE variable.
8. (Optional) Set the first task of the system in the INITIAL_TASK variable.
9. Run the script:

```
python babyagi.py
```

Note: This script is intended to run continuously as an integral component of a task management system. Running it continuously can consume a large amount of API quota, so exercise responsible usage.

Conclusion

BabyAGI is a revolutionary AI agent that brings together the strengths of OpenAI, LangChain, and Chroma to deliver a powerful tool for task automation. With its ability to generate creative ideas, optimize workflows, and enhance decision-making, BabyAGI is transforming the way professionals approach task management. By automating repetitive tasks and providing valuable insights, it empowers project managers, content creators, and decision-makers to achieve better outcomes and maximize productivity. As technology continues to evolve, BabyAGI stands at the forefront of innovation, paving the way for more efficient and intelligent task management across a wide range of industries.

Author Bio

Rohan is an accomplished AI architect with a postgraduate degree in Machine Learning and Artificial Intelligence. With almost a decade of experience, he has successfully developed deep learning and machine learning models for various business applications. Rohan's expertise spans multiple domains, and he excels in programming languages such as R and Python, as well as analytics techniques like regression analysis and data mining. In addition to his technical prowess, he is an effective communicator, mentor, and team leader. Rohan's passion lies in machine learning, deep learning, and computer vision.

LinkedIn


Build an Irreverent News Feed in Python and Bard API

Darren Broemmer
14 Jun 2023
8 min read
Now that we can invoke Bard from our code (see the earlier article), let's build a news feed application that incorporates generative AI capability. Not only will it grab the top headlines and stories, but it will also use the Bard API to transform them in a comical way. Note that the Bard API is an unofficial, cutting-edge technology in a rapidly changing area and is not production-ready at the time of writing.

The first step is to get the list of news stories, and we will use the Bard API to do this. As seen above, requests to Google Bard yield a natural language response, just like the web interface does. Thus, depending on your use case, you may need to specify a more formal output format so that your program can consume the data.

Specify the output format in your prompt

Our news application could use the prompt, "What are the top five news stories today?" This will give us a text response with a numbered or bulleted list of stories. Unless we have a text-only user interface, our application will need to parse the response into individual stories for display. Thus, to get a computer-readable set of news stories, use a prompt of the form below:

```python
prompt = '''What are the top five news stories today?
Structure the output as a list of xml objects each called story.
Each story element should have child elements called title, content, and url.'''
```

Here is an example of what this prompt returned:

Sure, here are the top five news stories today, structured as a list of XML objects with the attributes title, content, and URL:

```xml
<stories>
  <story>
    <title>US House Votes to Suspend Debt Ceiling and Avoid Default</title>
    <content>The US House of Representatives voted on Thursday to suspend the debt ceiling and avoid a default on the country's debt. The vote was 221-212, with all Democrats and 11 Republicans voting in favor. The Senate is expected to vote on the measure later this week.</content>
    <url>https://www.cnn.com/2023/06/01/politics/us-debt-ceiling-vote/index.html</url>
  </story>
  …
</stories>
```

Note the pre-canned introduction at the beginning of the response. Multiple attempts to instruct Bard to omit it and return only the XML data were unsuccessful. However, Bard uses a markdown format, so extracting the substring from the response is fairly easy. Here is the code we used that leverages the prompt shown earlier:

```python
import xml.etree.ElementTree as ET


def get_bard_headlines():
    answer = call_bard(prompt)

    # We need to filter out the precanned response and get just the XML.
    # Bard's response begins with something like:
    #
    # Sure, here are the top five news stories today, structured as a
    # list of XML objects with the attributes title, content, and url:
    # ```xml
    # <stories>
    # ...
    # </stories>
    # ```
    delimiter = "```xml"
    idx1 = answer.index(delimiter)
    idx2 = answer.index("```", idx1 + len(delimiter))

    # Extract just the XML text from the response
    xml_data = answer[idx1 + len(delimiter) + 1 : idx2]

    article_list = []

    # Parse the XML data
    root = ET.fromstring(xml_data)

    # Iterate over each 'story' element
    for item in root.iter("story"):
        title = item.find("title").text
        content = item.find("content").text
        url = item.find("url").text
        article_list.append({"title": title, "content": content, "url": url})

    return article_list
```
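The `get_bard_headlines` function relies on the `call_bard` helper built in the earlier article. If you need a stand-in, a minimal version using the unofficial bardapi package might look like the sketch below; the token handling shown is an assumption about your setup, and as an unofficial API it may change without notice.

```python
# A minimal call_bard() helper based on the unofficial bardapi package
# (pip install bardapi). The token is the __Secure-1PSID cookie value from
# a logged-in Bard session; treat it as a secret.
import os

from bardapi import Bard


def call_bard(prompt: str) -> str:
    bard = Bard(token=os.environ["_BARD_API_KEY"])
    return bard.get_answer(prompt)["content"]
```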
Why use an XML format and not JSON? In our testing, Bard did not always produce correctly formatted JSON for this use case. It had trouble when the content included single or double quotation marks: in one case it incorrectly wrapped text outside of the quotes, resulting in invalid JSON that caused an exception during parsing. For this news use case, XML provided reliable results that our application code could easily parse. For other use cases, JSON may work just fine; be sure to test on different data sets to see what is best for your requirements. Even if you are not familiar with parsing XML in Python, you can ask Bard to write the code for you.

At this point, we have the basic news stories, but we want to transform them in a comic way. We can either combine the two steps into a single prompt, or use a second step to convert the headlines. One thing to keep in mind is that calls to the Bard API are often slower than typical services. If you use the output from Bard in a high-volume scenario, it may be advantageous to have a background process fetch content from Bard and cache it for your web application.

Another option for fast, reliable headlines is the GNews API, which allows up to 100 free queries per day. Create an account to get your API token. The documentation on the site is very good, or you can ask Bard to write the code to call the API. It has a top-headlines endpoint with several categories to choose from (world, business, technology, etc.), or you can perform a query to retrieve headlines using arbitrary keywords. Here is the code to get the top headlines:

```python
import json
import urllib.request


def get_gnews_headlines(category):
    # gnews_api_key holds the API token from your GNews account
    url = (
        f"https://gnews.io/api/v4/top-headlines?category={category}"
        f"&lang=en&country=us&max=10&apikey={gnews_api_key}"
    )
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read().decode("utf-8"))
        return data["articles"]
```

Use Bard to create a web interface

Bottle is a lightweight web server framework for Python that makes it quick and easy to create web applications. We used the following prompt to have Bard write code for a news feed web page built on the Bottle framework. Here is a snapshot of the resulting page. It could use some CSS styling, but it's not bad for a start. The links take you to the underlying news source and the full article.

Use Bard to transform headlines

With the basic news structure in place, Bard can be used for the creative part of transforming the headlines. The prompt and Python function to do this are shown below. Bard sometimes returns multiple lines of output, so this function only returns the first line of actual output.

```python
reword_prompt = '''Rewrite the following news headline to have a comic twist to it.
Use the comic style of Jerry Seinfeld.
If the story is about really bad news, then do not attempt to rewrite it and simply echo back the headline.
'''


def bard_reword_headline(headline):
    answer = call_bard(reword_prompt + headline)

    # Split the response into lines
    lines = answer.split("\n")

    # Ignore the first line, which is Bard's canned response
    lines = lines[1:]
    for line in lines:
        if len(line) > 0:
            return line.strip('"')

    return "No headlines available"
```
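You can try the reworder in isolation before wiring it into the web app; the exact output will vary from run to run, since Bard is generative:

```python
# Try the reworder on a single headline; prints a Seinfeld-style rewording.
headline = "City council approves new downtown parking garage"
print(bard_reword_headline(headline))
```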
Use Bard to create a web interface

Bottle is a lightweight web server framework for Python that makes it quick and easy to create web applications. We used the following prompt to have Bard write code for a news feed web page built using the Bottle framework. The resulting page could use some CSS styling, but it is not bad for a start. The links take you to the underlying news source and the full article.

Use Bard to transform headlines

With the basic news structure in place, Bard can be used for the creative part of transforming the headlines. The prompt and Python function to do this are shown below. Bard sometimes returns multiple lines of output, so this function only returns the first line of actual output.

```python
reword_prompt = '''Rewrite the following news headline to have a comic twist to it.
Use the comic style of Jerry Seinfeld.
If the story is about really bad news, then do not attempt to rewrite it and simply echo back the headline.
'''

def bard_reword_headline(headline):
    answer = call_bard(reword_prompt + headline)
    # Split response into lines
    lines = answer.split('\n')
    # Ignore the first line, which is Bard's canned response
    lines = lines[1:]
    for line in lines:
        if len(line) > 0:
            return line.strip("\"")
    return "No headlines available"
```

We then incorporate this into our web server code:

```python
import bottle
from gnews import get_gnews_headlines
from bard import bard_reword_headline

# Define the Bottle application.
app = bottle.Bottle()

# Define the route to display the news stories.
@app.route("/")
def index():
    # Get the news articles from the gnews.io API.
    # Category choices are: general, world, nation, business, technology,
    # entertainment, sports, science, and health
    articles = get_gnews_headlines("technology")
    # Reword each headline
    for article in articles:
        reworded_headline = bard_reword_headline(article['title'])
        article['comic_title'] = reworded_headline
    # Render the template with the news stories.
    return bottle.template("news_template.html", articles=articles)

# Run the Bottle application.
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Here is the result. I think Bard did a pretty good job.

Summary

In conclusion, building an irreverent news feed using Python and the Bard API is a practical and engaging project. By harnessing the power of Python, developers can extract relevant news data and employ creative techniques to inject humor and satire into the feed. The Bard API's generative text capabilities can be seamlessly integrated into the news content, adding a unique and entertaining touch. This project not only demonstrates the versatility of Python but also showcases the potential for innovative and humorous news delivery in the digital age.

Author Bio

Darren Broemmer is an author and software engineer with extensive experience in Big Tech and Fortune 500. He writes on topics at the intersection of technology, science, and innovation.

LinkedIn
article-image-animating-adobe-firefly-content-with-adobe-animate
Joseph Labrecque
14 Jun 2023
10 min read
Save for later

Animating Adobe Firefly Content with Adobe Animate

You can download the image source files for this article from here.

An Introduction to Adobe Firefly

Earlier this year, Adobe unveiled its set of generative AI tools via https://firefly.adobe.com/. These services are similar to other generative image AIs such as Midjourney or DALL-E 2 and are exposed through a web-based interface with a well-thought-out UI. Eventually, Adobe plans to integrate these procedures into existing software such as Photoshop, Premiere Pro, Express, and more. Originally gated behind a waitlist that one would need to sign up for, Firefly is now open to anyone for use with an Adobe ID.

Image 1: Adobe Firefly

Firefly is the new family of creative generative AI models coming to Adobe products, focusing initially on image and text effect generation. Firefly will offer new ways to ideate, create, and communicate while significantly improving creative workflows. Firefly is the natural extension of the technology Adobe has produced over the past 40 years, driven by the belief that people should be empowered to bring their ideas into the world precisely as they imagine them. -- Adobe

A few of the things that make Adobe Firefly different from similar generative AI services are its immediate approach to moving beyond prompt-based image generation to procedures such as prompt-driven text effects and vector recolorization, the fact that the Firefly model is based upon Adobe Stock content instead of general web content, and Adobe's commitment to the Content Authenticity Initiative.

As of early June 2023, the following procedures are available through Adobe Firefly:

- Text to Image: Generate images from a detailed text description.
- Generative Fill: Use a brush to remove objects, or paint in new ones from text descriptions.
- Text Effects: Apply styles or textures to text with a text prompt.
- Generative Recolor: Generate color variations of your vector artwork from a detailed text description.

In addition to this, the public Adobe Photoshop beta also has deep integration with Firefly through generative fill and other procedures exposed through time-tested Photoshop workflows.

Generating Images with Text Prompts

Let's have a look at how the prompt-based image generation in Adobe Firefly works. From the Firefly start page, choose the option to generate content using Text to Image and type in the following prompt:

cute black kitten mask with glowing green eyes on a neutral pastel background

On the image generation screen, select None as the content type. You should see results similar to the screenshot pictured below.

Image 2: Cute black kitten with green eyes

Firefly generates a set of four variations based on the given text prompt. You can direct the AI to adjust certain aspects of the images that are generated by tweaking the prompt, specifying certain styles via the user interface, and making choices around aspect ratio and other aspects.

Using Generative Fill

We can enter the generative fill interface directly following the creation of our image assets by hovering over the chosen variant and clicking the Generative Fill option.

Image 3: Generative fill output

The generative fill UI contains tools for inserting and removing content from an existing image with the direction of text prompts as well. In the example below, we are brushing over the kitten's forehead and specifying that we would like the AI to generate a crown in that place via a text prompt:

Image 4: Generating a crown using a text prompt

Clicking the Generate button will create a set of variants - similar to when we first created the image.
At this point, we can choose the variant we like best and click the Download button to download the image to our local machine for further manipulation.

Image 5: Image generated with a crown added

See the file Firefly Inpaint 20230531190125.png to view the resultant image generated by Firefly with the crown added. You can download the files from here.

Preparing for Animation with Photoshop

Before bringing the generated kitten image into Animate, we'll want to optionally remove the background using Photoshop. With the new contextual toolbar, Adobe makes this very simple. Simply open the generated image file in Photoshop and, with the single available layer selected, choose Remove Background from the contextual toolbar that appears beneath the image.

Image 6: Removing background in Photoshop

The background is removed via a layer mask that you can then use to refine the portions of your image that have been hidden, through the normal procedure of brushing black or white across the mask to hide or reveal content. See the file Kitten.png to view the image with the background removed by Photoshop.

To export your image without the background, choose File… Export… Export As… from the application menu and select to export a transparent .png file to a location on your local machine that you'll remember.

Animate Document Setup

We've prepared a new Animate document using the resulting .png file as a base. The stage measures 1024x1024 and has been given an appropriate background color. The FPS (frames per second) value is set to 30, meaning every 30 frames in the timeline define one second of motion. The kitten image has been added to the single layer in this document and is present on the stage. See the file KittenStarter.fla from here if you'd like to follow along in the next section.

Producing Motion from Generative Images with Adobe Animate

While Adobe Animate does not have any direct interaction with Firefly procedures yet, it has exhibited Adobe Sensei capabilities for a number of years now through automated lip-sync workflows that make use of the Sensei AI to match visemes with language sounds (phonemes) across the timeline for automated frame assignments. Since we are dealing with a kitten image and not a speaking character, we will not be using this technology for this article.

1. Instead, we'll be using another new animation workflow available in Animate through the Asset Warp tool and the creation of Warped Objects. Select the Asset Warp tool from the toolbar. This tool allows us to transform images into Warped Objects and to place and manipulate pins to create a warp mesh.

2. In the Properties panel, ensure in the Tool Properties tab that Creates Bones is toggled off within the Warp Options section. This will ensure we create pins and not bones, as would be needed for a full rig.

Image 7: Properties panel

3. Using the Asset Warp tool, click the tip of the kitten's right ear to establish a pin and transform the image to a Warped Object. An overlay mesh is displayed across the object, indicating the form and complexity of the newly created warped object.

Image 8: Warped Object

4. Add additional pins across the image to establish a set of control pins. I recommend moving clockwise around the kitten's face; I have placed a total of 8 pins in locations where I may want to control the mesh.

Image 9: Pins placed clockwise around the kitten's face

5. Now, look at the timeline and note that we currently have a single keyframe at frame 1.
It is good practice to establish your Warped Object and its associated pins first, before creating additional keyframes, but changes can be propagated across frames if necessary. Select frame 30 and insert a new keyframe from the menu above the timeline to duplicate the keyframe at frame 1. You'll then have 2 identical keyframes in the timeline with a span of frames between them.

6. Add a third keyframe at frame 15. This is the frame in which we will manipulate the Warped Object. The keyframes at frames 1 and 30 will remain as they are.

7. Using the Asset Warp tool once again, click and drag the various pins at frame 15 to distort the mesh and change the underlying image in subtle ways. I pull the ears closer together, move the chin down, and adjust the crown and cheeks very slightly.

8. We'll now establish animation between these three keyframes with a Classic Tween. In the timeline, select a series of frames across both frame spans so they are highlighted in blue. From the menu above the timeline, choose Create classic tween.

9. The frame spans take on a violet color, which indicates a set of Classic Tweens has been established. The arrows across each tween indicate there are no technical errors. Scrub the blue playhead across the timeline to view the animation you've created from our Firefly content. Your Adobe Firefly content is now fully animated!

Refining Warped Object Motion

If desired, you can refine the motion by inserting easing effects such as a Cubic Ease In Out across your tweens by selecting the frames of a tween and looking at the Frame tab in the Properties panel. In the Tweening section, simply click the Effect button to view your easing presets and double-click the one you wish to apply. See the file KittenComplete.fla to view the finished animation.

Sharing Your Work

It's easy to share your animated Firefly creation through the menu at the upper right of the screen in Animate. Simply click the Quick Share and publish icon to begin. Choose the Publish option, then Animated GIF (.gif), and click the Publish button to generate the file. You will be able to locate the published .gif file in the same folder as your .fla document. See the file Kitten.gif to view the generated animated GIF.

Author Bio

Joseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer. Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities.
He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works. Joseph is an Adobe Education Leader and a member of Adobe Partners by Design. He holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023

article-image-simplifying-code-review-using-google-bard
Darren Broemmer
11 Jun 2023
7 min read
Save for later

Simplifying Code Review using Google Bard

Code reviews are an essential part of the software development process. They help improve the quality of code by identifying potential bugs and suggesting improvements, and they ensure that the code is readable and maintainable. However, conducting code reviews takes time out of an engineer's busy schedule, and there is always the potential that latent issues in the code may be overlooked. Humans do make mistakes, after all. This is where Google Bard comes into play.

Bard is a tireless and efficient code reviewer that you can put to work anytime. It is a Large Language Model (LLM) that was trained on a massive dataset of text and code. While Bard is still listed as experimental, it has learned how to review and explain code written in many different programming languages.

How can Google Bard be used for code reviews?

Code reviews typically involve three basic steps:

- Understand: The reviewer first reads the code and comprehends its purpose and structure.
- Identify: Potential bugs or issues are noted. Gaps in functionality may be called out. Error handling is reviewed, including edge cases that may not be handled properly.
- Optimize: Areas of improvement are noted, and refactoring suggestions are made. These suggestions may be in terms of maintainability, performance, or other areas.

Fortunately, Bard can help in all three code review phases.

Ask Bard for an explanation of the code

Bard can generate a summary description that explains the purpose of the code along with an explanation of how it works. Simply ask Bard to explain what the code does. Code can be directly included in the prompt, or you can reference a GitHub project or URL. Keep in mind, however, that you will get more detailed explanations when asking about smaller amounts of code. Likewise, if you use the following prompt asking about the BabyAGI repo, you will get a summary description.

Image 1: Google Bard's explanation of BabyAGI

The interesting thing about Bard's explanation is that not only did it explain the code in human terms, but it also provided a pseudo-code snippet to make it easy to comprehend. This is impressive and extremely useful for a code reviewer: you have a roadmap to guide you through the rest of the review. Developers who are unfamiliar with the code or who need to understand it better can leverage this Bard capability. Given that LLMs are very good at text summarization, it is not surprising that they can do the same for code. This gives you a foundation for understanding the AGI concept and how BabyAGI implements it. Here is a highlight of the overview code Bard provided:

```python
def create_plan(text):
    """Creates a plan that can be executed by the agent."""
    # Split the text into a list of sentences.
    sentences = text.split(".")
    # Create a list of actions.
    actions = []
    for sentence in sentences:
        action = sentence.strip()
        actions.append(action)
    return actions

def main():
    """The main function."""
    # Get the task from the user.
    task = input("What task would you like to complete? ")
    # Generate the text that describes how to complete the task.
    text = get_text(task)
    # Create the plan.
    plan = create_plan(text)
    # Print the plan.
    print(plan)

if __name__ == "__main__":
    main()
```

BabyAGI uses a loop to continue iterating on nested tasks until it reaches its goal or is stopped by the user. Given a basic understanding of the code, reviews become much more effective and efficient.
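If you want to script this kind of review rather than paste code into the web interface, a small helper is enough. Here is a minimal sketch, assuming the unofficial bardapi Python package (the same Bard()/get_answer pattern used later in this article); the package requires a session token and, like the Bard API itself, is not production-ready:

```python
from bardapi import Bard  # unofficial package; reads its token from the environment

def bard_review(source_code: str) -> str:
    """Ask Bard to review a code snippet and return its plain-text feedback."""
    prompt = (
        "Review the following Python code for potential bugs or errors:\n\n"
        + source_code
    )
    answer = Bard().get_answer(prompt)
    return answer["content"]
```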
Identifying bugs using Bard

You can also ask Bard to identify potential bugs and errors. Bard will analyze the code and look for patterns associated with bugs. It will point out edge cases or validations that may be missing. It also looks for code that is poorly formatted, has unused variables, or contains complex logic.

Let's have Bard review its own summary code shown above. The get_text function has been modified to use the Bard API. Our review prompt is as follows; it includes the code to review inline.

Review the following Python code for potential bugs or errors:

```python
import requests
import json

def get_text(task):
    """Generates text that describes how to complete a given task."""
    bard = Bard()
    answer = bard.get_answer(task)
    return answer['content']

def create_plan(text):
    """Creates a plan that can be executed by the agent."""
    # ... remainder of code omitted here for brevity ...
```

Image 2: Google Bard's explanation of reviewed code

Bard pointed out a number of potential issues. It noted that error handling is missing around the Bard API call. It commented that input validations should be added, as well as default values that could make the code more user-friendly.

Optimizations using Google Bard

Bard already provided some suggested improvements during its review. You can also ask Bard specifically to suggest optimizations to your code. Bard does this by analyzing the code and looking for opportunities to improve its efficiency, reliability, or security. For example, Bard can suggest ways to simplify code, improve performance, or make it more secure.

Let's do this for the main BabyAGI Python script. Instead of referencing the GitHub repository as we did in the explanation phase, we provide the main loop code inline so that Bard can provide more detailed feedback. The structure of the prompt is similar to the prior phase.

Review the following Python code for potential optimizations:

```python
def main():
    loop = True
    while loop:
        # As long as there are tasks in the storage…
        # ... remainder of function omitted here for brevity ...
```

Image 3: Google Bard's suggested optimizations

While the print suggestion is not particularly helpful, Bard does make an interesting suggestion regarding the sleep function. Beyond pointing out that it may become a bottleneck, it highlights that the sleep function used is not thread-safe. While time.sleep is blocking, asyncio.sleep is non-blocking: awaiting asyncio.sleep asks the event loop to run something else until the await completes. If we were to incorporate this script into a more robust web application, this would be an important optimization to make.
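To make the blocking versus non-blocking distinction concrete, here is a small self-contained illustration using only the standard library; the one-second delays are arbitrary:

```python
import asyncio
import time

def blocking_wait():
    time.sleep(1)  # blocks the entire thread; nothing else can run meanwhile

async def non_blocking_wait():
    await asyncio.sleep(1)  # yields to the event loop while waiting

async def main():
    # Both coroutines wait concurrently, so this completes in about 1 second,
    # whereas two blocking_wait() calls in a row would take about 2 seconds.
    await asyncio.gather(non_blocking_wait(), non_blocking_wait())

if __name__ == "__main__":
    asyncio.run(main())
```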
General tips for using Bard during code reviews

Here are some tips for using Bard for code reviews:

- Be specific where possible. For example, you can ask for specific types of improvements and optimizations during your code review. The more specific the request, the better Bard will be able to respond to it.
- Provide detailed context where needed. The more information Bard has, the more effective the review feedback will be.
- Be open to feedback and suggestions from Bard, even if they differ from your own opinions. Consider and evaluate the proposed changes before implementing them.

Conclusion

Google Bard is a powerful tool that can be used to improve the quality of code. Bard can help to identify potential bugs and errors, suggest improvements, and provide documentation and explanations. This can help to improve the efficiency, quality, and security of code. If you are interested in using Google Bard for code reviews, you can sign up for the early access program. To learn more, visit the Google Bard website.

Author Bio

Darren Broemmer is an author and software engineer with extensive experience in Big Tech and Fortune 500. He writes on topics at the intersection of technology, science, and innovation.

LinkedIn

article-image-improvising-your-infrastructure-as-code-using-ai-tools
Russ McKendrick
11 Jun 2023
6 min read
Save for later

Improvising your Infrastructure as Code using AI-tools

In Infrastructure as Code for Beginners, I briefly discussed the role of AI tools such as ChatGPT. Since then, new features have emerged, making this the perfect time to revisit the topic and evaluate whether my initial recommendations have evolved.

Let's quickly recap: GPT-powered chat services, including ChatGPT, certainly have their place in Infrastructure as Code (IaC) projects. However, we are not quite at the point where we can ask these tools to create our projects from scratch. The primary reason for this is straightforward: GPT, while impressive, can make mistakes. This would be fine, except that when GPT errs, it often does so by inventing information, which it then presents in a highly convincing manner.

On the one hand, you have Infrastructure as Code, which has been taking away the heavy lifting of managing infrastructure for quite a while; a lot of the tooling is well established, even though it is still considered a fast-moving technology by most. On the other hand, artificial intelligence tools such as ChatGPT have shown up and quite dramatically changed what we thought was possible even just a year ago, impacting how we create, test, and maintain code.

Here are just some of the use cases I have been combining the two for in the last few months:

- Code Quality: You can have ChatGPT check your code and make recommendations.
- Cutting Down on Human Error: ChatGPT can double-check and make sure you are not doing something dumb if you ask it to.
- Boosting Productivity: You can ask ChatGPT to add inline comments to your code, update variables, and handle the boring stuff.

Remember, we're just scratching the surface here. The combo of AI and IaC is still fresh, and there's a lot of territory left to explore.

Improvising your Terraform code with ChatGPT

Rather than talk any more, let's take a look at implementing the use cases mentioned above. To start with, let's have ChatGPT review an example Terraform block and make some recommendations on how it could be improved. Here is the prompt that we can use:

I have the following Terraform code; it creates a Storage Account in Microsoft Azure:

```hcl
resource "azurerm_storage_account" "example" {
  name                     = "saiacforbeg2022111534"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
}
```

Could you check it and provide recommendations on how it could be improved?

The prompt above produced the following recommendations, all of which are valid observations:

Image 1: Recommendations that ChatGPT gave me

As well as rewriting the code considering its recommendations, it also recognized that we are deploying an Azure Storage Account and pointed out the following:

Additionally, please remember that storage account names in Azure need to be globally unique, must be between 3 and 24 characters in length, and can include numbers and lowercase letters only. Thus, you might want to consider adding checks to ensure that these constraints are met.
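If you want to run this kind of review repeatedly, the same prompt can be scripted against the OpenAI API instead of the chat interface. Here is a minimal sketch; the model name, file path, and system prompt are assumptions for illustration, not the exact setup used in this article:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def review_terraform(path="main.tf"):
    """Send a Terraform file to the model and return its review feedback."""
    with open(path) as f:
        code = f.read()
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a Terraform code reviewer."},
            {"role": "user", "content": "Could you check this code and provide "
             "recommendations on how it could be improved?\n\n" + code},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(review_terraform())
```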
Now let's ask ChatGPT to add some inline comments explaining what is going on in the main block of code, using the following prompt:

Please add inline comments explaining what is happening to the code below …

ChatGPT gave the following response:

Image 2: Fully commented code

Finally, we can ask ChatGPT to check and make some suggestions about how we could better secure our Storage Account using the following prompt:

Image 3: A list of recommendations for improving our Terraform code

Let's see if ChatGPT could update the code with some of those suggestions, and have a look at the result:

Image 4: Recommendations applied within the context of the original Terraform code

GPT vs Bing vs Bard

How do other AI chatbots handle our original prompt? Let's start with Bing:

Image 5: What Bing Chat had to say about our Terraform code

Well, that wasn't helpful at all - let's now see if Google Bard fares any better:

Image 6: What Google Bard had to say about our Terraform code

It didn't just give us a thumbs up, which is a good start, and it made similar recommendations to ChatGPT. However, while Bard made some suggestions, they were not as detailed, and it didn't point out anything outside of what we asked it to, unlike ChatGPT, which recognized we were creating a storage account and gave us some additional pointers.

Now, don't write Microsoft off just yet. While Bing didn't blow me away with its initial responses, Microsoft's various Copilot services - all of which are launching soon and are GPT-4 based - should be up to the same level of responsiveness as ChatGPT. In the case of GitHub Copilot X, I expect it to potentially surpass ChatGPT in terms of usefulness and convenience, as it is going to be built directly into your IDE and will have all of GitHub to learn from.

Given the rate of change - it feels like new tools, features, and power are being added every other day now - the next 6 to 12 months are going to be an exciting time for anyone writing Infrastructure as Code, as the tools we use day-to-day start to get more direct access to the AI tools we have been discussing.

Summary

In conclusion, this article delved into the process of improving code quality through various means, including seeking recommendations, incorporating inline code comments, and leveraging the capabilities of ChatGPT, Bing, and Google Bard. The demonstration emphasized the importance of continuously enhancing code and actively involving AI-powered tools to identify areas for improvement. Additionally, the article explored how ChatGPT's insights and suggestions were utilized to enhance storage security through stronger and more reliable code implementation. By employing these techniques, developers can achieve code optimization and bolster the overall security of their storage systems.

Author Bio

Russ McKendrick is an experienced DevOps practitioner and system administrator with a passion for automation and containers. He has been working in IT and related industries for the better part of 30 years. During his career, he has had responsibilities in many different sectors, including first-line, second-line, and senior support in client-facing and internal teams for small and large organizations. He works almost exclusively with Linux, using open-source systems and tools across dedicated hardware and virtual machines hosted in public and private clouds at Node4, where he holds the title of practice manager (SRE and DevOps). He also buys way too many records!

Author of the book: Infrastructure as Code for Beginners

article-image-using-ai-functions-to-solve-complex-problems
Shaun McDonogh
09 Jun 2023
8 min read
Save for later

Using AI Functions to Solve Complex Problems

GPT-4 feels like Einstein's theory of general relativity, in that it will take programmers a long time to uncover all the new possibilities - analogous to how physicists today are still extracting useful insights from Einstein's theory.

Here is one example. Big problems can be solved by breaking them down into small tasks. Our jobs usually boil down to performing one problem-solving or productive activity after the next. So, what if you could discern clearly what those smaller tasks were and give them to your side-kick brain to figure out and run for you 'auto-magically'?

Note that the above says, what if you could discern. Already we bump up against a lesson in using LLMs. Why do you need to discern it? Can't GPT-4 be given the job of discerning the tasks required? If so, could it then orchestrate other GPT-4 "agents" to break down the task further? This is exactly where we can make use of a tool like Auto-GPT.

Let's look at an example that I am developing right now for an Auto-GPT using Rust. Say you build websites for a living. Websites need a web server. Web servers take a request (usually through an API), run some code in line with that request, and send back a response. Let's break down the goal of building a web server into the actors or agents that might be required when building an Auto-GPT:

- Solution Architect: Work with the customer (you) to develop the scope and aim of the application. Confirm the recommended technology stack to work within.
- Project (Agent) Manager: Compile and track a preliminary list of tasks needed to complete the project.
- Data Manager: Identify required external and internal API endpoints.
- Code Admin: Write a code spec sheet for the developer.
- Junior Developer: Take a first stab at the code. Run unit testing.
- Senior Developer: Sense-check the initial code and adjust. Run unit testing.
- Quality Control: Test and send back code for adjustments.
- Customer (you): Supply feedback and request adjustments.

You may have guessed that we will not be hiring 7 members of staff to build web servers. However, planning out this structure is necessary to help us organise building a bot that can itself build, test, and deploy any backend server you ask it to.

Our agents will need tools to work with. They need to be given a set of instructions and functions that will get the subtask done without the usual response of "I am just an AI language model". When developing such a tool, it became clear to me that a new paradigm has hit developers: "AI Functions". What we are about to discuss brings up potential gaps in the guard-rails of GPT-4, which is unlikely to be news to the OpenAI team but may be of some surprise to you.

What are AI Functions?

The fact that AI hallucinates would annoy most people - but not programmers, nor the folk who developed Marvin. Right now, when you ask ChatGPT to provide you with a response, it sends a whole bunch of jargon with it. What if I told you there was a hack, a way to get distilled and relevant information out of ChatGPT automatically through the OpenAI API that leads to a structured response that can drive actions in your code? Here is an example.
Suppose you give an AI the following "function", which is just text disguised as a programming function:

```rust
struct Sentiment {
    happy_rating: i32,
    sad_rating: i32,
}

fn convert_comments_into_sentiment(comments: String) -> Sentiment {
    // Function takes in comments from YouTube video
    // Considers all aspects of what is said in comments
    // Returns Sentiment struct with happy and sad rating
    // Does not return anything else. No commentary.
    return sentiment;
}
```

You then ask the AI to print only what the function would return and pass it the YouTube comments as an input. The AI (GPT-4) would then hallucinate a response, even without there being any actual code within the function. None. There is no code to perform this task.

No jargon like "I am an AI, I cannot blah blah blah" is returned. Nothing like that; you would extract the AI's pure sentiment analysis in a format which you can then build into your application.

The hack here is that we have built a pretend function, without any code in it. We described exactly what this function (written in Rust syntax in this example) does, and the structure of its output, using comments and a struct. GPT-4 then interprets what it thinks the function does and assesses it against the input provided (YouTube comments in this example). The output is then structured in the LLM's response. Therefore, we can have LLMs supply responses that are structured in such a way that they can be used by other agents (also the GPT-4 LLM).

The entire process would look something like this: this is an oversimplified view of what is happening. Note that agents are not always needed; sometimes the function can just be called as regular code, no LLM needed.
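For readers who want to try the trick outside of Rust, here is a minimal Python sketch of the same AI Function pattern against the OpenAI API. The fake function text, model name, and prompt wording are assumptions for illustration; note that the "function" body still contains no real code:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# The "function" is plain text: a signature, comments, and an output shape.
FAKE_FUNCTION = '''
def convert_comments_into_sentiment(comments: str) -> dict:
    # Function takes in comments from a YouTube video
    # Considers all aspects of what is said in the comments
    # Returns a dict like {"happy_rating": 0, "sad_rating": 0}
    # Does not return anything else. No commentary.
    return sentiment
'''

def run_ai_function(comments: str) -> str:
    """Ask the model to print only what the fake function would return."""
    prompt = (
        FAKE_FUNCTION
        + '\nPrint only what convert_comments_into_sentiment would return '
        + 'for the following input, with no extra text:\n'
        + comments
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumption: GPT-4 API access
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"].strip()
```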
The Bigger Picture

Why stop there? Auto-GPTs will likely then become like plugins. With Auto-GPTs popping up everywhere, most software-related tasks can be taken care of. Rather than calling an API, your Auto-GPT could call another Auto-GPT. Rather than write AI Functions, a primary agent would know which Auto-GPTs should be assigned to various tasks. It could then write its own program that connects to them.

This sort of recursion or self-improvement is where growth in AI has already been predicted to proceed either exponentially or in sigmoid-like jumps. My own version of this self-programming task is already done. It was surprisingly trivial to do with GPT-4's help. It works well, but it doesn't replace human developers yet.

Not All Sunshine and Roses

GPT-4 is not without some annoying limitations; these include:

- Quality of performance at the given task.
- Changes in quality of response depending on a slight change in input.
- Text formatting that occasionally includes something the model should not.
- Size and dollar cost of output.
- Time, latency, delay, and crashing of API output.

These limitations could technically be worked around by breaking down tasks into even smaller chunks, but this is where I draw a line. What is the point of trying to automate anything if you end up having to code everything? However, it has become clear through this exercise that just a few improvements on the above limitations will lead to a significant amount of task automation with Auto-GPTs.

Quick Wins with Self-testing

One unique algorithm which has worked very well with the current technology is the ability for GPT-4 to write code, and for our program to then execute and test the code and provide feedback to GPT-4 on what is not working. The AI Function then prompts GPT-4 as a "Senior Developer Agent" to re-write the code and return to the "Quality Agent" for more unit testing. What you end up with is a program that can write code, test it, and confirm all is working. This is working now and is useful. The downside is still the cost of making too many calls to the OpenAI API and some quality limitations with GPT-4. Therefore, until GPT-5, I remain working directly in my development environment or in the ChatGPT interface, extracting quick code snippets.

Next Steps

It sounds clichéd, but I'm learning something new every day working with LLMs. A recurring realisation is that we should not only be trying to solve newer problems in less time, but rather learning what the right questions to ask our current LLMs even are. They are capable of more than I believe we realise. When open-source LLMs reach a GPT-5-like level, Auto-GPTs, I suspect, will be everywhere.

Author Bio

Shaun McDonogh is the lead developer and researcher at Crypto Wizards. He has 6 years' experience in developing high-performance trading software solutions with ML and 15 years' experience working with frontend and backend web technologies. In his prior career, Shaun worked as a Finance Director, Data Scientist, and Analytics Lead in various multi-million-dollar entities before making the transition to developing pioneering solutions for developers in emerging tech industries. His favorite programming languages to teach include Rust, TypeScript, and Python. Shaun's mission is to support developers in navigating the ever-changing landscape of emerging technologies in computer software engineering through a new coding arm, Code Raiders.
article-image-building-a-gan-model-in-tensorflow-easily-with-autogpt
Rohan Chikorde
08 Jun 2023
7 min read
Save for later

Building a GAN model in TensorFlow easily with AutoGPT

AutoGPT is a large language model application that can be used to perform a variety of tasks, such as generating text, translating languages, writing different kinds of creative content, and answering your questions in an informative way. To use AutoGPT, you first need to specify a goal in natural language. AutoGPT will then attempt to achieve that goal by breaking it down into sub-tasks and using the internet and other tools in an automatic loop.

In this blog, we will be using godmode.space for the demo. Godmode is a web platform for accessing the powers of AutoGPT and Baby AGI. AI agents are still in their infancy, but they are quickly growing in capabilities, and the hope is that Godmode will enable more people to tap into autonomous AI agents even at this early stage.

First, you need to create your API key and install AutoGPT as shown in this article (Set Up and Run Auto-GPT with Docker). Once you set up your API key, click on the Settings tab on the godmode.space website and put in your API key. Once you click on Done, the following screen should appear:

Image 1: API Key

Specifying the Goal

When you specify a goal in natural language, AutoGPT will attempt to understand what you are asking for. It will then break down the goal into sub-tasks and, if needed, use the internet and other tools to achieve the goal. In this example, we will tell AutoGPT to build a Generative Adversarial Network (GAN) from scratch with TensorFlow.

Automatically breaking down a goal into sub-tasks

AutoGPT breaks down a goal into sub-tasks by identifying the steps that need to be taken to achieve it. For example, if the goal is to build a GAN, AutoGPT will break the goal down into the following sub-tasks (these will change based on the user's prompt):

- Define the generator and discriminator architectures.
- Implement the training loop for the GAN.
- Evaluate the performance of the GAN on a test dataset.

Here is an example of how you can use AutoGPT to "Build a Generative Adversarial Network (GAN) from Scratch with TensorFlow":

Specify the goal with a simple prompt: Build a Generative Adversarial Network (GAN) from Scratch with TensorFlow

Image 2: Specifying the final goal

The provided functionality will present suggested options, and we will choose all the available options. Additionally, you have the flexibility to provide your own custom inputs if desired, as shown in the preceding image.

Image 3: Breaking down the goal into smaller tasks

Break down the goal into sub-tasks as follows:

- Define the generator and discriminator architectures. Write the code for a Generative Adversarial Network (GAN) to a file named gan.py.
- Implement the training loop for the GAN. Write text to a file named 'gan.py' containing code for a GAN model.
- Evaluate the performance of the GAN on a test dataset.

Image 4: Broken-down sub-tasks

Next, we will launch AutoGPT in God Mode.

Image 5: Launching AutoGPT in God Mode

As observed, the agent has initiated and commenced by sharing its thoughts, reasoning, and the proposed action, as shown in the two images below:

Image 6: Plan proposed by Auto-GPT

Considering the provided insights and rationale, we approve this plan. The process is open to input and feedback, which will be considered to refine the subgoals accordingly. After approving the plan, the following are the results:

Image 7: AutoGPT providing thoughts and reasoning for the task
8. According to the next task specified, the next plan suggested by AutoGPT is to write the entire code to the gan.py file:

Image 8: Plan to copy the entire code to a file

On approving this plan, the action will commence, and we will see the following screen as output after the task completes:

Image 9: Thoughts and reasoning for the previous task

Upon opening the file, we can observe that the initial structure has been created. The necessary submodules have been generated, and the file is now prepared to accommodate the addition of code. Additionally, the required libraries have been determined and imported for use in the file:

Image 10: Initial structure created inside the file

We will proceed with AutoGPT generating the complete code and then review and approve the plan accordingly. Following is a snapshot of the generated output code; we can observe that AutoGPT has successfully produced the complete code and filled in the designated blocks according to the initial structure provided. (Please note that the snapshot showcases only a portion of the generated output code; the complete code is extensive and cannot be fully reproduced here due to its size.)

Image 11: Final output
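For reference, here is a minimal sketch of the kind of generator and discriminator AutoGPT was asked to produce. The layer sizes and the flattened 28x28 image shape are assumptions for illustration, not the exact code AutoGPT generated:

```python
import tensorflow as tf

def build_generator(latent_dim=100):
    """Map a random latent vector to a flattened 28x28 image."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
        tf.keras.layers.Dense(784, activation="tanh"),  # flattened 28x28 output
    ])

def build_discriminator():
    """Score a flattened 28x28 image as real (1) or fake (0)."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. fake score
    ])

if __name__ == "__main__":
    generator = build_generator()
    noise = tf.random.normal((1, 100))
    fake_image = generator(noise)
    score = build_discriminator()(fake_image)
    print(score.numpy())
```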
AutoGPT Industry-specific use cases

- Data Science and Analytics: AutoGPT can be used to automate tasks such as data cleaning, data wrangling, and data analysis. This can save businesses time and money, and it can also help them to gain insights from their data that they would not have been able to obtain otherwise.
- Software Development: AutoGPT can be used to generate code, which can help developers to save time and improve the quality of their code. It can also be used to create new applications and features, which can help businesses to stay ahead of the competition.
- Marketing and Content Creation: AutoGPT can be used to generate marketing materials such as blog posts, social media posts, and email campaigns. It can also be used to create creative content such as poems, stories, and scripts. This can help businesses to reach a wider audience and to engage with their customers in a more meaningful way.
- Customer Service: AutoGPT can be used to create chatbots that can answer customer questions and resolve issues. This can help businesses to provide better customer service and to save money on labor costs.
- Education: AutoGPT can be used to create personalized learning experiences for students. It can also be used to generate educational content such as textbooks, articles, and videos. This can help students to learn more effectively and efficiently.

Conclusion

AutoGPT is a powerful AI tool that has the potential to revolutionize the way we work and live. It can be used to automate tasks, generate creative content, and solve problems in a variety of industries. As AutoGPT continues to develop, it will become even more powerful and versatile. This makes it an incredibly exciting tool with the potential to change the world.

Author Bio

Rohan Chikorde is an accomplished AI Architect professional with a post-graduate in Machine Learning and Artificial Intelligence. With almost a decade of experience, he has successfully developed deep learning and machine learning models for various business applications. Rohan's expertise spans multiple domains, and he excels in programming languages such as R and Python, as well as analytics techniques like regression analysis and data mining. In addition to his technical prowess, he is an effective communicator, mentor, and team leader. Rohan's passion lies in machine learning, deep learning, and computer vision.

LinkedIn

article-image-writing-improvised-unit-test-cases-with-github-copilot
Mr. Avinash Navlani
08 Jun 2023
6 min read
Save for later

Writing improvised unit test cases with GitHub Copilot

In the last couple of years, AI and ML have provided various tools and methodologies to transform the working styles of software developers. GitHub Copilot is an AI-powered tool that offers code suggestions for rapid software development. GitHub Copilot can be a game changer for developers across the world. It can increase productivity by recommending code snippets and auto-completions. In this tutorial, we will discuss unit tests, GitHub Copilot features, account creation, VS Code setup, a Python example, a unit test example, and GitHub Copilot advantages and disadvantages.

Set up your account at GitHub Copilot

Let's first create your account with GitHub Copilot and then use that account in an IDE such as VS Code. We can set up your account from the following web page: GitHub Copilot · Your AI pair programmer.

Image 1: GitHub Copilot · Your AI pair programmer

Click on the Get Copilot button and start your 30-day free trial; note that we need to enter credit/debit card or PayPal account details. After this, activate your GitHub Copilot account by following the instructions mentioned in the GitHub quickstart guide for individual or organization access.

Image 2: Activate GitHub Copilot in Settings

Setup in Visual Studio Code

The first step is to download and install Visual Studio Code from the official download page. Install the GitHub Copilot extension by following the instructions in the quickstart guide. We can directly search for ext install GitHub.copilot in Quick file navigation (shortcut key: Ctrl + P).

Image 3: Visual Studio home

Log in with your GitHub credentials as mentioned in the quickstart guide.

Create a Python Function is_prime

In this section, we create the function is_prime with Copilot recommendations and see how it can help software developers in their coding journey. For executing any Python code, we need a Python file (with a .py extension). This file will help the interpreter identify that this is a Python script.

First, we create a Python file (prime_number.py) from the File tab by selecting the new file option, or we can press Alt + Ctrl + N. After creating the Python script file, we will create the is_prime(number) function. The is_prime() function will check whether a number is prime or not. We can see in the below snapshot that Copilot also suggests the docstring for the function as per the name of the function:

Image 4: Doc-string suggestions from Copilot

After creating the file, we added the is_prime function, and on writing the logic for the is_prime() function, Copilot suggests an internal if condition inside the for loop of the is_prime() function. We can see that suggested if statement in the below snapshot.

Image 5: GitHub Copilot suggests an if statement

After suggesting the if statement, Copilot suggests the return value True, as shown in the below snapshot:

Image 6: GitHub Copilot suggests a return value

This is how we can write the Python function is_prime. We can also write similar functions as per your project requirements. Writing code with GitHub Copilot makes development much faster and more productive by providing code snippet suggestions and auto-completions.
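Since the code in this tutorial appears only as screenshots, here is a runnable sketch of roughly where we will end up once the unit tests in the next section are added; the exact logic Copilot suggests may differ:

```python
def is_prime(number):
    """Check whether a number is prime."""
    if number < 2:
        return False
    for i in range(2, number):
        if number % i == 0:
            return False
    return True

def unit_test_prime_number():
    """Unit tests for the is_prime function."""
    assert is_prime(2) is True
    assert is_prime(3) is True
    assert is_prime(4) is False
    assert is_prime(17) is True
    assert is_prime(1) is False
    print("All tests passed")

if __name__ == "__main__":
    unit_test_prime_number()
```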
Write Unit Tests for Python Function is_prime

Unit testing is a very important component of the coding process that builds confidence for software developers and helps ensure bug-free, quality code. The main objective of this testing is to verify the expected functionality at the individual, isolated component level. Let's see how we can define unit tests using the AI-powered code suggestion platform GitHub Copilot.

First, we will create a function unit_test_prime_number() for unit testing the is_prime() function. Here, we can see how Copilot suggests the doc-string in the below snapshot:

Image 7: GitHub Copilot suggests a doc-string for the unit_test method

After writing the doc-string, we will add the assertion conditions for the unit test. As we can see, Copilot is auto-suggesting the assertion condition for the test in the below snapshot:

Image 8: GitHub Copilot suggests an assertion condition

Similarly, Copilot suggests the other assertion conditions with different possible inputs for testing the is_prime() function, as shown in the below snapshot:

Image 9: GitHub Copilot suggests other assertion conditions

This is how GitHub Copilot can help us write faster unit tests. It speeds up development and increases developer productivity through code suggestions and auto-completions. Let's see the final output in the below coding snapshot:

Image 10: A final program created with GitHub Copilot suggestions

In the above snapshot, we can see the complete code for the is_prime() function and unit_test_prime_number(). Also, we can see the output of the unit_test_prime_number() function in the terminal.

Advantages and Disadvantages of the Copilot Unit Test

GitHub Copilot offers the following advantages and disadvantages:

- It improves productivity and reduces development time by offering efficient test cases.
- Developers can focus on more significant problems rather than small code snippets for unit testing.
- A novice coder can also easily utilize its capability to ensure quality development.
- Sometimes it may suggest irrelevant or vulnerable code because it doesn't have proper context and user behavior.
- As a developer, we also need to consider how edge cases are covered in unit tests.
- Sometimes it may lead to syntax errors, so we must verify the syntax.

Summary

GitHub Copilot has revolutionized the way programmers write code. It offers AI-based code suggestions and auto-completions. Copilot boosts developer productivity and rapid application development by providing accurate, relevant, robust, and quality code. That said, it may sometimes have problems covering edge cases. In the above tutorial, I shared a Python example; we can also try other languages such as JavaScript, TypeScript, Ruby, C#, C++, and Go.

Reference

GitHub Copilot Quick Start Guide: https://docs.github.com/en/copilot/quickstart
Unit tests: https://www.freecodecamp.org/news/how-to-write-unit-tests-for-python-functions/

Author Bio

Avinash Navlani has over 8 years of experience working in data science and AI. Currently, he is working as a senior data scientist, improving products and services for customers by using advanced analytics, deploying big data analytical tools, creating and maintaining models, and onboarding compelling new datasets. Previously, he was a lecturer at the university level, where he trained and educated people in data science subjects such as Python for analytics, data mining, machine learning, database management, and NoSQL. Avinash has been involved in research activities in data science and has been a keynote speaker at many conferences in India.

LinkedIn

Python Data Analysis, Third Edition

article-image-everything-you-need-to-know-about-pinecone-a-vector-database
Avinash Navlani
08 Jun 2023
5 min read
Save for later

Everything you need to know about Pinecone – A Vector Database

In this 21st century of information, we need efficient, reliable storage and faster information retrieval. Relational and other traditional databases remain crucial for most computer applications, but they struggle to handle data in different forms such as documents, key-value pairs, and graphs. The vector database is a novel approach that uses vectorization for efficient search, storage, and data analysis.

Image 1: Traditional vs Vector Database

Pinecone is one such vector database that is widely accepted across the industry for addressing challenges such as complexity and dimensionality. Pinecone is a cloud-native vector database that handles high-dimensional vector data. The core approach underlying Pinecone is Approximate Nearest Neighbor (ANN) search, which efficiently locates matches and ranks them within a large dataset. In this tutorial, our focus will be on the Pinecone database, its features, challenges, and use cases.

Working Mechanism

Traditional databases search for exact query matches, while vector databases search for the most similar vector to the input query. They use ANN (Approximate Nearest Neighbour) search, which provides approximate results at high performance, accuracy, and speed. Let's see the vector database working mechanism.

Image 2: Vector Database Query Mechanism

Vector databases first convert data into vectors and create indexes for faster searching. The database compares the indexed query vector with the indexed vectors in the database using a nearest-neighbor search or similarity metric, and computes the most similar results. Finally, it post-processes the most similar results returned by the nearest-neighbor search.

Features

Pinecone is a cloud-based vector database that offers various features and benefits to the infrastructure community:

- Fast and fresh vector search: Pinecone provides ultra-low query latency, even with billions of items. This means that users will always get a great experience, even when searching large datasets. Additionally, Pinecone indexes are updated in real-time, so users always have access to the most up-to-date information.
- Filtered vector search: Pinecone allows you to combine vector search with metadata filters to get more relevant and faster results. For example, you could filter by product category, price, or customer rating.
- Real-time updates: Pinecone supports real-time data updates, allowing for dynamic changes to the data. This contrasts with standalone vector indexes, which may require a full re-indexing process to incorporate new data. It also offers reliability, massive scalability, and security.
- Backups and collections: Pinecone handles the routine operation of backing up all the data stored in the database. You can also selectively choose specific indexes that can be backed up in the form of "collections", which store the data in that index for later use.
- User-friendly API: Pinecone provides a user-friendly API layer that simplifies the development of high-performance vector search applications. This API layer is also language-agnostic, so you can use it with any programming language.
- Programming language integration: It supports a wide range of programming languages for integration.
- Cost-effectiveness: It is cost-effective because it offers a cloud-native architecture and pay-per-use pricing.
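To make this concrete, here is a minimal usage sketch based on the pinecone-client Python package as it existed in mid-2023; the API key, environment name, index name, and the toy 8-dimensional vectors are all assumptions for illustration:

```python
import pinecone

# Connect with your own key and environment (assumptions).
pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

# Create a small index and get a handle to it.
pinecone.create_index("quickstart", dimension=8, metric="cosine")
index = pinecone.Index("quickstart")

# Upsert a couple of toy vectors with metadata.
index.upsert(vectors=[
    ("doc1", [0.1] * 8, {"topic": "search"}),
    ("doc2", [0.2] * 8, {"topic": "recommendations"}),
])

# Query for the nearest neighbours of a new vector.
result = index.query(vector=[0.15] * 8, top_k=2, include_metadata=True)
print(result)
```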
Challenges

The Pinecone vector database offers high-performance data search at scale, but it also faces a few challenges:

- Integration with other applications will evolve over a period of time.
- Data privacy is the biggest concern for any database; organizations need to implement proper authentication and authorization mechanisms.
- Vector-based models offer little interpretability, so it is challenging to explain the underlying reason behind the relationships they surface.

Use cases

Pinecone has a variety of real-life industry applications. Let's discuss a few:

- Audio/textual search: Pinecone offers fast, fully deployment-ready search and similarity functionality for high-dimensional text and audio data.
- Natural language processing: Pinecone utilizes AutoGPT to create context-aware solutions for document classification, semantic search, text summarization, sentiment analysis, and question-answering systems.
- Recommendations: Pinecone enables personalized recommendations with efficient similar-items recommendations that improve user experience and satisfaction.
- Image and video analysis: Pinecone also has the capability of faster retrieval of image and video content. It is very useful in real-life surveillance and image recognition.
- Time-series similarity search: Pinecone can detect time-series patterns in historical time-series data using a similarity search service. Such a core capability is quite helpful for recommendation, clustering, and labeling applications.

Summary

The Pinecone vector database offers high-performance search and similarity matching. It can deal with high-dimensional vector data at scale, offers easy integration, and returns fast query results. Pinecone provides a reliable and fast option for searching at scale.

Author Bio

Avinash Navlani has over 8 years of experience working in data science and AI. Currently, he is working as a senior data scientist, improving products and services for customers by using advanced analytics, deploying big data analytical tools, creating and maintaining models, and onboarding compelling new datasets. Previously, he was a university lecturer, where he trained and educated people in data science subjects such as Python for analytics, data mining, machine learning, database management, and NoSQL. Avinash has been involved in research activities in data science and has been a keynote speaker at many conferences in India.

Link - LinkedIn

Python Data Analysis, Third edition
article-image-using-llamaindex-for-ai-assisted-knowledge-management
Andrei Gheorghiu
08 Jun 2023
10 min read
Save for later

Using LlamaIndex for AI-Assisted Knowledge Management

Introduction

One of the hottest questions of the moment for a lot of strategy and decision makers across most industries worldwide is: how can AI help my business? After all, with great disruption also comes great opportunity. A sound business strategy should not ignore emerging changes in the market. We're still at the early stages of understanding AI, and I'm not going to provide a definitive answer to this question in this article, but the good news is that this article should provide a part of the answer.

Knowledge is power, right? And yet we all know how it is to struggle trying to retain and efficiently re-use the knowledge we gather. We strive to learn from our successes and mistakes, and we invest a lot of time and money in building fancy knowledge bases, just to discover later that we keep repeating the same mistakes and reinventing the wheel. In my experience as a consultant, the biggest issue (especially for medium and large companies) is not a lack of knowledge but, on the contrary, too much knowledge and an inability to use it in a timely and effective manner.

The solution

This article presents a very simple yet effective way of indexing large quantities of pre-existing knowledge that can later be retrieved by natural language queries or integrated with chatbot systems. As usual, take it as a starting point. The code example is trivial and lacks any error handling, but it provides the building blocks to work from. My example builds on your existing knowledge base and leverages LlamaIndex and the power of Large Language Models (in this case, GPT-3.5 Turbo from OpenAI).

Why LlamaIndex? Well, created by Jerry Liu, LlamaIndex is a robust open-source resource that empowers you to organize and search your data for a variety of applications, including answering questions, summarizing information, or serving as part of a chatbot system. It provides data connectors to ingest your existing data sources in many different formats (such as text files, PDF, docs, SQL, etc.). It then allows you to structure your data (via indices or graphs) so that this data can be easily used with LLMs. In many ways, it is similar to LangChain but more focused on data storage and retrieval instead of automated AI agents.

In short, this article will show you how, with just a few lines of code, you can index your enterprise's knowledge base and then have the ability to query and retrieve information from GPT-3.5 Turbo, with your own knowledge base layered on top, in the most natural way: plain English.

Logic diagram: Creating the Index; Retrieving the knowledge

Prerequisites

Make sure you check these points before you start writing the code:

- Store your OpenAI API key in a local environment variable for secure and efficient access. The code works on the assumption that the API key is stored in your local environment (OPENAI_API_KEY).
- I've used Python v3.11. If you're running an older version, an update is recommended to make sure you don't run into any compatibility issues.
- Install the requirements:

```
pip install openai
pip install llama-index
```

- Create a subfolder in your .py file's location (in my example, the name of the subfolder is 'stories'). You will store your knowledge base in .TXT files in that location.
If your knowledge articles are in different formats (e.g., PDF or DOCX), you will have to do one of the following:

- Change the code to use a different LlamaIndex data connector (https://gpt-index.readthedocs.io/en/latest/reference/readers.html); this is my recommended solution, and a short sketch follows below; or
- Convert all your documents to .TXT format and use the code as it is.

For my demo, I have created (with the help of GPT-4) three fictional stories that will represent our proprietary 'knowledge' base. Your 'stories' folder should now contain three text files: Story1.txt, Story2.txt, and Story3.txt.
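As an illustration of the first option above, here is a hedged sketch of loading PDF files through a LlamaHub data connector instead of plain .TXT files. The download_loader helper comes from the same generation of llama_index used in this article's scripts; treat the 'PDFReader' loader name, its load_data signature, and the sample file path as assumptions to verify against the readers documentation linked above:

from pathlib import Path
from llama_index import download_loader

# Fetch the PDF connector from LlamaHub (loader name assumed; see the readers docs)
PDFReader = download_loader('PDFReader')
loader = PDFReader()

# Load a single PDF into LlamaIndex Document objects (this file path is hypothetical)
documents = loader.load_data(file=Path('stories/Story1.pdf'))

The resulting documents list can then be passed to GPTVectorStoreIndex.from_documents() exactly as in the indexing script below.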
The Code

For the sake of simplicity, I've split the functionality into two different scripts:

- Index_stories.py (responsible for reading the 'stories' folder, creating an index, and saving it for later queries)
- Query_stories.py (demonstrating how to query GPT 3.5 and then filter the AI response through our own knowledge base)

Let's begin with Index_stories.py:

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

# Load every document from the 'stories' directory
documents = SimpleDirectoryReader('stories').load_data()

# Construct a vector store index over the documents
index = GPTVectorStoreIndex.from_documents(documents)

# Save the index to disk so it can be reused later
index.storage_context.persist()

As you can see, the code uses SimpleDirectoryReader from LlamaIndex to read all .TXT files from the 'stories' folder. It then creates a simple vector index that can later be used to run queries over the content of these documents.

In case you're wondering what a vector index represents, imagine you're in a library with thousands of books and you're looking for a specific one. Instead of going through each book one by one, the index acts like a library catalog: it helps you find the book you're looking for quickly. In the context of this code, GPTVectorStoreIndex is that catalog. It organizes pieces of information (like documents or stories) so that, when you ask a question, it can look through everything it has and find the most relevant answer, like a super-efficient librarian who knows exactly where everything is.

The last line of the code saves the index in a sub-folder called 'storage' so that we do not have to recreate it every time and can reuse it in the future.

Now, for the querying part. Here's the second script, Query_stories.py:

from llama_index import StorageContext, load_index_from_storage
import openai
import os

openai.api_key = os.getenv('OPENAI_API_KEY')

def prompt_chatGPT(task):
    # Ask GPT 3.5 Turbo without any access to our knowledge base
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": task}
        ]
    )
    AI_response = response['choices'][0]['message']['content'].strip()
    return AI_response

# Rebuild the storage context from the 'storage' sub-folder
storage_context = StorageContext.from_defaults(persist_dir="storage")

# Load the previously saved index
index = load_index_from_storage(storage_context)

# Query GPT 3.5 Turbo directly
prompt = "Tell me how Tortellini Macaroni's brother managed to conquer Rome."
answer = prompt_chatGPT(prompt)
print('Original AI answer: ' + answer + '\n\n')

# Refine the answer in the context of our knowledge base
query_engine = index.as_query_engine()
response = query_engine.query(f'The answer to the following prompt: "{prompt}" is: "{answer}". If the answer is aligned with our knowledge, return the answer. Otherwise, return a corrected answer.')
print('Custom knowledge answer: ' + str(response))

How it works

After indexing the 'stories' folder, once you run the Query_stories.py script, the code first loads the index from the 'storage' sub-folder. It then prompts the GPT 3.5 Turbo model with a hard-coded question: "Tell me how Tortellini Macaroni's brother managed to conquer Rome." After the response is received, it queries our 'stories' to check whether the answer aligns with our 'knowledge'. You then receive two answers.

The first is the original answer from GPT 3.5 Turbo. As expected, the AI model identified Mr. Spaghetti as a potentially fictional character and could not find any historical references to him conquering Rome. The second answer, though, is checked against our 'knowledge' and, because our 'stories' contain different information, the answer is corrected accordingly.

If you've read the three GPT-4-created stories, you'll have noticed that Story1.txt mentions Biscotti as a fictional conqueror of Rome but not his brother, and Story2.txt mentions Tortellini and his farm adventures but no relationship with Biscotti. Only the third story (Story3.txt) describes the nature of their relationship. This shows not only that the vector index correctly recorded the knowledge from the individual stories but also that the query function provided a contextual response to our question.

In addition to the Vector Store Index, several other index types can be used, depending on the specific needs of your project:

- The List Index simply stores Nodes as a sequential chain, making it a straightforward and efficient choice for applications where the order of data matters and where you frequently need to access all the data in the order it was added; an example might be a timeline of events or a log of transactions, where you often want to retrieve all entries in chronological order.
- The Tree Index builds a hierarchical tree from a set of Nodes, which can be particularly useful when dealing with complex, nested data. For instance, if you're building a file system explorer, a tree index would be a good choice because files and directories naturally form a tree-like structure.
- The Keyword Table Index extracts keywords from each Node and builds a mapping from each keyword to the corresponding Nodes. This can be a powerful tool for text-based queries, allowing quick and precise retrieval of relevant information.

Each of these indexes offers unique advantages, and the choice between them depends on the nature of your data and the specific requirements of your use case; a short sketch of constructing these alternatives follows below.
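To make those alternatives concrete, here is a hedged sketch that builds each index type from the same 'stories' documents. The GPT-prefixed class names match the generation of llama_index used in this article's scripts; treat them as assumptions and verify against the documentation for your installed version:

from llama_index import GPTListIndex, GPTTreeIndex, GPTKeywordTableIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader('stories').load_data()

# Sequential chain of Nodes: efficient when insertion order matters
list_index = GPTListIndex.from_documents(documents)

# Hierarchical tree of Nodes: suited to complex, nested data
tree_index = GPTTreeIndex.from_documents(documents)

# Keyword-to-Node mapping: precise keyword-driven retrieval
keyword_index = GPTKeywordTableIndex.from_documents(documents)

# All three expose the same query interface as the vector index
print(list_index.as_query_engine().query('Who conquered Rome?'))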
Conclusion

Now, think about the possibilities. Instead of fictional stories, we could index a collection of our standard operating procedures, process descriptions, knowledge articles, disaster recovery plans, change schedules, and so on. Or, as another example, we could build a chatbot that answers generic user requests from GPT 3.5's own knowledge but forwards more specific issues (indexed from our knowledge base) to a support team. This brings enormous potential for automating business processes and improving decision-making: you get the best of both worlds, the power of Large Language Models combined with the value of your own knowledge base.

Security considerations

Working on this article made me realize that we cannot really trust our interactions with an AI model unless we are in full control of the entire technology stack. Just because the interface looks familiar, it doesn't necessarily mean that bad actors cannot compromise the integrity of the data by injecting false or censored responses to our queries. But that's a story for another time!

Final note

This article barely scratches the surface of the full capabilities of LlamaIndex. It is not meant to be a comprehensive guide to the topic but rather a starting point for integrating AI technologies into our day-to-day business processes. I encourage you to study LlamaIndex's documentation in depth (https://gpt-index.readthedocs.io/en/latest/) if you want to take advantage of its full capabilities.

About the Author

Andrei Gheorghiu is an experienced trainer with a passion for helping learners achieve their maximum potential. He always strives to bring a high level of expertise and empathy to his teaching.

With a background in IT audit, information security, and IT service management, Andrei has delivered training to over 10,000 students across different industries and countries. He is also a Certified Information Systems Security Professional and Certified Information Systems Auditor, with a keen interest in digital domains like Security Management and Artificial Intelligence.

In his free time, Andrei enjoys trail running, photography, video editing, and exploring the latest developments in technology.

You can connect with Andrei on:
LinkedIn: https://www.linkedin.com/in/gheorghiu/
Twitter: https://twitter.com/aqg8017