
How-To Tutorials


AI_Distilled #21: MLAgentBench as AI Research Agents, OpenAI’s Python SDK and AI Chip, AMD Acquires Nod.ai, IBM Enhances PyTorch for AI Inference, Microsoft to Tackle GPU Shortage

Merlyn Shelley
13 Oct 2023
12 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!👋 Hello ,“Scientific experimentation involves an iterative process of creating hypotheses, designing experiments, running experiments, and analyzing the results. Can we build AI research agents to perform these long-horizon tasks? To take a step towards building and evaluating research agents on such open-ended decision-making tasks -- we propose MLAgentBench, a suite of ML tasks for benchmarking AI research agents.” - from the paper Benchmarking Large Language Models as AI Research Agents (arXivLabs, Oct 2023), proposed by Qian Huang, Jian Vora, Percy Liang, Jure Leskovec. Stanford University researchers are addressing the challenge of evaluating AI research agents with free-form decision-making abilities through MLAgentBench, a pioneering benchmark. This framework provides research tasks with task descriptions and required files, allowing AI agents to mimic human researchers' actions like reading, writing, and running code. The evaluation assesses proficiency, reasoning, research process, and efficiency.Welcome to AI_Distilled #21, your weekly source for the latest breakthroughs in AI, ML, GPT, and LLM. In this edition, we’ll talk about Microsoft and Google introducing new AI initiatives for healthcare, OpenAI unveiling the beta version of Python SDK for enhanced API access, IBM’s enhancement of PyTorch for AI inference, targeting enterprise deployment, and AMD working on enhancing its AI capabilities with the acquisition of Nod.ai and getting a quick look at OpenAI’s ambitious new ventures in AI chipmaking to tackle the global chip shortage. We know how much you love our curated collection of AI tutorials and secret knowledge. We’ve packed some great knowledge resources in this issue covering recent advances in enhancing content safety with Azure ML, understanding autonomous agents for problem solving with LLMs, and enhancing code quality and security with Generative AI, Amazon Bedrock, and CodeGuru. 📥 Feedback on the Weekly EditionWhat do you think of this issue and our newsletter?Please consider taking the short survey below to share your thoughts and you will get a free PDF of the “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  Cheers,  Merlyn Shelley  Editor-in-Chief, Packt  ⚡ TechWave: AI/GPT News & AnalysisMicrosoft and Google Introduce New Gen AI Initiatives for Healthcare: Microsoft and Alphabet's Google have unveiled separate AI initiatives to assist healthcare organizations in improving data access and information management. Google's project, powered by Google Cloud, aims to simplify the retrieval of patient data, including test results and prescriptions, in one central location. It also intends to help healthcare professionals with administrative tasks that often lead to work overload and burnout. Meanwhile, Microsoft's initiative is focused on enabling healthcare entities to efficiently aggregate data from various doctors and hospitals, eliminating the time-consuming search for information.  OpenAI Mulls Chip Independence Due to Rising Costs: OpenAI, known for its ChatGPT AI model, is considering developing its own AI chips due to the growing costs of using Nvidia's hardware. 
Each ChatGPT query costs OpenAI around 4 cents, and the company reportedly spends $700,000 daily to run ChatGPT. Nvidia accounts for over 70% of AI chip sales but is becoming costly for OpenAI. The organization has been in discussions about making its own chips but has not made a final decision. Microsoft is also exploring in-house chip development, potentially competing with Nvidia's H100 GPU. OpenAI may remain dependent on Nvidia for the time being. Microsoft May Unveil AI Chip at Ignite 2023 to Tackle GPU Shortage: Microsoft is considering debuting its own AI chip at the upcoming Ignite 2023 conference due to the high demand for GPUs, with NVIDIA struggling to meet this demand. The chip would be utilized in Microsoft's data center servers and to enhance AI capabilities within its productivity apps. This move reflects Microsoft's commitment to advancing AI technology following a substantial investment in OpenAI. While Microsoft plans to continue purchasing NVIDIA GPUs, the development of its own AI chip could increase profitability and competitiveness with tech giants like Amazon and Google, who already use their custom AI chips. OpenAI Unveils Beta Version of Python SDK for Enhanced API Access: OpenAI has released a beta version of its Python SDK, aiming to improve access to the OpenAI API for Python developers. This Python library simplifies interactions with the OpenAI API for Python-based applications, providing an opportunity for early testing and feedback ahead of the official version 1.0 launch. The SDK streamlines integration by offering pre-defined classes for API resources and ensuring compatibility across different API versions. OpenAI encourages developers to explore the beta version, share feedback, and shape the final release. The library supports various tasks, including chat completions, text model completions, embeddings, fine-tuning, moderation, image generation, and audio functions.  IBM Enhances PyTorch for AI Inference, Targeting Enterprise Deployment: IBM is expanding the capabilities of the PyTorch machine learning framework beyond model training to AI inference. The goal is to provide a robust, open-source alternative for inference that can operate on multiple vendor technologies and both GPUs and CPUs. IBM's efforts involve combining three techniques within PyTorch: graph fusion, kernel optimizations, and parallel tensors to speed up inference. Using these optimizations, they achieved impressive inference speeds of 29 milliseconds per token for a large language model with 70 billion parameters. While these efforts are not yet ready for production, IBM aims to contribute these improvements to the PyTorch project for future deployment, making PyTorch more enterprise-ready. AMD Enhances AI Capabilities with Acquisition of Nod.ai: AMD has announced its intention to acquire Nod.ai, a startup focused on optimizing AI software for high-performance hardware. This acquisition underlines AMD's commitment to the rapidly expanding AI chip market, which is projected to reach $383.7 billion by 2032. Nod.ai's software, including the SHARK Machine Learning Distribution, will accelerate the deployment of AI models on platforms utilizing AMD's architecture. By integrating Nod.ai's technology, AMD aims to offer open software solutions to facilitate the deployment of highly performant AI models, thereby enhancing its presence in the AI industry.   
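The IBM item above mentions combining graph fusion and kernel optimizations inside PyTorch to speed up inference. As a rough, hedged illustration of what stock PyTorch already offers along these lines (this is not IBM's internal stack, and the toy model below is purely illustrative), the torch.compile path in PyTorch 2.x captures the model graph and fuses operations where it can:

```python
import torch
import torch.nn as nn

# Small stand-in network; IBM's results target 70B-parameter LLMs, but the
# mechanism shown here is the stock PyTorch 2.x graph-capture path.
model = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512)).eval()

compiled = torch.compile(model)  # traces the graph and fuses operations where possible

with torch.inference_mode():
    out = compiled(torch.randn(8, 512))  # first call compiles; later calls reuse the fused graph
print(out.shape)
```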
🔮 Expert Insights from Packt Community

Machine Learning Engineering with MLflow - By Natu Lauchande

Developing your first model with MLflow

For the sake of simplicity, in this section we will use the built-in sample datasets in sklearn, the ML library that we will use initially to explore MLflow features. We will choose the famous Iris dataset to train a multi-class classifier using MLflow. The Iris dataset (one of sklearn's built-in datasets, available from https://scikit-learn.org/stable/datasets/toy_dataset.html) contains the following features: sepal length, sepal width, petal length, and petal width. The target variable is the class of the iris: Iris Setosa, Iris Versicolour, or Iris Virginica.

Load the sample dataset:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split

dataset = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.4)
```

Next, let's train your model. Training a simple machine learning model with a framework such as scikit-learn involves instantiating an estimator such as LogisticRegression and calling the fit command to execute training over the Iris dataset built into scikit-learn:

```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(X_train, y_train)
```

The preceding lines of code are just a small portion of the ML engineering process. As will be demonstrated, a non-trivial amount of code needs to be created in order to productionize the preceding training code and make sure it is usable and reliable. One of the main objectives of MLflow is to aid in the process of setting up ML systems and projects. In the following sections, we will demonstrate how MLflow can be used to make your solutions robust and reliable.

Then, we will add MLflow. With a few more lines of code, you should be able to start your first MLflow interaction. In the following code listing, we start by importing the mlflow module, followed by the LogisticRegression class in scikit-learn. You can use the accompanying Jupyter notebook to run the next section:

```python
import mlflow
from sklearn.linear_model import LogisticRegression

mlflow.sklearn.autolog()
with mlflow.start_run():
    clf = LogisticRegression()
    clf.fit(X_train, y_train)
```

The mlflow.sklearn.autolog() instruction enables you to automatically log the experiment in the local directory. It captures the metrics produced by the underlying ML library in use. MLflow Tracking is the module responsible for handling metrics and logs. By default, the metadata of an MLflow run is stored in the local filesystem.

The above content is extracted from the book Machine Learning Engineering with MLflow, written by Natu Lauchande and published in Aug 2021. To get a glimpse of the book's contents, make sure to read the free chapter provided here, or if you want to unlock the full Packt digital library free for 7 days, try signing up now! To learn more, click on the button below.

Read through Chapter 1 unlocked here...

🌟 Secret Knowledge: AI/LLM Resources

Boosting Model Inference Speed with Quantization: In the realm of deploying deep learning models, efficiency is key. This post offers a primer on quantization, a technique that significantly enhances the inference speed of hosted language models. Quantization involves reducing the precision of the data types used for weights and activations, such as moving from 32-bit floating point to 8-bit integers.
While this may slightly affect model accuracy, the benefits are substantial: reduced memory usage, faster inference times, lower energy consumption, and the ability to deploy models on edge devices. The post explains the two common approaches to quantization, Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), helping you understand how to implement them effectively (a minimal PTQ sketch is included at the end of this issue).

Unlocking Database Queries with Text2SQL: A Historical Perspective and Current Advancements: In this post, you'll explore the evolution of Text2SQL, a technology that converts natural language queries into SQL for interacting with databases. Beginning with rule-based approaches in the 1960s, it has transitioned to machine learning-based models, and now LLMs like BERT and GPT have revolutionized it. Discover how LLMs enhance Text2SQL, the challenges it faces, and prominent products like Microsoft LayoutLM, Google TAPAS, Stanford Spider, and GuruSQL. Despite these challenges, Text2SQL holds great promise for making database querying more convenient and intelligent in practical applications.

Enhancing Content Safety with Azure ML: Learn how to ensure content safety in Azure ML when using LLMs. By setting up Azure AI Content Safety and establishing a connection within Prompt Flow, you'll scrutinize user input before directing it to the LLM. The article guides you through constructing the flow, including directing input to content safety, analyzing results, invoking the LLM, and consolidating the final output. With this approach, you can prevent unwanted responses from the LLM and ensure content safety throughout the interaction.

💡 Masterclass: AI/LLM Tutorials

Understanding Autonomous Agents for Problem Solving with LLMs: In this post, you'll explore the concept of autonomous LLM-based agents, how they interact with their environment, and the key modules that make up these agents, including the Planner, Reasoner, Actioner, Executor, Evaluator, and more. Learn how these agents use LLMs' inherent reasoning abilities and external tools to efficiently solve intricate problems while avoiding the limitations of fine-tuning.

Determining the Optimal Chunk Size for a RAG System with LlamaIndex: When working with retrieval-augmented generation (RAG) systems, selecting the right chunk size is a crucial factor affecting efficiency and accuracy. This post introduces LlamaIndex's Response Evaluation module, providing a step-by-step guide on how to find the ideal chunk size for your RAG system. Considering factors like relevance, granularity, and response generation time, the optimal balance is typically found around 1024 for a RAG system.

Understanding the Power of the Rouge Score in Model Evaluation: Evaluating the effectiveness of fine-tuned language models like the Mistral 7B Instruct model requires a reliable metric, and the Rouge Score is a valuable tool. This article provides a step-by-step guide on how to use the Rouge Score to compare fine-tuned and base language models effectively. The metric assesses the similarity of words generated by a model to reference words provided by humans using unigrams, bigrams, and n-grams. By mastering this metric, you'll be able to make informed decisions when choosing between different model versions for specific tasks.

Enhancing Code Quality and Security with Generative AI, Amazon Bedrock, and CodeGuru: In this post, you'll learn how to use Amazon CodeGuru Reviewer, Amazon Bedrock, and Generative AI to enhance the quality and security of your code.
Amazon CodeGuru Reviewer provides automated code analysis and recommendations, while Bedrock offers insights and code remediation. The post outlines a detailed solution involving CodeCommit, CodeGuru Reviewer, and Bedrock.

Exploring Generative AI with LangChain and OpenAI: Enhancing Amazon SageMaker Knowledge: In this post, the author illustrates the process of hosting a machine learning model with the Generative AI ecosystem, using LangChain, a Python framework that simplifies Generative AI applications, and OpenAI's LLMs. The goal is to see how well this solution can answer SageMaker-related questions, addressing the challenge of LLMs lacking access to specific and recent data sources.

🚀 HackHub: Trending AI Tools

leptonai/leptonai: A Python library for simplifying AI service creation, offering a Pythonic abstraction (Photon) for converting research code into a service, simplified model launching, prebuilt examples, and AI-specific features.

okuvshynov/slowllama: Enables developers to fine-tune Llama2 and CodeLlama models, including 70B/35B, on Apple M1/M2 devices or Nvidia GPUs, emphasizing fine-tuning without quantization.

yaohui-wyh/ctoc: A lightweight tool for analyzing codebases at the token level, which is crucial for understanding and managing the memory and conversation history of LLMs.

eric-ai-lab/minigpt-5: A model for interleaved vision-and-language generation using generative vokens to enable the simultaneous generation of images and textual narratives, particularly in the context of multimodal applications.
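To make the quantization item above concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch. It is not taken from the post being summarized; the toy model and its dimensions are purely illustrative, and a real deployment would apply this to a full language model and then benchmark accuracy and latency.

```python
import torch
import torch.nn as nn

# A toy stand-in for a much larger model; only the nn.Linear layers are quantized here.
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 10)).eval()

# Post-training dynamic quantization: Linear weights are stored as int8 and
# dequantized on the fly during matrix multiplication.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
print(model(x).shape, quantized(x).shape)  # same interface, smaller weight footprint
```

Quantization-aware training (QAT) takes a different path: fake-quantization modules are inserted before training so the model learns weights that survive the precision drop, which usually preserves more accuracy than PTQ at very low bit widths.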


Reducing Hallucinations with Intent Classification

Gabriele Venturi
13 Oct 2023
10 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionLanguage models (LLMs) are incredibly capable, but they are prone to hallucinating - generating convincing but completely incorrect or nonsensical outputs. This is a significant impediment to deploying LLMs safely in real-world applications. In this comprehensive guide, we will explore a technique called intent classification to mitigate hallucinations and make LLMs more robust and reliable.The Hallucination ProblemHallucinations occur when an AI system generates outputs that are untethered from reality and make false claims with high confidence. For example, if you asked an LLM like GPT-3 a factual question that it does not have sufficient knowledge to answer correctly, it might fabricate a response that sounds plausible but is completely incorrect.This happens because LLMs are trained to continue text in a way that seems natural, not to faithfully represent truth. Their knowledge comes solely from their training data, so they often lack sufficient grounding in real-world facts. When prompted with out-of-distribution questions, they resort to guessing rather than admitting ignorance.Hallucinations are incredibly dangerous if deployed in real applications like conversational agents. Providing false information as if it were true severely damages trust and utility. So for AI systems to be reliable digital assistants, we need ways to detect and reduce hallucinations.Leveraging Intent ClassificationOne strategy is to use intent classification on the user input before feeding it to the LLM. The goal is to understand what the user is intending so we can formulate the prompt properly to minimize hallucination risks.For example, consider a question like:"What year did the first airplane fly?"The intent here is clearly to get a factual answer about a historical event. An LLM may or may not know the answer. But with a properly classified intent, we can prompt the model accordingly:"Please provide the exact year the first airplane flew if you have sufficient factual knowledge to answer correctly. Otherwise respond that you do not know."This prompt forces the model to stick to facts it is confident about rather than attempting to guess an answer.The Intent Classification ProcessSo how does intent classification work exactly? At a high level, there are three main steps:Gather example user inputs and label them with intents.Train a classifier model on the labeled data.Run new user inputs through the classifier to predict intent labels.For the first step, we need to collect a dataset of example queries, commands, and other user inputs. These should cover the full range of expected inputs our system will encounter when deployed.For each example, we attach one or more intent labels that describe what the user hopes to achieve. Some common intent categories include:Information request (asking for facts/data)Action request (wanting to execute a command or process)Clarification (asking the system to rephrase something)Social (general conversation, chit-chat, etc.)Next, we use this labeled data to train an intent classification model. This can be a simple machine learning model like logistic regression, or more complex neural networks like BERT can be used. 
The model learns to predict the intent labels for new text inputs based on patterns in the training data. Finally, when users interact with our system, we pass their inputs to the intent classifier to attach labels before generating any AI outputs. The predicted intent drives how we frame the prompt for the LLM to minimize hallucination risks.

Sample Intents

Here are some examples of potential intent labels:

Information Request - Factual questions, asking for definitions, requesting data lookup, etc. "What is the capital of Vermont?" "What year was Julius Caesar born?"

Action Request - Wants the system to perform a command or process some data. "Can you book me a flight to Denver?" "Plot a scatter graph of these points."

Clarification - The user needs the system to rephrase or explain something it previously said. "Sorry, I don't understand. Can you rephrase that?" "What do you mean by TCP/IP?"

Social - Casual conversation, chit-chat, pleasantries. "How is your day going?" "What are your hobbies?"

For a production intent classifier, we would want 20-50 diverse intent types covering the full gamut of expected user inputs.

Building the Dataset

To train an accurate intent classifier, we need a dataset with at least a few hundred examples per intent class. Here are some best practices for building a robust training dataset:

Include diversity: Examples should cover the myriad ways users might express an intent. Use different wording, sentence structures, etc.
Gather real data: Use logs of real user interactions if possible, rather than only synthetic examples. Real queries contain nuances that are hard to fabricate.
Multilabel intents: Many queries have multiple intents. Label accordingly rather than forcing single labels.
Remove ambiguities: Any confusing or ambiguous examples should be discarded to avoid training confusion.
Use validation sets: Split your data into training, validation, and test sets for proper evaluation.
Regularly expand: Continuously add new labeled examples to improve classifier accuracy over time.

Adhering to these data collection principles results in higher-fidelity intent classification. Next, we'll cover how to implement an intent classifier in Python.

Implementing the Intent Classifier

For this example, we'll build a simple scikit-learn classifier to predict two intents - Information Request and Action Request. Here is a sample of labeled training data with 50 examples for each intent:

```python
# Sample labeled intent data
import pandas as pd

data = [
    {'text': 'What is the population of France?', 'intent': 'Information Request'},
    {'text': 'How tall is the Eiffel Tower?', 'intent': 'Information Request'},
    # ...
    {'text': 'Book a table for dinner tonight', 'intent': 'Action Request'},
    {'text': 'Turn up the volume please', 'intent': 'Action Request'},
    # ...
]

df = pd.DataFrame(data)
```

We'll use a CountVectorizer and a Tf-Idf transformer to extract features from the text data. Then we'll train a simple Logistic Regression classifier on this:

```python
# Extract features from text data
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

count_vect = CountVectorizer()
count_vect.fit(df['text'])
counts = count_vect.transform(df['text'])

tfidf_transformer = TfidfTransformer()
tfidf = tfidf_transformer.fit_transform(counts)

# Train classifier model
from sklearn.linear_model import LogisticRegression

X_train = tfidf
y_train = df['intent']

model = LogisticRegression()
model.fit(X_train, y_train)
```

Now we can make predictions on new text inputs:

```python
# Make predictions on new texts
texts = ['What year was Napoleon Bonaparte born?', 'Play some music please']

counts = count_vect.transform(texts)
tfidf = tfidf_transformer.transform(counts)

predictions = model.predict(tfidf)
print(predictions)
# Output: ['Information Request', 'Action Request']
```

And that's it! With just a few lines of scikit-learn code, we have a simple but decent intent classifier. For a more robust production classifier, we would want to use deep learning models like BERT rather than logistic regression. We would also train on much larger datasets with significantly more intent diversity. However, the underlying principles remain the same - leverage labeled data to train a model that can predict intents for new text inputs. Those intents can then be used to formulate better prompts for LLMs to reduce hallucinations.

Prompting with Intents

Now let's look at a few examples of how classified intents can guide how we prompt LLMs to curb hallucinations:

Information Request
User: "When did the first airplane fly?"
Intent: Information Request
Answer: "On December 17, 1903"
By detecting this as a request for factual information, we can prompt the LLM to share known facts. This prevents guessing at an answer that may be false. Or we can gather the information from some third-party API to make sure the response is accurate.

Action Request
User: "Can you book me a ticket to SF?"
Intent: Action Request
Answer: "I apologize, but I cannot do that. I am an AI assistant without direct access to online services."
Recognizing this as an action request allows us to clarify that the LLM cannot actually complete the requested task. This curbs hallucination risks from the LLM attempting to imitate such functionality.

Social
User: "How are you doing today?"
Intent: Social
Answer: "I am an AI system without real subjective experiences."
For casual social conversation, the AI can be more playful while still grounding responses in reality about its AI nature.

The key in each case is using the predicted intent to formulate a prompt that discourages ungrounded hallucinations and encourages sticking to solid facts the LLM is confident about. Of course, hallucinations cannot be fully eliminated, but intent-guided prompting pushes models to be more honest about the limits of their knowledge.
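To connect the classifier built earlier to the prompting examples above, here is a minimal sketch of intent-guided prompt construction. The template wording and the build_prompt helper are illustrative assumptions rather than part of the original article, and the sketch reuses the count_vect, tfidf_transformer, and model objects trained in the previous section.

```python
# Hypothetical prompt templates keyed by predicted intent -- illustrative only.
PROMPT_TEMPLATES = {
    'Information Request': (
        "Answer the question below only if you are confident the answer is factually correct. "
        "If you are not sure, reply exactly: 'I do not know.'\n\nQuestion: {query}"
    ),
    'Action Request': (
        "The user asked you to perform an action: {query}\n"
        "You cannot execute actions directly. Explain this and suggest what the user could do instead."
    ),
}

def build_prompt(query: str) -> str:
    """Classify the query with the model trained above, then fill in the matching template."""
    features = tfidf_transformer.transform(count_vect.transform([query]))
    intent = model.predict(features)[0]
    # Fall back to the cautious factual template for any intent without a template.
    template = PROMPT_TEMPLATES.get(intent, PROMPT_TEMPLATES['Information Request'])
    return template.format(query=query)

print(build_prompt("What year did the first airplane fly?"))
```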
Results and Impact

Studies have shown intent classification can significantly improve AI reliability by reducing false factual claims. In one experiment, hallucination rates for an LLM dropped from 19.8% to just 2.7% using a classifier trained on 100 intent types. Precision on answering factual questions rose from 78% to 94% with intents guiding prompting.

Beyond curbing hallucinations, intent classification also enables smarter response formulation in general:

Answering questions more accurately based on contextual understanding of the user's true information needs.
Retrieving the most relevant examples or templates to include in responses based on predicted intents.
Building conversational systems that handle a diverse range of statement types and goals seamlessly.

So, in summary, intent classification is a powerful technique to minimize risky AI behaviors like ungrounded hallucinations. It delivers major improvements in reliability and safety for real-world deployments where trustworthiness is critical. Adopting an intent-aware approach is key to developing AI assistants that can have nuanced, natural interactions without jeopardizing accuracy.

Conclusion

Hallucinations pose serious challenges as we expand real-world uses of large language models and conversational agents. Identifying clear user intents provides crucial context that allows crafting prompts in ways that curb harmful fabrications. This guide covered best practices for building robust intent classifiers, detailed their implementation in Python, and demonstrated impactful examples of reducing hallucinations through intent-guided prompting. Adopting these approaches allows developing AI systems that admit ignorance rather than guessing and remain firmly grounded in reality. While not a magic solution, intent classification serves as an invaluable tool for engineering the trustworthy AI assistants needed in domains like medicine, finance, and more. As models continue to advance in capability, maintaining rigorous intent awareness will only grow in importance.

Author Bio

Gabriele Venturi is a software engineer and entrepreneur who started coding at the young age of 12. Since then, he has launched several projects across gaming, travel, finance, and other spaces - contributing his technical skills to various startups across Europe over the past decade. Gabriele's true passion lies in leveraging AI advancements to simplify data analysis. This mission led him to create PandasAI, released open source in April 2023. PandasAI integrates large language models into the popular Python data analysis library Pandas. This enables an intuitive conversational interface for exploring data through natural language queries. By open-sourcing PandasAI, Gabriele aims to share the power of AI with the community and push the boundaries of conversational data analytics. He actively contributes as an open-source developer dedicated to advancing what's possible with generative AI.


Build your First RAG with Qdrant

Louis Owen
12 Oct 2023
10 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionLarge Language Models (LLM) have emerged as powerful tools for various tasks, including question-answering. However, as many are now aware, LLMs alone may not be suitable for the task of question-answering, primarily due to their limited access to up-to-date information, often resulting in incorrect or hallucinated responses. To overcome this limitation, one approach involves providing these LMs with verified facts and data. In this article, we'll explore a solution to this challenge and delve into the scalability aspect of improving question-answering using Qdrant, a vector similarity search engine and vector database.To address the limitations of LLMs, one approach is to provide known facts alongside queries. By doing so, LLMs can utilize the actual, verifiable information and generate more accurate responses. One of the latest breakthroughs in this field is the RAG model, a tripartite approach that seamlessly combines Retrieval, Augmentation, and Generation to enhance the quality and relevance of responses generated by AI systems.At the core of the RAG model lies the retrieval step. This initial phase involves the model searching external sources to gather relevant information. These sources can span a wide spectrum, encompassing databases, knowledge bases, sets of documents, or even search engine results. The primary objective here is to find valuable snippets or passages of text that contain information related to the given input or prompt.The retrieval process is a vital foundation upon which RAG's capabilities are built. It allows the model to extend its knowledge beyond what is hardcoded or pre-trained, tapping into a vast reservoir of real-time or context-specific information. By accessing external sources, the model ensures that it remains up-to-date and informed, a critical aspect in a world where information changes rapidly.Once the retrieval step is complete, the RAG model takes a critical leap forward by moving to the augmentation phase. During this step, the retrieved information is seamlessly integrated with the original input or prompt. This fusion of external knowledge with the initial context enriches the pool of information available to the model for generating responses.Augmentation plays a pivotal role in enhancing the quality and depth of the generated responses. By incorporating external knowledge, the model becomes capable of providing more informed and accurate answers. This augmentation also aids in making the model's responses more contextually appropriate and relevant, as it now possesses a broader understanding of the topic at hand.The final step in the RAG model's process is the generation phase. Armed with both the retrieved external information and the original input, the model sets out to craft a response that is not only accurate but also contextually rich. This last step ensures that the model can produce responses that are deeply rooted in the information it has acquired.By drawing on this additional context, the model can generate responses that are more contextually appropriate and relevant. This is a significant departure from traditional AI models that rely solely on pre-trained data and fixed knowledge. 
The generation phase of RAG represents a crucial advance in AI capabilities, resulting in more informed and human-like responses.

To summarize, RAG can be utilized for the question-answering task by following a multi-step pipeline that starts with a set of documentation. These documents are converted into embeddings, essentially numerical representations, and then subjected to similarity search when a query is presented. The top N most similar document embeddings are retrieved, and the corresponding documents are selected. These documents, along with the query, are then passed to the LLM, which generates a comprehensive answer.

This approach improves the quality of question-answering but depends on two crucial variables: the quality of the embeddings and the quality of the LLM itself. In this article, our focus will be on the former - enhancing the scalability of the embedding search process with Qdrant.

Qdrant, pronounced "quadrant," is a vector similarity search engine and vector database designed to address these challenges. It provides a production-ready service with a user-friendly API for storing, searching, and managing vectors. However, what sets Qdrant apart is its enhanced filtering support, making it a versatile tool for neural-network or semantic-based matching, faceted search, and various other applications. It is built using Rust, a programming language known for its speed and reliability even under high loads, making it an ideal choice for demanding applications. The benchmarks speak for themselves, showcasing Qdrant's impressive performance.

In the quest for improving the accuracy and scalability of question-answering systems, Qdrant stands out as a valuable ally. Its capabilities in vector similarity search, coupled with the power of Rust, make it a formidable tool for any application that demands efficient and accurate search operations. Without wasting any more time, let's take a deep breath, make yourselves comfortable, and be ready to learn how to build your first RAG with Qdrant!

Setting Up Qdrant

To get started with Qdrant, you have several installation options, each tailored to different preferences and use cases. In this guide, we'll explore the various installation methods, including Docker, building from source, the Python client, and deploying on Kubernetes.

Docker Installation

Docker is known for its simplicity and ease of use when it comes to deploying software, and Qdrant is no exception. Here's how you can get Qdrant up and running using Docker:

1. First, ensure that the Docker daemon is installed and running on your system. You can verify this with the following command:

```bash
sudo docker info
```

If the Docker daemon is not listed, start it to proceed. On Linux, running Docker commands typically requires sudo privileges. To run Docker commands without sudo, you can create a Docker group and add your users to it.

2. Pull the Qdrant Docker image from DockerHub:

```bash
docker pull qdrant/qdrant
```

3. Run the container, exposing port 6333 and specifying a directory for data storage:

```bash
docker run -p 6333:6333 -v $(pwd)/path/to/data:/qdrant/storage qdrant/qdrant
```

Building from Source

Building Qdrant from source is an option if you have specific requirements or prefer not to use Docker. Here's how to build Qdrant using Cargo, the Rust package manager. Before compiling, make sure you have the necessary libraries and the Rust toolchain installed. The current list of required libraries can be found in the Dockerfile.

Build Qdrant with Cargo:

```bash
cargo build --release --bin qdrant
```

After a successful build, you can find the binary at ./target/release/qdrant.

Python Client

In addition to the Qdrant service itself, there is a Python client that provides additional features compared to clients generated directly from OpenAPI. To install the Python client, you can use pip:

```bash
pip install qdrant-client
```

This client allows you to interact with Qdrant from your Python applications, enabling seamless integration and control.

Kubernetes Deployment

If you prefer to run Qdrant in a Kubernetes cluster, you can utilize a ready-made Helm chart. Here's how you can deploy Qdrant using Helm:

```bash
helm repo add qdrant https://qdrant.to/helm
helm install qdrant-release qdrant/qdrant
```

Building RAG with Qdrant and LangChain

Qdrant works seamlessly with LangChain; in fact, you can use Qdrant directly in LangChain through the `VectorDBQA` class! The first thing we need to do is gather all the documents that we want to use as the source of truth for our LLM. Let's say we store them in a list variable named `docs`. This `docs` variable is a list of strings, where each element of the list consists of chunks of paragraphs.

The next thing we need to do is generate the embeddings from the docs. For the sake of an example, we'll use a small model provided by the `sentence-transformers` package.

```python
from langchain.vectorstores import Qdrant
from langchain.embeddings import HuggingFaceEmbeddings

embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

qdrant_vec_store = Qdrant.from_texts(docs, embedding_model, host=QDRANT_HOST, api_key=QDRANT_API_KEY)
```

Once we have set up the embedding model and Qdrant, we can now move to the next part of RAG, which is augmentation and generation. To do that, we'll utilize the `VectorDBQA` class. This class will basically load some docs from Qdrant and then pass them into the LLM. Once the docs are passed, or augmented, the LLM will then do its job and analyze them to generate the answer to the given query. In this example, we'll use GPT-3.5-turbo provided by OpenAI.

```python
from langchain import OpenAI, VectorDBQA

llm = OpenAI(openai_api_key=OPENAI_API_KEY)
rag = VectorDBQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    vectorstore=qdrant_vec_store,
    return_source_documents=False,
)
```

The final thing to do is to test the pipeline by passing a query to the `rag` variable, and LangChain, supported by Qdrant, will handle the rest!

```python
rag.run(question)
```

Below are some examples of the answers generated by the LLM based on the provided documents using the Natural Questions dataset.

Conclusion

Congratulations on keeping up to this point! Throughout this article, you have learned what RAG is, how it can improve the quality of your question-answering model, how to scale the embedding search part of the pipeline with Qdrant, and how to build your first RAG with Qdrant and LangChain. Hope the best for your experiment in creating your first RAG, and see you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech.
Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.


Large Language Model Operations (LLMOps) in Action

Mostafa Ibrahim
11 Oct 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionIn an era dominated by the rise of artificial intelligence, the power and promise of Large Language Models (LLMs) stand distinct. These colossal architectures, designed to understand and generate human-like text, have revolutionized the realm of natural language processing. However, with great power comes great responsibility – the onus of managing, deploying, and refining these models in real-world scenarios. This article delves into the world of Large Language Model Operations (LLMOps), an emerging field that bridges the gap between the potential of LLMs and their practical application.BackgroundThe last decade has seen a significant evolution in language models, with models growing in size and capability. Starting with smaller models like Word2Vec and LSTM, we've advanced to behemoths like GPT-3, BERT, and T5.  With that said, as these models grew in size and complexity, so did their operational challenges. Deploying, maintaining, and updating these models requires substantial computational resources, expertise, and effective management strategies.MLOps vs LLMOpsIf you've ventured into the realm of machine learning, you've undoubtedly come across the term MLOps. MLOps, or Machine Learning Operations, encapsulates best practices and methodologies for deploying and maintaining machine learning models throughout their lifecycle. It caters to the wide spectrum of models that fall under the machine learning umbrella.On the other hand, with the growth of vast and intricate language models, a more specialized operational domain has emerged: LLMOps. While both MLOps and LLMOps share foundational principles, the latter specifically zeros in on the challenges and nuances of deploying and managing large-scale language models. Given the colossal size, data-intensive nature, and unique architecture of these models, LLMOps brings to the fore bespoke strategies and solutions that are fine-tuned to ensure the efficiency, efficacy, and sustainability of such linguistic powerhouses in real-world scenarios.Core Concepts of LLMOpsLarge Language Models Operations (LLMOps) focuses on the management, deployment, and optimization of large language models (LLMs). One of its foundational concepts is model deployment, emphasizing scalability to handle varied loads, reducing latency for real-time responses, and maintaining version control. As these LLMs demand significant computational resources, efficient resource management becomes pivotal. This includes the use of optimized hardware like GPUs and TPUs, effective memory optimization strategies, and techniques to manage computational costs.Continuous learning and updating, another core concept, revolve around fine-tuning models with new data, avoiding the pitfall of 'catastrophic forgetting', and effectively managing data streams for updates. Parallelly, LLMOps emphasizes the importance of continuous monitoring for performance, bias, fairness, and iterative feedback loops for model improvement. To cater to the vastness of LLMs, model compression techniques like pruning, quantization, and knowledge distillation become crucial.How do LLMOps workPre-training Model DevelopmentLarge Language Models typically start their journey through a process known as pre-training. This involves training the model on vast amounts of text data. 
The objective during this phase is to capture a broad understanding of language, learning from billions of sentences and paragraphs. This foundational knowledge helps the model grasp grammar, vocabulary, factual information, and even some level of reasoning.This massive-scale training is what makes them "large" and gives them a broad understanding of language. Optimization & CompressionModels trained to this extent are often so large that they become impractical for daily tasks.To make these models more manageable without compromising much on performance, techniques like model pruning, quantization, and knowledge distillation are employed.Model Pruning: After training, pruning is typically the first optimization step. This begins with trimming model weights and may advance to more intensive methods like neuron or channel pruning.Quantization: Following pruning, the model's weights, and potentially its activations, are streamlined. Though weight quantization is generally a post-training process, for deeper reductions, such as very low-bit quantization, one might adopt quantization-aware training from the beginning.Additional recommendations are:Optimizing the model specifically for the intended hardware can elevate its performance. Before initiating training, selecting inherently efficient architectures with fewer parameters is beneficial. Approaches that adopt parameter sharing or tensor factorization prove advantageous. For those planning to train a new model or fine-tune an existing one with an emphasis on sparsity, starting with sparse training is a prudent approach.Deployment Infrastructure After training and compressing our LLM, we will be using technologies like Docker and Kubernetes to deploy models scalably and consistently. This approach allows us to flexibly scale using as many pods as needed. Concluding the deployment process, we'll implement edge deployment strategies. This positions our models nearer to the end devices, proving crucial for applications that demand real-time responses.Continuous Monitoring & FeedbackThe process starts with the Active model in production. As it interacts with users and as language evolves, it can become less accurate, leading to the phase where the Model becomes stale as time passes.To address this, feedback and interactions from users are captured, forming a vast range of new data. Using this data, adjustments are made, resulting in a New fine-tuned model.As user interactions continue and the language landscape shifts, the current model is replaced with the new model. This iterative cycle of deployment, feedback, refinement, and replacement ensures the model always stays relevant and effective.Importance and Benefits of LLMOpsMuch like the operational paradigms of AIOps and MLOps, LLMOps brings a wealth of benefits to the table when managing Large Language Models.MaintenanceAs LLMs are computationally intensive. LLMOps streamlines their deployment, ensuring they run smoothly and responsively in real-time applications. This involves optimizing infrastructure, managing resources effectively, and ensuring that models can handle a wide variety of queries without hiccups.Consider the significant investment of effort, time, and resources required to maintain Large Language Models like Chat GPT, especially given its vast user base.Continuous ImprovementLLMOps emphasizes continuous learning, allowing LLMs to be updated with fresh data. 
This ensures that models remain relevant, accurate, and effective, adapting to the evolving nature of language and user needs.Building on the foundation of GPT-3, the newer GPT-4 model brings enhanced capabilities. Furthermore, while ChatGPT was previously trained on data up to 2021, it has now been updated to encompass information through 2022.It's important to recognize that constructing and sustaining large language models is an intricate endeavor, necessitating meticulous attention and planning.ConclusionThe ascent of Large Language Models marks a transformative phase in the evolution of machine learning. But it's not just about building them; it's about harnessing their power efficiently, ethically, and sustainably. LLMOps emerge as the linchpin, ensuring that these models not only serve their purpose but also evolve with the ever-changing dynamics of language and user needs. As we continue to innovate, the principles of LLMOps will undoubtedly play a pivotal role in shaping the future of language models and their place in our digital world.Author BioMostafa Ibrahim is a dedicated software engineer based in London, where he works in the dynamic field of Fintech. His professional journey is driven by a passion for cutting-edge technologies, particularly in the realms of machine learning and bioinformatics. When he's not immersed in coding or data analysis, Mostafa loves to travel.Medium


AutoGPT: A Game-Changer in AI Automation

Louis Owen
11 Oct 2023
9 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionIn recent years, we've witnessed a technological revolution in the field of artificial intelligence. One of the most groundbreaking developments has been the advent of Large Language Models (LLMs). Since the release of ChatGPT, people have been both shocked and excited by the capabilities of this AI.Countless experiments have been conducted to push the boundaries and explore the full potential of LLMs. Traditionally, these experiments have involved incorporating AI as part of a larger pipeline. However, what if we told you that the entire process could be automated by the AI itself? Imagine just setting the goal of a task and then sitting back and relaxing while the AI takes care of everything, from scraping websites for information to summarizing content and executing connected plugins. Fortunately, this vision is no longer a distant dream. Welcome to the world of AutoGPT!AutoGPT is an experimental open-source application that showcases the remarkable capabilities of the GPT-4 language model. This program, driven by GPT-4, connects the dots between LLM "thoughts" to autonomously achieve whatever goal you set. It represents one of the first examples of GPT-4 running fully autonomously, effectively pushing the boundaries of what is possible with AI.AutoGPT comes packed with an array of features that make it a game-changer in the world of AI automation. Let's take a closer look at what sets this revolutionary tool apart:Internet Access for Searches and Information Gathering: AutoGPT has the power to access the internet, making it a formidable tool for information gathering. Whether you need to research a topic, gather data, or fetch real-time information, AutoGPT can navigate the web effortlessly.Long-Term and Short-Term Memory Management: Just like a human, AutoGPT has memory. It can remember context and information from previous interactions, enabling it to provide more coherent and contextually relevant responses.GPT-4 Instances for Text Generation: With the might of GPT-4 behind it, AutoGPT can generate high-quality text that is coherent, contextually accurate, and tailored to your specific needs. Whether it's drafting an email, writing code, or crafting a compelling story, AutoGPT has got you covered.Access to Popular Websites and Platforms: AutoGPT can access popular websites and platforms, interacting with them just as a human user would. This opens up endless possibilities, from automating routine tasks on social media to retrieving data from web applications.File Storage and Summarization with GPT-3.5: AutoGPT doesn't just generate text; it also manages files and can summarize content using the GPT-3.5 model. This means it can help you organize and understand your data more efficiently.Extensibility with Plugins: AutoGPT is highly extensible, thanks to its plugin architecture. You can customize its functionality by adding plugins tailored to your specific needs. Whether it's automating tasks in your business or streamlining personal chores, plugins make AutoGPT endlessly adaptable. For more information regarding plugins, you can check the official repo.Throughout this article, we’ll learn how to install AutoGPT and run it on your local computer. Moreover, we’ll also learn how to utilize it to build your own personal investment valuation analyst! 
Without wasting any more time, let's take a deep breath, make yourselves comfortable, and be ready to learn all about AutoGPT!

Setting Up AutoGPT

Let's go through the process of setting up AutoGPT. Whether you choose to use Docker or Git, setting up AutoGPT is pretty straightforward. But before we delve into the technical details, let's start with the most crucial step: obtaining an API key from OpenAI.

Getting an API Key

To use AutoGPT effectively, you'll need an API key from OpenAI. You can obtain this key by visiting the OpenAI API Key page at https://platform.openai.com/account/api-keys. It's essential to note that for seamless operation and to prevent potential crashes, we recommend setting up a billing account with OpenAI. Free accounts come with limitations, allowing only three API calls per minute. A paid account ensures a smoother experience. You can set up a paid account by following these steps:

Go to "Manage Account."
Navigate to "Billing."
Click on "Overview."

Setting up AutoGPT with Docker

Before you begin, make sure you have Docker installed on your system. If you haven't installed Docker yet, you can find the installation instructions here. Now, let's start setting up AutoGPT with Docker.

1. Open your terminal or command prompt.

2. Create a project directory for AutoGPT. You can name it anything you like, but for this guide, we'll use "AutoGPT":

```bash
mkdir AutoGPT
cd AutoGPT
```

3. In your project directory, create a file called `docker-compose.yml` and populate it with the following contents:

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      - ./logs:/app/logs
```

This configuration file specifies the settings for your AutoGPT Docker container, including environment variables and volume mounts.

4. AutoGPT requires specific configuration files. You can find templates for these files in the AutoGPT repository. Create the necessary configuration files as needed.

5. Before running AutoGPT, pull the latest image from Docker Hub:

```bash
docker pull significantgravitas/auto-gpt
```

6. With Docker Compose configured and the image pulled, you can now run AutoGPT:

```bash
docker compose run --rm auto-gpt
```

This command launches AutoGPT inside a Docker container, and it's all set to perform its AI-powered magic.

Setting up AutoGPT with Git

If you prefer to set up AutoGPT using Git, here are the steps to follow:

1. Ensure that you have Git installed on your system. You can download it from https://git-scm.com/downloads.

2. Open your terminal or command prompt.

3. Clone the AutoGPT repository using Git:

```bash
git clone -b stable https://github.com/Significant-Gravitas/AutoGPT.git
```

4. Navigate to the directory where you downloaded the repository:

```bash
cd AutoGPT/autogpts/autogpt
```

5. Run the startup script.

a. On Linux/macOS:

```bash
./run.sh
```

b. On Windows:

```bash
.\run.bat
```

If you encounter errors, ensure that you have a compatible Python version installed and meet the requirements outlined in the documentation.

AutoGPT for Your Personal Investment Valuation Analyst

In our previous article, we explored the exciting use case of building a personal investment news analyst with an LLM. However, making sound investment decisions based solely on news articles is only one piece of the puzzle. To truly understand the potential of an investment, it's crucial to dive deeper into the financial health of the companies you're considering.
This involves analyzing financial statements, including balance sheets, income statements, and cash flow statements. Yet, the sheer volume of data within these documents can be overwhelming, especially for newbie retail investors.Let’s see how AutoGPT is in action! Once the AutoGPT is up, we’ll be shown a welcome message and it will ask us to give the name of our AI, the role, and also the goals that we want to achieve. In this case, we’ll give the name of AI as “Personal Investment Valuation Analyst”. As for the role and goals, please see the attached image below.After we input the role and the goals, our assistant will start planning all of the things that it needs to do. It will give some thoughts along with the reasoning before creating a plan. Sometimes it’ll also criticize itself with the aim to create a better plan. Once the plan is laid out, it will ask for confirmation from the user. If the user is satisfied with the plan, then they can give their approval by typing “y”.Then, AutoGPT will execute each of the planned tasks. For example, here, it is browsing through the internet with the “official source of Apple financial statements” query.Based on the result of the first task, it learned that it needs to visit the corporate website of Apple, visit the invertor relations page, and then search for the required documents, which are the balance sheet, cashflow statement, and income statement. Look at this! Pretty amazing, right?The process then continues by searching through the investor relations page on the Apple website as planned in the previous step. This process will continue until the goals are achieved, which is to give recommendations to the user on whether to buy, sell, or hold the Apple stock based on valuation analysis.ConclusionCongratulations on keeping up to this point! Throughout this article, you have learned what is AutoGPT, how to install and run it on your local computer, and how to utilize it as your personal investment valuation analyst. Hope the best for your experiment with AutoGPT and see you in the next article!Author BioLouis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

article-image-supercharge-your-business-applications-with-azure-openai

Supercharge Your Business Applications with Azure OpenAI

Aroh Shukla
10 Oct 2023
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionThe rapid advancement of technology, particularly in the domain of extensive language models like ChatGPT, is making waves across industries. These models leverage vast data resources and cloud computing power, gaining popularity not just among tech enthusiasts but also mainstream consumers. As a result, there's a growing demand for such experiences in daily tools, both from employees and customers who increasingly expect AI integration. Moreover, this technology promises transformative impacts. This article explores how organizations can harness Azure OpenAI with low-code platforms to leverage these advancements, opening doors to innovative applications and solutions.Introduction to the Power PlatformPower Platform is a Microsoft Low Code platform that spans Microsoft 365, Azure, Dynamics 365, and standalone apps.  a) Power Apps: Rapid low-code app development for businesses with a simple interface, scalable data platform, and cross-device compatibility.b) Power Automate: Automates workflows between apps, from simple tasks to enterprise-grade processes, accessible to users of all technical levels.c) Power BI: Delivers data insights through visualizations, scaling across organizations with built-in governance and security.d) Power Virtual Agents: Creates chatbots with a no-code interface, streamlining integration with other systems through Power Automate.e) Power Pages: Enterprise-grade, low-code SaaS for creating and hosting external-facing websites, offering customization and user-friendly design.Introduction to Azure OpenAIThe collaboration between Microsoft's Azure cloud platform and OpenAI, known as Azure OpenAI Open, presents an exciting opportunity for developers seeking to harness the capabilities of cutting-edge AI models and services. This collaboration facilitates the creation of innovative applications across a spectrum of domains, ranging from natural language processing to AI-powered solutions, all seamlessly integrated within the Azure ecosystem.The rapid pace of technological advancement, characterized by the proliferation of extensive language models like ChatGPT, has significantly altered the landscape of AI. These models, by tapping into vast data resources and leveraging the substantial computing power available in today's cloud infrastructure, have gained widespread popularity. Notably, this technological revolution has transcended the boundaries of tech enthusiasts and has become an integral part of mainstream consumer experiences.As a result, organizations can anticipate a growing demand for AI-driven functionalities within their daily tools and services. Employees seek to enhance their productivity with these advanced capabilities, while customers increasingly expect seamless integration of AI to improve the quality and efficiency of the services they receive.Beyond meeting these rising expectations, the collaboration between Azure and OpenAI offers the potential for transformative impacts. This partnership enables developers to create applications that can deliver tangible and meaningful outcomes, revolutionizing the way businesses operate and interact with their audiences. Azure OpenAI Prerequisites These prerequisites enable you to leverage Azure OpenAI's capabilities for your projects.1. Azure Account: Sign up for an Azure account free or paid: https://azure.microsoft.com/2. 
Azure Subscription: Acquire an active Azure subscription.
3. Azure OpenAI Request Form: Complete this form: https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOFA5Qk1UWDRBMjg0WFhPMkIzTzhKQ1dWNyQlQCN0PWcu
4. API Endpoint: Know your API endpoint for integration.
5. Azure SDKs: Familiarize yourself with the Azure SDKs for seamless development: https://docs.microsoft.com/azure/developer/python/azure-sdk-overview

Steps to supercharge your business applications with Azure OpenAI

Step 1: Azure OpenAI instance and keys

1. Create a new Azure OpenAI instance. Select a region that is available for Azure OpenAI and name the instance accordingly.
2. Once the Azure OpenAI instance is provisioned, select the Explore button.
3. Under Management, select Deployments and select Create new deployment.
4. Select gpt-35-turbo and give the deployment a meaningful name.
5. Under the playground, select the deployment and select View code.
6. In this sample code, copy the endpoint and key into a notepad file that you will use in the next steps.

Step 2: Create a Power Apps Canvas app

1. Create a Power Apps Canvas app with the Tablet format.
2. Add textboxes for questions, a button that will trigger Power Automate, and a gallery for output from Power Automate via the Azure OpenAI service.
3. Connect the flow by selecting the Power Automate icon, selecting Add flow, and selecting Create new flow.
4. Select a blank flow.

Step 3: Power Automate

1. In Power Automate, name the flow, then in the next step search for the HTTP action. Take note that the HTTP action is a Premium connector.
2. Next, configure the HTTP action as follows:
a. Method: POST
b. URL: You copied this at Step 1.6
c. Endpoint: You copied this at Step 1.6
d. Body: Follow the Microsoft Azure OpenAI documentation https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions
e. Select Add dynamic content, double-click on Ask in PowerApps, and a new parameter HTTP_Body is created
f. Use the Power Apps parameter.
3. In the next step, search for Compose, use Body as a parameter, and save the flow.
4. Run the flow.
5. Copy the outputs of the Compose action to Notepad. You will use it in the next steps.
6. In the next step, search for Parse JSON; in Add dynamic content locate Body and drag it to Content.
7. Select Generate from sample.
8. Paste the content that you copied in Step 3.5.
9. In the next step, search for the Response action; in Add dynamic content select Choices.
10. Save the flow.
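For reference, the request the HTTP action sends in step 2d above follows the Azure OpenAI chat completions format. The sketch below is only an illustration of what the call might look like for a gpt-35-turbo deployment; the resource name, deployment name, and api-version are placeholders and depend on what you created in Step 1, so confirm the exact payload against the documentation linked in step 2d.

POST https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-name>/chat/completions?api-version=2023-05-15

{
  "messages": [
    { "role": "system", "content": "You are a helpful business assistant." },
    { "role": "user", "content": "<the HTTP_Body value passed in from Power Apps>" }
  ],
  "max_tokens": 500,
  "temperature": 0.7
}

The api-key header of the HTTP action carries the key you copied in Step 1.6.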
Step 4: Wire up the Canvas app with Power Automate

1. Set variables at the button control.
2. Use a Gallery control to display output from Power Automate via Azure OpenAI.

Step 5: Test the App

1. Ask a question in the Power Apps user interface as shown below.
2. After a few seconds, we get a response that comes from Azure OpenAI.

Conclusion

You've successfully configured a comprehensive integration involving three key services to enhance your application's capabilities:

1. Power Apps: This service serves as the user-facing visual interface, providing an intuitive and user-friendly experience for end-users. With Power Apps, you can design and create interactive applications tailored to your specific needs, empowering users to interact with your system effortlessly.

2. Power Automate: For establishing seamless connectivity between your application and Azure OpenAI, Power Automate comes into play. It acts as the bridge that enables data and process flow between different services. With Power Automate, you can automate workflows, manage data, and trigger actions, ensuring a smooth interaction between your application and Azure OpenAI's services.

3. Azure OpenAI: Leveraging the capabilities of Azure OpenAI, particularly the ChatGPT 3.5 Turbo service, opens up a world of advanced natural language processing and AI capabilities. This service enables your application to understand and respond to user inputs, making it more intelligent and interactive. Whether it's for chatbots, text generation, or language understanding, Azure OpenAI's powerful tools can significantly enhance your application's functionality.

By integrating these three services seamlessly, you've created a robust ecosystem for your application. Users can interact with a visually appealing and user-friendly interface powered by Power Apps, while Power Automate ensures that data and processes flow smoothly behind the scenes. Azure OpenAI, with its advanced language capabilities, adds a layer of intelligence to your application, making it more responsive and capable of understanding and generating human-like text.

This integration not only improves user experiences but also opens up possibilities for more advanced and dynamic applications that can understand, process, and respond to natural language inputs effectively.

Source Code

You can rebuild the entire solution from my GitHub repo at https://github.com/aarohbits/AzureOpenAIWithPowerPlatform/blob/main/01%20AzureOpenAIPowerPlatform/Readme.md

Author Bio

Aroh Shukla is a Microsoft Most Valuable Professional (MVP) Alumni and a Microsoft Certified Trainer (MCT) with expertise in Power Platform and Azure. He assists customers from various industries in their digital transformation endeavors, crafting tailored solutions using advanced Microsoft technologies. He is not only dedicated to serving students and professionals on their Microsoft technology journeys but also excels as a community leader with strong interpersonal skills, active listening, and a genuine commitment to community progress.

He possesses a deep understanding of the Microsoft cloud platform and remains up-to-date with the latest innovations. His exceptional communication abilities enable him to effectively convey complex technical concepts with clarity and conciseness, making him a valuable resource in the tech community.
article-image-question-answering-in-langchain

Question Answering in LangChain

Mostafa Ibrahim
10 Oct 2023
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionImagine seamlessly processing vast amounts of data, posing any question, and receiving eloquently crafted answers in return. While Large Language Models like ChatGPT excel with general data, they falter when it comes to your private information—data you'd rather not broadcast to the world. Enter LangChain: it empowers us to harness any NLP model, refining it with our exclusive data.In this article, we'll explore LangChain, a framework designed for building applications with language models. We'll guide you through training a model, specifically the OpenAI Chat GPT, using your selected private data. While we provide a structured tutorial, feel free to adapt the steps based on your dataset and model preferences, as variations are expected and encouraged. Additionally, we'll offer various feature alternatives that you can incorporate throughout the tutorial.The Need for Privacy in Question AnsweringUndoubtedly, the confidentiality of personalized data is of absolute importance. While companies amass vast amounts of data daily, which offers them invaluable insights, it's crucial to safeguard this information. Disclosing such proprietary information to external entities could jeopardize the company's competitive edge and overall business integrity.How Does the Fine-Tuning LangChain Process WorkStep 1: Identifying The Appropriate Data SourceBefore selecting the right dataset, it's essential to ask a few preliminary questions. What specific topic are you aiming to inquire about? Is the data set sufficient? And such.Step 2: Integrating The Data With LangChainBased on the dataset's file format you have, you'll need to adopt different methods to effectively import the data into LangChain.Step 3: Splitting The Data Into ChunksTo ensure efficient data processing, it's crucial to divide the dataset into smaller segments, often referred to as chunks.Step 4: Transforming The Data Into EmbeddingsEmbedding is a technique where words or phrases from the vocabulary are mapped to vectors of real numbers. The idea behind embeddings is to capture the semantic meaning and relationships of words in a lower-dimensional space than the original representation.Step 5: Asking Queries To Our ModelFinally, after training our model on the updated documentation, we can directly query it for any information we require.Full LangChain ProcessDataSet UsedLangChain's versatility stems from its ability to process varied datasets. For our demonstration, we utilize the "Giskard Documentation", a comprehensive guide on the Giskard framework.Giskard is an open-source testing framework for Machine Learning models, spanning various Python model types. 
It automatically detects vulnerabilities in ML models, generates domain-specific tests, and integrates open-source QA best practices.

Having said that, LangChain can seamlessly integrate with a myriad of other data sources, be they textual, tabular, or even multimedia, expanding its use-case horizons.

Setting Up and Using LangChain for Private Question Answering

Step 1: Installing The Necessary Libraries

As with the first step of building any machine learning model, we have to set up our environment by installing the necessary libraries:

!pip install langchain
!pip install openai
!pip install pypdf
!pip install tiktoken
!pip install faiss-gpu

Step 2: Importing Necessary Libraries

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
import openai
import os

Step 3: Importing OpenAI API Key

os.environ['OPENAI_API_KEY'] = "Insert your OpenAI key here"

Step 4: Loading Our Data Set

LangChain offers the capability to load data in various formats. In this article, we'll focus on loading data in PDF format but will also touch upon other popular formats such as CSV and File Directory. For details on other file formats, please refer to the LangChain Documentation.

Loading PDF Data

We've compiled the Giskard AI tool's documentation into a PDF and subsequently partitioned the data.

loader = PyPDFLoader("/kaggle/input/giskard-documentation/Giskard Documentation.pdf")
pages = loader.load_and_split()

Below are the code snippets if you prefer to work with either CSV or File Directory file formats.

Loading CSV Data

from langchain.document_loaders.csv_loader import CSVLoader

loader = CSVLoader("Insert the path to your CSV dataset here")
data = loader.load()

Loading File Directory Data

from langchain.document_loaders import DirectoryLoader

loader = DirectoryLoader('../', glob="**/*.md")
docs = loader.load()

Step 5: Indexing The Dataset

We will be creating an index using FAISS (Facebook AI Similarity Search), which is a library developed by Facebook AI for efficiently searching similarities in large datasets, especially used with vectors from machine learning models.

We will be converting those documents into vector embeddings using OpenAIEmbeddings().
This indexed data can then be used for efficient similarity searches later on.faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())Here are some alternative indexing options you might consider.Indexing using Pineconeimport osimport pineconefrom langchain.schema import Documentfrom langchain.embeddings.openai import OpenAIEmbeddingsfrom langchain.vectorstores import Pinecone pinecone.init(    api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]) embeddings = OpenAIEmbeddings() pinecone.create_index("langchain-self-retriever-demo", dimension=1536)Indexing using Chromaimport os import getpass from langchain.schema import Document from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(docs, embeddings)Step 6: Asking The Model Some QuestionsThere are multiple methods by which we can retrieve our data from our model.Similarity SearchIn the context of large language models (LLMs) and natural language processing, similarity search is often about finding sentences, paragraphs, or documents that are semantically similar to a given sentence or piece of text.query = "What is Giskard?" docs = faiss_index.similarity_search(query) print(docs[0].page_content)Similarity Search Output: Why Giskard?Giskard is an open-source testing framework dedicated to ML models, covering any Python model, from tabular to LLMs.Testing Machine Learning applications can be tedious. Since ML models depend on data, testing scenarios depend on the domain specificities and are often infinite. Where to start testing? Which tests to implement? What issues to cover? How to implement the tests?At Giskard, we believe that Machine Learning needs its own testing framework. Created by ML engineers for ML engineers, Giskard enables you to:Scan your model to find dozens of hidden vulnerabilities: The Giskard scan automatically detects vulnerability issues such as performance bias, data leakage, unrobustness, spurious correlation, overconfidence, underconfidence, unethical issue, etc. Instantaneously generate domain-specific tests: Giskard automatically generates relevant tests based on the vulnerabilities detected by the scan. You can easily customize the tests depending on your use case by defining domain-specific data slicers and transformers as fixtures of your test suites.Leverage the Quality Assurance best practices of the open-source community: The Giskard catalog enables you to easily contribute and load data slicing & transformation functions such as AI-based detectors (toxicity, hate, etc.), generators (typos, paraphraser, etc.), or evaluators. Inspired by the Hugging Face philosophy, the aim of Giskard is to become.LLM Chainsmodel = OpenAI(model_name="gpt-3.5-turbo") my_chain = load_qa_with_sources_chain(model, chain_type="refine") query = "What is Giskard?" documents = faiss_index.similarity_search(query) result = my_chain({"input_documents": pages, "question": query})LLM Chain Output:Based on the additional context provided, Giskard is a Python package or library that provides tools for wrapping machine learning models, testing, debugging, and inspection. It supports models from various machine learning libraries such as HuggingFace, PyTorch, TensorFlow, or Scikit-learn. 
Giskard can handle classification, regression, and text generation tasks using tabular or text data.One notable feature of Giskard is the ability to upload models to the Giskard server. Uploading models to the server allows users to compare their models with others using a test suite, gather feedback from colleagues, debug models effectively in case of test failures, and develop new tests incorporating additional domain knowledge. This feature enables collaborative model evaluation and improvement. It is worth highlighting that the provided context mentions additional ML libraries, including Langchain, API REST, and LightGBM, but their specific integration with Giskard is not clearly defined.Sources:Giskard Documentation.pdfAPI Reference (for Dataset methods)Kaggle: /kaggle/input/giskard-documentation/Giskard Documentation.pdfConclusionLangChain effectively bridges the gap between advanced language models and the need for data privacy. Throughout this article, we have highlighted its capability to train models on private data, ensuring both insightful results and data security. One thing is for sure though, as AI continues to grow, tools like LangChain will be essential for balancing innovation with user trust.Author BioMostafa Ibrahim is a dedicated software engineer based in London, where he works in the dynamic field of Fintech. His professional journey is driven by a passion for cutting-edge technologies, particularly in the realms of machine learning and bioinformatics. When he's not immersed in coding or data analysis, Mostafa loves to travel.Medium

article-image-build-your-personal-assistant-with-agentgpt

Build your Personal Assistant with AgentGPT

Louis Owen
10 Oct 2023
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionIn a world where technology is progressing at an exponential rate, the concept of a personal assistant is no longer confined to high-profile executives with hectic schedules. Today, due to the incredible advancements in artificial intelligence (AI), each one of us has the chance to take advantage of a personal assistant's services, even for tasks that may have appeared beyond reach just a few years ago. Imagine having an entity that can aid you in conducting research, examining your daily financial expenditures, organizing your travel itinerary, and much more. This entity is known as AI and, more precisely, it is embodied in AgentGPT.You have likely heard of AI's incredible capabilities, ranging from diagnosing diseases to defeating world-class chess champions. While AI has undoubtedly made significant strides, here's the caveat: unless you possess technical expertise, devising the workflow to fully utilize AI's potential can be an intimidating endeavor. This is where the concepts of "tools" and "agents" become relevant, and AgentGPT excels in this domain.An "agent" is essentially the mastermind behind your AI assistant. It's the entity that “thinks”, strategizes, and determines how to achieve your objectives based on the available "tools." These "tools" represent the skills your agent possesses, such as web searching, code writing, generating images, retrieving knowledge from your personal data, and a myriad of other capabilities. Creating a seamless workflow where your agent utilizes these tools effectively is no simple task. It entails connecting the agent to the tools, managing errors that may arise, devising prompts to guide the agent, and more.Fortunately, there's a game-changer in the world of AI personal assistants, and it goes by the name of AgentGPT. Without wasting any more time, let’s take a deep breath, make yourselves comfortable, and be ready to learn how to utilize AgentGPT to build your personal assistant!What is AgentGPT?AgentGPT is an open-source project that streamlines the intricate process of creating and configuring AI personal assistants. This powerful tool enables you to deploy Autonomous AI agents, each equipped with distinct capabilities and skills. You can even name your AI, fostering a sense of personalization and relatability. With AgentGPT, you can assign your AI any mission you can conceive, and it will strive to accomplish it.The magic of AgentGPT lies in its ability to empower your AI agent to think, act, and learn. Here's how it operates:Select the Tools: You start by selecting the tools for the agent. It can be web searching, code writing, generating images, or even retrieving knowledge from your personal dataSetting the Goal: You then need to define the goal you want your AI to achieve. Whether it's conducting research, managing your finances, or planning your dream vacation, the choice is yours.Task Generation: Once the goal is set and the tools are selected, your AI agent "thinks" about the tasks required to accomplish it. This involves considering the available tools and formulating a plan of action.Task Execution: Your AI agent then proceeds to execute the tasks it has devised. 
This can include searching the web for information, performing calculations, generating content, and more.

Learning and Adaptation: As your AI agent carries out its tasks, it learns from the results. If something doesn't go as planned, it adapts its approach for the future, continuously improving its performance.

In a world where time is precious and efficiency is crucial, AgentGPT emerges as a ray of hope. It's a tool that empowers individuals from all walks of life to harness the might of AI to streamline their daily tasks, realize their goals, and amplify their productivity. Thus, whether you're a business professional seeking to optimize your daily operations or an inquisitive individual eager to explore the boundless possibilities of AI, AgentGPT stands ready to propel you into a new era of personalized assistance.

Initialize AgentGPT

To build your own personal assistant with AgentGPT, you can just follow these simple instructions. Or, you can simply go to the website and try the demo.

Open Your Terminal: You can usually access the terminal from a 'Terminal' tab or by using a shortcut.

Clone the Repository: Copy and paste the following command into your terminal and press Enter. This will clone the AgentGPT repository to your local machine.

a. For Mac/Linux users:

git clone https://github.com/reworkd/AgentGPT.git
cd AgentGPT
./setup.sh

b. For Windows users:

git clone https://github.com/reworkd/AgentGPT.git
cd AgentGPT
./setup.bat

Follow Setup Instructions: The setup script will guide you through the setup process. You'll need to add the appropriate API keys and other required information as instructed.

Access the Web Interface: Once all the services are up and running, you can access the AgentGPT web interface by opening your web browser and navigating to http://localhost:3000.

Build Your Own Assistant with AgentGPT

Let's start with an example of how to build your own assistant. First and foremost, let's select the tools for our agent. Here, we're selecting image generation, web search, and code writing as the tools. Once we finish selecting the tools, we can define the goal for our assistant. AgentGPT provides three templates for us:

ResearchGPT: Create a comprehensive report of the Nike company
TravelGPT: Plan a detailed trip to Hawaii
PlatformerGPT: Write some code to make a platformer game

Note that we can also create our own assistant name with a specific goal apart from these three templates. For now, let's select the PlatformerGPT template.

Once the goal is defined, the agent will generate all the tasks required to accomplish it. This involves considering the available tools and formulating a plan of action.

Then, based on the generated tasks, the agent will execute each task and learn from the results of each one.

This process will continue until the goal is achieved, or in this case, until the agent succeeds in writing the code for a platformer game. If something doesn't go as planned, it adapts its approach for the future, continuously improving its performance.

Conclusion

Congratulations on making it to this point! Throughout this article, you have learned what AgentGPT is capable of and how to build your own personal assistant with it. I wish you the best in creating your personal assistant, and see you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge.
Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

article-image-build-an-ai-based-personal-financial-advisor-with-langchain

Build an AI-based Personal Financial Advisor with LangChain

Louis Owen
09 Oct 2023
11 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionManaging personal finances is a cumbersome process. Receipts, bank statements, credit card bills, and expense records accumulate quickly. Despite our best intentions, many of us often struggle to maintain a clear overview of our financial health. It's not uncommon to overlook expenses or procrastinate on updating our budgets. Inevitably, this leads to financial surprises and missed opportunities for financial growth.Even when we diligently track our expenses, the data can be overwhelming. Analyzing every transaction to identify patterns, pinpoint areas for improvement, and set financial goals is no easy feat. It's challenging to answer questions like, "Am I spending too much on entertainment?" or "Is my investment portfolio well-balanced?" or "Should I cut back on dining out?" or "Do I need to limit socializing with friends?" and "Is it time to increase my investments?"Imagine having a personal assistant capable of automating these financial tasks, effortlessly transforming your transaction history into valuable insights. What if, at the end of each month, you received comprehensive financial analyses and actionable recommendations? Thanks to the rapid advancements in generative AI, this dream has become a reality, made possible by the incredible capabilities of LLM. No more endless hours of spreadsheet tinkering or agonizing over budgeting apps.In this article, we'll delve into the development of a personal financial advisor powered by LangChain. This virtual assistant will not only automate the tracking of your finances but also provide tailored recommendations based on your unique spending patterns and financial goals.Building an AI-powered personal financial advisor is an exciting endeavor. Here's an overview of how the personal financial advisor operates:Data Input: Users upload their personal transaction history, which includes details of income, expenses, investments, and savings.Data Processing: LangChain with LLM in the backend will process the data, categorize expenses, identify trends, and compare your financial activity with your goals and benchmarks.Financial Analysis: The advisor generates a detailed financial analysis report, highlighting key insights such as spending habits, saving potential, and investment performance.Actionable Recommendations: The advisor will also provide actionable recommendations for the user. It can suggest adjustments to your budget, recommend investment strategies, and even propose long-term financial plans.The benefits of having an AI-powered personal financial advisor are numerous:Time-Saving: No more tedious data entry and manual budget tracking. 
The advisor handles it all, giving you more time for what matters most in your life.Personalized Insights: The advisor tailors recommendations based on your unique financial situation, ensuring they align with your goals and aspirations.Financial Confidence: With regular updates and guidance, you gain a better understanding of your financial health and feel more in control of your money.Long-Term Planning: The advisor’s ability to provide insights into long-term financial planning ensures you're well-prepared for your future.Without wasting any more time, let’s take a deep breath, make yourselves comfortable, and be ready to learn how to build your AI-based personal financial advisor with LangChain!What is LangChain?LangChain is developed to harness the incredible potential of LLM, LangChain enables the creation of applications that are not only context-aware but also capable of reasoning, all while maintaining a user-friendly and modular approach.LangChain is more than just a framework; it's a paradigm shift in the world of language model-driven applications. Here's a closer look at what makes LangChain a transformative force:Context-Aware Applications: LangChain empowers applications to be context-aware. This means that these applications can connect to various sources of context, such as prompt instructions, few-shot examples, or existing content, to enhance the depth and relevance of their responses. Whether you're seeking answers or recommendations, LangChain ensures that responses are firmly grounded in context.Reasoning Abilities: One of LangChain's standout features is its ability to reason effectively. It leverages language models to not only understand context but also make informed decisions. These decisions can range from determining how to answer a given question based on the provided context to deciding what actions to take next. LangChain doesn't just provide answers; it understands the "why" behind them.Why LangChain?The power of LangChain lies in its value propositions, which make it an indispensable tool for developers and businesses looking to harness the potential of language models:Modular Components: LangChain offers a comprehensive set of abstractions for working with language models. These abstractions are not only powerful but also modular, allowing developers to work with them seamlessly, whether they're using the entire LangChain framework or not. This modularity simplifies the development process and promotes code reuse.Off-the-Shelf Chains: LangChain provides pre-built, structured assemblies of components known as "off-the-shelf chains." These chains are designed for specific high-level tasks, making it incredibly easy for developers to kickstart their projects. Whether you're a seasoned AI developer or a newcomer, these pre-configured chains save time and effort.Customization and Scalability: While off-the-shelf chains are fantastic for quick starts, LangChain doesn't restrict you. The framework allows for extensive customization, enabling developers to tailor existing chains to their unique requirements or even create entirely new ones. This flexibility ensures that LangChain can accommodate a wide range of applications, from simple chatbots to complex AI systems.LangChain isn't just a run-of-the-mill framework; it's a versatile toolkit designed to empower developers to create sophisticated language model-powered applications. At the heart of LangChain is a set of interconnected modules, each serving a unique purpose. 
These modules are the building blocks that make LangChain a powerhouse for AI application development.Model I/O: At the core of LangChain's capabilities is its ability to interact seamlessly with language models. This module facilitates communication with these models, enabling developers to leverage their natural language processing prowess effortlessly.Retrieval: LangChain recognizes that real-world applications require access to relevant data. The Retrieval module allows developers to integrate application-specific data sources into their projects, enhancing the context and richness of responses.Chains: Building upon the previous modules, Chains bring structure and order to the development process. Developers can create sequences of calls, orchestrating interactions with language models and data sources to achieve specific tasks or goals.Agents: Let chains choose which tools to use given high-level directives. Agents take the concept of automation to a new level. They allow chains to make intelligent decisions about which tools to employ based on high-level directives. This level of autonomy streamlines complex processes and enhances application efficiency.Memory: Memory is vital for continuity in applications. This module enables LangChain applications to remember and retain their state between runs of a chain, ensuring a seamless user experience and efficient data handling.Callbacks: Transparency and monitoring are critical aspects of application development. Callbacks provide a mechanism to log and stream intermediate steps of any chain, offering insights into the inner workings of the application and facilitating debugging.Building the Personal Financial AdvisorLet’s start building our personal financial advisor with LangChain! For the sake of simplicity, let’s consider only three data sources: monthly credit card statements, bank account statements, and cash expense logs. The following is an example of the data format for each of the sources. 
## Monthly Credit Card Statement Date: 2023-09-01 Description: Grocery Store Amount: $150.00 Balance: $2,850.00 Date: 2023-09-03 Description: Restaurant Dinner Amount: $50.00 Balance: $2,800.00 Date: 2023-09-10 Description: Gas Station Amount: $40.00 Balance: $2,760.00 Date: 2023-09-15 Description: Utility Bill Payment Amount: $100.00 Balance: $2,660.00 Date: 2023-09-20 Description: Salary Deposit Amount: $3,000.00 Balance: $5,660.00 Date: 2023-09-25 Description: Online Shopping Amount: $200.00 Balance: $5,460.00 Date: 2023-09-30 Description: Investment Portfolio Contribution Amount: $500.00 Balance: $4,960.00 ## Bank Account Statement Date: 2023-08-01 Description: Rent Payment Amount: $1,200.00 Balance: $2,800.00 Date: 2023-08-05 Description: Grocery Store Amount: $200.00 Balance: $2,600.00 Date: 2023-08-12 Description: Internet and Cable Bill Amount: $80.00 Balance: $2,520.00 Date: 2023-08-15 Description: Freelance Gig Income Amount: $700.00 Balance: $3,220.00 Date: 2023-08-21 Description: Dinner with Friends Amount: $80.00 Balance: $3,140.00 Date: 2023-08-25 Description: Savings Account Transfer Amount: $300.00 Balance: $3,440.00 Date: 2023-08-29 Description: Online Shopping Amount: $150.00 Balance: $3,290.00 ## Cash Expense Log Date: 2023-07-03 Description: Coffee Shop Amount: $5.00 Balance: $95.00 Date: 2023-07-10 Description: Movie Tickets Amount: $20.00 Balance: $75.00 Date: 2023-07-18 Description: Gym Membership Amount: $50.00 Balance: $25.00 Date: 2023-07-22 Description: Taxi Fare Amount: $30.00 Balance: -$5.00 (Negative balance indicates a debt) Date: 2023-07-28 Description: Bookstore Amount: $40.00 Balance: -$45.00 Date: 2023-07-30 Description: Cash Withdrawal Amount: $100.00 Balance: -$145.00To create our personal financial advisor, we’ll use the chat model interface provided by LangChain. There are several important components to build a chatbot with LangChain:`chat model`: Chat models are essential for creating conversational chatbots. These models are designed to generate human-like responses in a conversation. You can choose between chat models and LLMs (Large Language Models) depending on the tone and style you want for your chatbot. Chat models are well-suited for natural, interactive conversations.`prompt template`: Prompt templates help you construct prompts for your chatbot. They allow you to combine default messages, user input, chat history, and additional context to create meaningful and dynamic conversations. Using prompt templates makes it easier to generate responses that flow naturally in a conversation.`memory`: Memory in a chatbot context refers to the ability of the bot to remember information from previous parts of the conversation. This can be crucial for maintaining context and providing relevant responses. Memory types can vary depending on your use case, and they can include short-term and long-term memory.`retriever` (optional): Retrievers are components that help chatbots access domain-specific knowledge or retrieve information from external sources. If your chatbot needs to provide detailed, domain-specific information, a retriever can be a valuable addition to your system.First, we need to set the API key for our LLM. We’ll use OpenAI in this example.import os os.environ["OPENAI_API_KEY"] = “your openai key”Then, we can simply load the necessary chat modules from LangChain. 
from langchain.schema import ( AIMessage,    HumanMessage,    SystemMessage ) from langchain.chat_models import ChatOpenAIThe ChatOpenAI is the main class that connects with the OpenAI LLM. We can pass `HumanMessage` and `SystemMessage` to this class and it will return the response from the LLM in the type of `AIMessage`.chat = ChatOpenAI(model_name=”gpt-3.5-turbo”) messages = [SystemMessage(content=prompt),                    HumanMessage(content=data)] chat(messages)Let’s see the following example where we pass the prompt along with the data and the LLM returns the response via the ChatOpenAI object. Boom! We just got our first analysis and recommendation from our personal financial advisor. This is a very simple example of how to create our personal financial advisor. Of course, there’s still a lot of room for improvement. For example, currently, we need to pass manually the relevant data sources as the HumanMessage. However, as mentioned before, LangChain provides a built-in class to perform retrieval. This means that we can just create another script to automatically dump all of the relevant data into some sort of document or even database, and then LangChain can directly read the data directly from there. Hence, we can get automated reports every month without needing to manually input the relevant data.ConclusionCongratulations on keeping up to this point! Throughout this article, you have learned what is LangChain, what it is capable of, and how to build a personal financial advisor with LangChain. Hope the best for your experiment in creating your personal financial advisor and see you in the next article!Author BioLouis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects. Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

article-image-canva-plugin-for-chatgpt

Canva Plugin for ChatGPT

Sangita Mahala
09 Oct 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionIn the evolving world of digital creativity, the collaboration between Canva and ChatGPT ushers a new era. Canva is a popular graphic design platform that allows users to create a wide variety of visual content, such as social media posts, presentations, posters, videos, banners, and many more. Whereas ChatGPT is an extensive language model that is capable of writing many types of creative material like poems, stories, essays, and songs, generating code snippets, translating languages, and providing you with helpful answers to your queries.In this article, we examine the compelling reasons for embracing these two cutting-edge platforms and reveal the endless possibilities they offer.Why use Canva on ChatGPT?Using Canva and ChatGPT individually can be a great way to create content, but there are several benefits to using them.You can get the best of both platforms by integrating Canva on ChatGPT. The creativity and flexibility of ChatGPT are dynamic while the functionality and simplicity of Canva are user-friendly.You can optimize your workflow and save time and effort by integrating Canva with ChatGPT. When you submit your design query to ChatGPT, It will quickly start the process of locating and producing the best output within less time.You can get ideas and get creative by using Canva on ChatGPT. By altering the description or the parameters in ChatGPT, you can experiment with various options and styles for your graphic.How to use Canva on ChatGPT?Follow the below steps to get started for the Canva plugin:Step-1:To use GPT-4 and Canva Plugin you will need to upgrade to the Plus version. So for that go to the ChatGPT website and log in to your account. Then navigate to top of your screen, then you will be able to find the GPT-4 button.Step-2:Once clicked, then press the Upgrade to Plus button. On the Subscription page, enter your email address, payment method, and billing address. Click the Subscribe button. Once your payment has been processed, you will be upgraded to ChatGPT Plus. Step-3:Now, move to the “GPT-4” model and choose “Plugins” from the drop-down menu.Step-4:After that, you will be able to see “Plugin store” in which you can access different kinds of plugins and explore them.Step-5:Here, you must search “Canva” and click on the install button to download the plugin in ChatGPT.  Step-6:Once installed, make sure the “Canva” plugin is enabled via the drop-down menu.  Step-7:Now, go ahead and enter the prompt for the image, video, banner, poster, and presentation you wish to create. For example, you can ask ChatGPT to generate, “I'm performing a keynote speech presentation about advancements in Al technology. Create a futuristic, modern, and innovative presentation template for me to use” and it generated some impressive results within a minute.  Step-8:By clicking the link in ChatGPT's response you will be redirected toward the Canva editing page then you can customize the design, without even signing in. Once you are finished editing your visual content, you can download it from Canva and share it with others.So overall, you may utilize the Canva plugin in ChatGPT to quickly realize your ideas if you want to create an automated Instagram or YouTube channel with unique stuff. 
The user's engagement is minimal and effortless.Here are some specific examples of how you can use the Canva plugin on ChatGPT to create amazing content: Create presentations: Using your topic and audience, ChatGPT can generate presentation outlines for you. Once you have an outline, Canva can be used to make interactive and informative presentations.Generate social media posts: Using ChatGPT, you can come up with ideas for social media posts depending on your objectives and target audience. Once you have a few ideas, you may use Canva to make visually beautiful and interesting social media posts.Design marketing materials: You may utilize ChatGPT to come up with concepts for blog articles, infographics, and e-books, among other types of marketing materials. You may use Canva to create visually appealing and informative marketing materials.Make educational resources: ChatGPT can be used to create worksheets, flashcards, and lesson plans, among other types of educational materials. Once you've collected some resources, you can utilize Canva to make interesting and visually appealing educational materials.Things you must know about Canva on ChatGPTBe specific in your prompts. The more specific you are in your prompts, the better ChatGPT will be able to generate the type of visual content you want. Use words and phrases that are appropriate for your visual material. In order to come up with visual content ideas, ChatGPT searches for terms that are relevant to your prompt.Test out several templates and prompts. You may use Canva in a variety of ways on ChatGPT, so don't be hesitant to try out various prompts and templates to see what works best for you.Use ChatGPT's other features. ChatGPT can do more than just generate visual content. You can also use it to translate languages, write different kinds of creative content, and answer your questions in an informative way.ConclusionOverall, using Canva on ChatGPT has a number of advantages, including simplicity, strength, and adaptability. You can save a tonne of time and work by using the Canva plugin to create and update graphic material without using ChatGPT. With ChatGPT's AI capabilities, you can produce more inventive and interesting visual material than you could on your own. You also have a lot of versatility when generating visual material because to Canva's wide variety of templates and creative tools. So we got to know that, whether you are a content creator, a marketing manager, or a teacher, using the Canva plugin on ChatGPT can help you create amazing content that will engage the audience and help you to achieve your goals.Author BioSangita Mahala is a passionate IT professional with an outstanding track record, having an impressive array of certifications, including 12x Microsoft, 11x GCP, 2x Oracle, and LinkedIn Marketing Insider Certified. She is a Google Crowdsource Influencer and IBM champion learner gold. She also possesses extensive experience as a technical content writer and accomplished book blogger. She is always Committed to staying with emerging trends and technologies in the IT sector.
article-image-creating-openai-and-azure-openai-functions-in-power-bi-dataflows

Creating OpenAI and Azure OpenAI functions in Power BI dataflows

Greg Beaumont
09 Oct 2023
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!This article is an excerpt from the book, Power BI Machine Learning and OpenAI, by Greg Beaumont. Master core data architecture design concepts and Azure Data & AI services to gain a cloud data and AI architect’s perspective to developing end-to-end solutions IntroductionAs noted earlier, integrating OpenAI and Azure OpenAI with Power Query or dataflows currently requires custom M code. To facilitate this process, we have provided M code for both OpenAI and Azure OpenAI, giving you the flexibility to choose which version to use based on your specific needs and requirements.By leveraging this provided M code, you can seamlessly integrate OpenAI or Azure OpenAI with your existing Power BI solutions. This will allow you to take advantage of the unique features and capabilities offered by these powerful AI technologies, while also gaining insights and generating new content from your data with ease.OpenAI and Azure OpenAI functionsOpenAI offers a user-friendly API that can be easily accessed and utilized from within Power Query or dataflows in Power BI. For further information regarding the specifics of the API, we refer you to the official OpenAI documentation, available at this link: https://platform.openai.com/ docs/introduction/overview.It is worth noting that optimizing and tuning the OpenAI API will likely be a popular topic in the coming year. Various concepts, including prompt engineering, optimal token usage, fine-tuning, embeddings, plugins, and parameters that modify response creativity (such as temperature and top p), can all be tested and fine-tuned for optimal results.While these topics are complex and may be explored in greater detail in future works, this book will focus primarily on establishing connectivity between OpenAI and Power BI. Specifically, we will explore prompt engineering and token limits, which are key considerations that will be incorporated into the API call to ensure optimal performance:Prompts: Prompt engineering, in basic terms, is the English-language text that will be used to preface every API call. For example, instead of sending [Operator] and [Airplane] as values without context, text was added to the request in the previous chapter such that the API will receive Tell me about the airplane model [Aircraft] operated by [Operator] in three sentences:. The prompt adds context to the values passed to the OpenAI model.Tokens: Words sent to the OpenAI model get broken into chunks called tokens. Per the OpenAI website, a token contains about four English language characters. Reviewing the Remarks column in the Power BI dataset reveals that most entries have up to 2,000 characters. (2000 / 4) = 500, so you will specify 500 as the token limit. Is that the right number? You’d need to do extensive testing to answer that question, which goes beyond the scope of this book.Let’s get started with building your OpenAI and Azure OpenAI API calls for Power BI dataflows!Creating OpenAI and Azure OpenAI functions for Power BI dataflowsYou will create two functions for OpenAI in your dataflow named OpenAI. The only difference between the two will be the token limits. The purpose of having different token limits is primarily cost savings, since larger token limits could potentially run up a bigger bill. Follow these steps to create a new function named OpenAIshort:1.      Select Get data | Blank query.2.  
Paste in the following M code and select Next. Be sure to replace abc123xyz with your OpenAI API key.

Here is the code for the function. The code can also be found as 01 OpenAIshortFunction.M in the Packt GitHub repository at https://github.com/PacktPublishing/Unleashing-Your-Data-with-Power-BI-Machine-Learning-and-OpenAI/tree/main/Chapter-13:

let
    callOpenAI = (prompt as text) as text =>
    let
        jsonPayload = "{""prompt"": """ & prompt & """, ""max_tokens"": " & Text.From(120) & "}",
        url = "https://api.openai.com/v1/engines/text-davinci-003/completions",
        headers = [#"Content-Type"="application/json", #"Authorization"="Bearer abc123xyz"],
        response = Web.Contents(url, [Headers=headers, Content=Text.ToBinary(jsonPayload)]),
        jsonResponse = Json.Document(response),
        choices = jsonResponse[choices],
        text = choices{0}[text]
    in
        text
in
    callOpenAI

3. Now, you can rename the function OpenAIshort. Right-click on the function in the Queries panel and duplicate it. The new function will have a larger token limit.
4. Rename this new function OpenAIlong.
5. Right-click on OpenAIlong and select Advanced editor.
6. Change the section of code reading Text.From(120) to Text.From(500).
7. Click OK.

Your screen should now look like this:

Figure 13.1 – OpenAI functions added to a Power BI dataflow

These two functions can be used to complete the workshop for the remainder of this chapter. If you'd prefer to use Azure OpenAI, the M code for OpenAIshort would be as follows. Remember to replace PBI_OpenAI_project with your Azure resource name, davinci-PBIML with your deployment name, and abc123xyz with your API key:

let
    callAzureOpenAI = (prompt as text) as text =>
    let
        jsonPayload = "{""prompt"": """ & prompt & """, ""max_tokens"": " & Text.From(120) & "}",
        url = "https://" & "PBI_OpenAI_project" & ".openai.azure.com" & "/openai/deployments/" & "davinci-PBIML" & "/completions?api-version=2022-12-01",
        headers = [#"Content-Type"="application/json", #"api-key"="abc123xyz"],
        response = Web.Contents(url, [Headers=headers, Content=Text.ToBinary(jsonPayload)]),
        jsonResponse = Json.Document(response),
        choices = jsonResponse[choices],
        text = choices{0}[text]
    in
        text
in
    callAzureOpenAI

As with the previous example, changing the token limit for Text.From(120) to Text.From(500) is all you need to do to create an Azure OpenAI function for 500 tokens instead of 120. The M code to create the dataflows for your OpenAI functions can also be found on the Packt GitHub site at this link: https://github.com/PacktPublishing/Unleashing-Your-Data-with-Power-BI-Machine-Learning-and-OpenAI/tree/main/Chapter-13.

Now that you have your OpenAI and Azure OpenAI functions ready to go in a Power BI dataflow, you can test them out on the FAA Wildlife Strike data!

Conclusion

In conclusion, this article has provided valuable insights into integrating OpenAI and Azure OpenAI with Power BI dataflows using custom M code. By offering M code for both OpenAI and Azure OpenAI, it allows users to seamlessly incorporate these powerful AI technologies into their Power BI solutions. The article emphasizes the significance of prompt engineering and token limits in optimizing the OpenAI API. It also provides step-by-step instructions for creating functions with different token limits, enabling cost-effective customization.

With these functions in place, users can harness the capabilities of OpenAI and Azure OpenAI within Power BI, enhancing data analysis and content generation.
For further details and code references, you can explore the provided GitHub repository. Now, armed with these tools, you are ready to explore the potential of OpenAI and Azure OpenAI in your Power BI data projects.Author BioGreg Beaumont is a Data Architect at Microsoft; Greg is an expert in solving complex problems and creating value for customers. With a focus on the healthcare industry, Greg works closely with customers to plan enterprise analytics strategies, evaluate new tools and products, conduct training sessions and hackathons, and architect solutions that improve the quality of care and reduce costs. With years of experience in data architecture and a passion for innovation, Greg has a unique ability to identify and solve complex challenges. He is a trusted advisor to his customers and is always seeking new ways to drive progress and help organizations thrive. For more than 15 years, Greg has worked with healthcare customers who strive to improve patient outcomes and find opportunities for efficiencies. He is a veteran of the Microsoft data speaker network and has worked with hundreds of customers on their data management and analytics strategies.

Kickstarting your journey with Azure OpenAI

Shankar Narayanan
06 Oct 2023
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionArtificial intelligence is more than just a buzzword now. The transformative force of AI is driving innovation, reshaping industries, and opening up new world possibilities.At the forefront of this revolution stands Azure Open AI. It is a collaboration between Open AI and Microsoft's Azure cloud platform. For the business, developers, and AI enthusiasts, it is an exciting opportunity to harness the incredible potential of AI.Let us know how we can kick-start our journey with Azure Open AI.Understanding Azure Open AI Before diving into the technical details, it is imperative to understand the broader context of artificial intelligence and why Azure OpenAI is a game changer.From recommendation systems on any streaming platform like Netflix to voice-activated virtual assistants like Alexa or Siri, AI is everywhere around us.Businesses use artificial intelligence to enhance customer experiences, optimize operations, and gain a competitive edge. Here, Azure Open AI stands as a testament to the growing importance of artificial intelligence. The collaboration between Microsoft and Open AI brings the cloud computing prowess of Microsoft with the state-of-the-art AI model of Open AI. It results in the creation of AI-driven services and applications with remarkable ease.Core components of Azure Open AI One must understand the core components of Azure OpenAI.●  Azure cloud platformIt is a cloud computing platform of Microsoft. Azure offers various services, including databases, virtual machines, and AI services. One can achieve reliability, scalability, and security with Azure while making it ideal for AI deployment and development.●  Open AI's GPT modelsOpen AI's GPT models reside at the heart of Open AI models. The GPT3 is the language model that can understand context, generate human-like text, translate languages, write codes, or even answer questions. Such excellent capabilities open up many possibilities for businesses and developers.After getting an idea of Azure Open AI, here is how you can start your journey with Azure Open AI.Integrating Open AI API in Azure Open AI projects.To begin with the Azure Open AI journey, one has to access an Azure account. You can always sign up for a free account if you don't have one. It comes with generous free credits to help you get started.You can set up the Azure OpenAI environment only by following these steps as soon as you have your Azure account.●  Create a new projectYou can start creating new projects as you log into your Azure portal. This project would be the workspace for all your Azure Open AI endeavors. Make it a point to choose the configuration and services that perfectly align with the AI project requirements.Yes, Azure offers a wide range of services. Therefore, one can tailor their project to meet clients' specific needs.●  Access Open AI APIThe seamless integration with the powerfulAPI of Open AI is one of the standout features of Azure Open AI. To access, you must obtain an API key for the OpenAi platform. One has to go through these steps:Visit the Open AI platform website.Create an account if you don't have one, or simply sign-inAfter logging in, navigate to the API section to follow the instructions to get the API key.Ensure you keep your API key secure and safe since it grants access to Open AI language models. 
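One simple way to keep the key out of both your source code and your version control history is to read it from an environment variable. The short Python sketch below illustrates the idea; the variable name OPENAI_API_KEY is just a convention used here, not something the platform requires:

import os
import openai

# Read the key from an environment variable instead of hard-coding it
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this script.")

openai.api_key = api_key

The examples later in this article hard-code the key for brevity, but the same pattern applies to them.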
Also keep in mind that, based on usage, the API can incur charges.
● Integrate the OpenAI API
With the help of the API key, one can integrate the OpenAI API with the Azure project. Here is Python code demonstrating how to generate text using OpenAI's GPT-3 model:

import openai

openai.api_key = 'your_openai_api_key'

response = openai.Completion.create(
    engine="text-davinci-003",
    prompt="Once upon a time,",
    max_tokens=150
)

print(response.choices[0].text.strip())

This is a basic example, but it showcases the power and simplicity of integrating OpenAI into a project. One can utilize this capability to answer questions, generate content, and more, depending on the specific needs.
Best practices and tips for success
Azure OpenAI offers a powerful platform for AI development. However, success in artificial intelligence projects requires adherence to best practices and thoughtful approaches. Let us explore some tips that can help us navigate our journey effectively.
● Data quality
Every AI model relies on high-quality data. Hence, one must ensure that the input data is well structured, clean, and representative of the problem one is trying to solve. The quality of the data directly impacts the reliability and accuracy of AI applications.
● Experiment and iterate
The development process is iterative. Make sure you experiment with different prompts, tweak parameters, and try out new ideas. Each iteration brings valuable insights while moving you closer to your desired outcome.
● Optimize and regularly monitor
Machine learning models benefit from continuous optimization and monitoring. As you regularly evaluate your model's performance, you will be able to identify the areas that require improvement and fine-tune your approach accordingly. Such refinement ensures that your AI application stays effective and relevant over time.
● Stay updated with trends
Artificial intelligence is dynamic, with new research findings and innovative practices emerging regularly. One must stay updated with the latest trends and research papers and apply current best practices. Continuous learning will help you stay ahead in the ever-evolving world of artificial intelligence.
Real-life applications of Azure OpenAI
Here are some real-world applications of Azure OpenAI.
● AI-powered chatbots
One can utilize Azure OpenAI to create intelligent chatbots that streamline support services and enhance customer interaction.
Such chatbots provide a seamless user experience while understanding natural language.Integrating Open AI into a chatbot for natural language processing through codes:# Import necessary libraries for Azure Bot Service and OpenAI from flask import Flask, request from botbuilder.schema import Activity import openai # Initialize your OpenAI API key openai.api_key = 'your_openai_api_key' # Initialize the Flask app app = Flask(__name__) # Define a route for the chatbot @app.route("/api/messages", methods=["POST"]) def messages():    # Parse the incoming activity from the user    incoming_activity = request.get_json()    # Get user's message    user_message = incoming_activity["text"]    # Use OpenAI API to generate a response    response = openai.Completion.create(        engine="text-davinci-003",        prompt=f"User: {user_message}\nBot:",        max_tokens=150    )    # Extract the bot's response from OpenAI API response    bot_response = response.choices[0].text.strip()    # Create an activity for the bot's response    bot_activity = Activity(        type="message",        text=bot_response    )    # Return the bot's response activity    return bot_activity.serialize() # Run the Flask app if __name__ == "__main__":    app.run(port=3978)●  Content generationEvery content creator can harness this for generating creative content, drafting articles, or even brainstorming ideas. The ability of the AI model helps in understanding the context, ensuring that the generated content matches the desired tone and theme.Here is how one can integrate Azure OpenAI for content generation.# Function to generate blog content using OpenAI API def generate_blog_content(prompt):    response = openai.Completion.create(        engine="text-davinci-003",        prompt=prompt,        max_tokens=500    )    return response.choices[0].text.strip() # User's prompt for generating blog content user_prompt = "Top technology trends in 2023:" # Generate blog content using Azure OpenAI generated_content = generate_blog_content(user_prompt) # Print the generated content print("Generated Blog Content:") print(generated_content)The 'generate_blog_content' function takes the user prompts as input while generating blog content related to the provided prompt.●  Language translation servicesAzureOpenAI can be employed for building extensive translation services, helping translate text from one language to another. It opens the door for global communication and understanding. ConclusionAzure Open AI empowers businesses and developers to leverage the power of artificial intelligence in the most unprecedented ways. Whether generating creative content or building intelligent chatbots, Azure Open AI provides the necessary resources and tools to bring your ideas to life.Experiment, Learn, and Innovate with Azure AI.Author BioShankar Narayanan (aka Shanky) has worked on numerous different cloud and emerging technologies like Azure, AWS, Google Cloud, IoT, Industry 4.0, and DevOps to name a few. He has led the architecture design and implementation for many Enterprise customers and helped enable them to break the barrier and take the first step towards a long and successful cloud journey. He was one of the early adopters of Microsoft Azure and Snowflake Data Cloud. Shanky likes to contribute back to the community. He contributes to open source is a frequently sought-after speaker and has delivered numerous talks on Microsoft Technologies and Snowflake. He is recognized as a Data Superhero by Snowflake and SAP Community Topic leader by SAP.

Build a Language Converter using Google Bard

Aryan Irani
06 Oct 2023
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionIn this blog, we will be taking a look at building a language converter inside of a Google Sheet using Google Bard. We are going to achieve this using the PaLM API and Google Apps Script.We are going to use a custom function inside which we pass the origin language of the sentence, followed by the target language and the sentence you want to convert. In return, you will get the converted sentence using Google Bard.Sample Google SheetFor this blog, I will be using a very simple Google Sheet that contains the following columns:Sentence that has to be convertedOrigin language of the sentenceTarget Language of the sentenceConverted SentenceIf you want to work with the Google Sheet, click here. Once you make a copy of the Google Sheet you have to go ahead and change the API key in the Google Apps Script code.Step 1: Get the API keyCurrently, PaLM API hasn’t been released for public use but to access it before everybody does, you can apply for the waitlist by clicking here. If you want to know more about the process of applying for MakerSuite and PaLM API, you can check the YouTube tutorial below.Once you have access, to get the API key, we have to go to MakerSuite and go to the Get API key section. To get the API key, follow these steps:Go to MakerSuite or click here.On opening the MakerSuite you will see something like this3. To get the API key go ahead and click on Get API key on the left side of the page.4. On clicking the Get API key, you will see something like this where you can create your API key.5. To create the API key go ahead and click on Create API key in the new project.On clicking Create API Key, in a few seconds, you will be able to copy the API key.Step 2: Write the Automation ScriptWhile you are in the Google Sheet, let’s open up the Script Editor to write some Google Apps Script. To open the Script Editor, follow these steps:1. Click on Extensions and open the Script Editor.2. This brings up the Script Editor as shown below.We have reached the script editor lets code.Now that we have the Google Sheet and the API key ready, lets go ahead and write the Google Apps Script to integrate the custom function inside the Google Sheet.function BARD(sentence,origin_language,target_lanugage) { var apiKey = "your_api_key"; var apiUrl = "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText"; We start out by opening a new function BARD() inside which we will declare the API key that we just copied. After declaring the API key we go ahead and declare the API endpoint that is provided in the PaLM API documentation. You can check out the documentation by checking out the link given below.We are going to be receiving the prompt from the Google Sheet from the BARD function that we just created.Generative Language API | PaLM API | Generative AI for DevelopersThe PaLM API allows developers to build generative AI applications using the PaLM model. Large Language Models (LLMs)…developers.generativeai.googlevar url = apiUrl + "?key=" + apiKey; var headers = {   "Content-Type": "application/json" };Here we create a new variable called url inside which we combine the API URL and the API key, resulting in a complete URL that includes the API key as a parameter. 
The headers specify the type of data that will be sent in the request which in this case is “application/json”.var prompt = {     'text': "Convert this sentence"+ sentence + "from"+origin_language + "to"+target_lanugage   } var requestBody = {   "prompt": prompt }Now we come to the most important part of the code which is declaring the prompt. For this blog, we will be designing the prompt in such a way that we get back only the converted sentence. This prompt will accept the variables from the Google Sheet and in return will give the converted sentence.Now that we have the prompt ready, we go ahead and create an object that will contain this prompt that will be sent in the request to the API. var options = {   "method": "POST",   "headers": headers,   "payload": JSON.stringify(requestBody) };Now that we have everything ready, it's time to define the parameters for the HTTP request that will be sent to the PaLM API endpoint. We start out by declaring the method parameter which is set to POST which indicates that the request will be sending data to the API.The headers parameter contains the header object that we declared a while back. Finally, the payload parameter is used to specify the data that will be sent in the request.These options are now passed as an argument to the UrlFetchApp.fetch function which sends the request to the PaLM API endpoint, and returns the response that contains the AI generated text. var response = UrlFetchApp.fetch(url, options); var data = JSON.parse(response.getContentText()); var output = data.candidates[0].output; Logger.log(output); return output;In this case, we just have to pass the url and options variable inside the UrlFetchApp.fetch function. Now that we have sent a request to the PaLM API endpoint we get a response back. In order to get an exact response we are going to be parsing the data.The getContentText() function is used to extract the text content from the response object. Since the response is in JSON format, we use the JSON.parse function to convert the JSON string into an object.The parsed data is then passed to the final variable output, inside which we get the first response out of multiple other drafts that Bard generates for us. On getting the first response, we return the output back to the Google Sheet.Our code is complete and good to go.Step 3: Check the outputIt's time to check the output and see if the code is working according to what we expected. To do that go ahead and save your code and run the BARD() function.On running the code, let's go back to the Google Sheet, use the custom function, and pass the prompt inside it.Here I have passed the original sentence, followed by the Origin Language and the Target Language.On successful execution, we can see that Google Bard has successfully converted the sentences using the PaLM API and Google Apps Script.ConclusionThis is just another interesting example of how we can Integrate Google Bard into Google Workspace using Google Apps Script and the PaLM API. I hope you have understood how to use the PaLM API and Google Apps Script to create a custom function that acts as a Language converter. You can get the code from the GitHub link below.Google-Apps-Script/Bard_Lang.js at master · aryanirani123/Google-Apps-ScriptCollection of Google Apps Script Automation scripts written and compiled by Aryan Irani. …github.comFeel free to reach out if you have any issues/feedback at aryanirani123@gmail.com.Author BioAryan Irani is a Google Developer Expert for Google Workspace. 
He is a writer and content creator who has been working in the Google Workspace domain for three years. He has extensive experience in the area, having published 100 technical articles on Google Apps Script, Google Workspace Tools, and Google APIs.

The Future of Data Analysis with PandasAI

Gabriele Venturi
06 Oct 2023
6 min read
Introduction
Data analysis often involves complex, tedious coding tasks that make it seem reserved only for experts. But imagine a future where anyone could gain insights through natural conversations - where your data speaks plainly instead of through cryptic tables. PandasAI makes this future a reality. In this comprehensive guide, we'll walk through all aspects of adding conversational capabilities to data analysis workflows using this powerful new library. You'll learn:
● Installing and configuring PandasAI
● Querying data and generating visualizations in plain English
● Connecting to databases, cloud storage, APIs, and more
● Customizing PandasAI config
● Integrating PandasAI into production workflows
● Use cases across industries like finance, marketing, science, and more
Follow along to master conversational data analysis with PandasAI!
Installation and Configuration
Install PandasAI
Let's start by installing PandasAI using pip or poetry.
To install with pip:

pip install pandasai

Make sure you are using an up-to-date version of pip to avoid any installation issues.
For managing dependencies, we recommend using poetry:

# Install poetry
pip install --user poetry
# Install pandasai
poetry add pandasai

This will install PandasAI and all its dependencies for you.
For advanced usage, install all optional extras:

poetry add pandasai --all-extras

This includes dependencies for additional capabilities you may need later, like connecting to databases, using different NLP models, advanced visualization, and so on.
With PandasAI installed, we are ready to start importing it and exploring its conversational interface!
Import and Initialize PandasAI
Let's initialize a PandasAI DataFrame from a CSV file:

from pandasai import SmartDataframe

df = SmartDataframe("sales.csv")

This creates a SmartDataframe that wraps the underlying Pandas DataFrame but adds conversational capabilities.
We can customize initialization through configuration options:

from pandasai.llm import OpenAI

llm = OpenAI("<your api key>")
config = {"llm": llm}
df = SmartDataframe("sales.csv", config=config)

This initializes the DataFrame using the OpenAI model.
For easy multi-table analysis, use SmartDatalake:

from pandasai import SmartDatalake

dl = SmartDatalake(["sales.csv", "inventory.csv"])

SmartDatalake converses across multiple related data sources.
We can also connect to live data sources like databases during initialization:

from pandasai.connectors import MySQLConnector

mysql_conn = MySQLConnector(config={
    "host": "localhost",
    "port": 3306,
    "database": "mydb",
    "username": "root",
    "password": "root",
    "table": "loans",
})

df = SmartDataframe(mysql_conn)

This connects to a MySQL database so we can analyze the live data interactively.
Conversational Data Exploration
Ask Questions in Plain English
The most exciting part of PandasAI is exploring data through natural language.
Let's go through some examples!
Calculate totals:

df.chat("What is the total revenue for 2022?")  # Prints revenue total

Filter data:

df.chat("Show revenue for electronics category")  # Filters and prints electronics revenue

Aggregate by groups:

df.chat("Break down revenue by product category and segment")  # Prints table with revenue aggregated by category and segment

Visualize data:

df.chat("Plot monthly revenue over time")  # Plots interactive line chart

Ask for insights:

df.chat("Which segment has fastest revenue growth?")  # Prints segments sorted by revenue growth

PandasAI understands the user's questions in plain English and automatically generates relevant answers, tables, and charts. We can ask endless questions and immediately get data-driven insights without writing any SQL queries or analysis code!
Connect to Data Sources
A key strength of PandasAI is its broad range of built-in data connectors. This enables conversational analytics on diverse data sources.
Databases

from pandasai.connectors import PostgreSQLConnector

pg_conn = PostgreSQLConnector(config={
    "host": "localhost",
    "port": 5432,
    "database": "mydb",
    "username": "root",
    "password": "root",
    "table": "payments",
})

df = SmartDataframe(pg_conn)
df.chat("Which products had the most orders last month?")

Finance Data

from pandasai.connectors import YahooFinanceConnector

yf_conn = YahooFinanceConnector("AAPL")
df = SmartDataframe(yf_conn)
df.chat("How did Apple stock perform last quarter?")

The connectors provide out-of-the-box access to data across domains for easy conversational analytics.
Advanced Usage
Customize Configuration
While PandasAI is designed for simplicity, its architecture is customizable and extensible.
We can configure aspects like:
Language Model
Use different NLP models:

from pandasai.llm import OpenAI, VertexAI

df = SmartDataframe(data, config={"llm": VertexAI()})

Custom Instructions
Add data preparation logic:

config["custom_instructions"] = """
Prepare data:
- Filter outliers
- Impute missing values
"""

These options provide advanced control for tailored workflows.
Integration into Pipelines
Since PandasAI is built on top of Pandas, it integrates smoothly into data pipelines:

import pandas as pd
from pandasai import SmartDataframe

# Load raw data
data = pd.read_csv("sales.csv")

# Clean data (clean_data is a user-defined preparation step)
cleaned_data = clean_data(data)

# PandasAI for analysis
df = SmartDataframe(cleaned_data)
df.chat("Which products have trending sales?")

# Further processing (process_data is a user-defined step)
final_data = process_data(df)

PandasAI's conversational interface can power the interactive analysis stage in ETL pipelines.
Use Cases Across Industries
Thanks to its versatile conversational interface, PandasAI can adapt to workflows across multiple industries. Here are a few examples:
Sales Analytics - Analyze sales numbers, find growth opportunities, and predict future performance.

df.chat("How do sales for women's footwear compare to last summer?")

Financial Analysis - Conduct investment research, portfolio optimization, and risk analysis.

df.chat("Which stocks have the highest expected returns given acceptable risk?")

Scientific Research - Explore and analyze the results of experiments and simulations.

df.chat("Compare the effects of the three drug doses on tumor size.")

Marketing Analytics - Measure campaign effectiveness, analyze customer journeys, and optimize spending.

df.chat("Which marketing channels give the highest ROI for millennial customers?")

And many more!
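To make one of these scenarios concrete, here is a minimal end-to-end sketch of the sales analytics case, reusing the SmartDataframe pattern shown earlier in this guide; the file name, API key, and question are illustrative placeholders:

from pandasai import SmartDataframe
from pandasai.llm import OpenAI

# Illustrative placeholders: point these at your own data file and API key
llm = OpenAI("<your api key>")
df = SmartDataframe("sales.csv", config={"llm": llm})

# The same plain-English question used in the sales analytics example above
print(df.chat("How do sales for women's footwear compare to last summer?"))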
PandasAI fits into any field that leverages data analysis, unlocking the power of conversational analytics for all.ConclusionThis guide covered a comprehensive overview of PandasAI's capabilities for effortless conversational data analysis. We walked through:● Installation and configuration● Asking questions in plain English● Connecting to databases, cloud storage, APIs● Customizing NLP and visualization● Integration into production pipelinesPandasAI makes data analysis intuitive and accessible to all. By providing a natural language interface, it opens up insights from data to a broad range of users.Start adding a conversational layer to your workflows with PandasAI today! Democratize data science and transform how your business extracts value from data through the power of AI.Author BioGabriele Venturi is a software engineer and entrepreneur who started coding at the young age of 12. Since then, he has launched several projects across gaming, travel, finance, and other spaces - contributing his technical skills to various startups across Europe over the past decade.Gabriele's true passion lies in leveraging AI advancements to simplify data analysis. This mission led him to create PandasAI, released open source in April 2023. PandasAI integrates large language models into the popular Python data analysis library Pandas. This enables an intuitive conversational interface for exploring data through natural language queries.By open-sourcing PandasAI, Gabriele aims to share the power of AI with the community and push boundaries in conversational data analytics. He actively contributes as an open-source developer dedicated to advancing what's possible with generative AI.

Google Bard: Everything You Need to Know to Get Started

Sangita Mahala
05 Oct 2023
6 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!IntroductionGoogle Bard, a generative AI conversational chatbot. Initially from the LaMDA family of large language models and later the PaLM LLMs. It will follow your instructions and complete your requests in a better way. Bard uses its knowledge to answer your questions in an informative manner. It can generate different creative text formats, like poems, code, scripts, emails, letters, etc. Currently, it is available in 238 countries and 46 languages.Bard is a powerful tool which can be used for many different things, including:Writing and editing contentResearching and learning new thingsTranslating languagesGenerating new creative ideasAnswering questionsWhat is Google Bard and How does it work?Bard is a large language model, commonly referred to as a chatbot or conversational AI, that has been programmed to be extensive and informative. It is able to communicate and generate human-like text in response to a wide range of prompts and questions.Bard operates by using a method known as deep learning. Artificial neural networks are used in deep learning to learn from data. Deep learning is a subset of machine learning. The structure and operation of the human brain served as the inspiration for neural networks, which are capable of learning intricate patterns from data.The following illustrates how Bard works:You enter a command or query into Bard.The input or query is processed by Bard's neural network, which then produces a response.The reply from Bard is then shown to you.How to get started with Bard?It’s pretty simple and easy to get started using Google Bard. The following steps will let you know how you will be able to start your journey in Google Bard.Step-1Go to the Google Bard website by clicking this link: https://bard.google.com/Step-2Go to the top right corner of your screen and then click on the “Sign in” button.Step-3Once you are signed in, you can start typing your prompts or queries into the text box at the bottom of the screen.Step-4Your prompt or query will trigger Bard to produce an answer, which you may then read and evaluate.For Example:You can provide the prompt to Google Bard such as “Write a 'Hello, World!' program in the Rust programming language.”Prompt:Common Bard commands and how to use themGoogle Bard does not have any specific commands, but you can use certain keywords and phrases to get Bard to perform certain tasks. For example, you can use the following keywords and phrases to get Bard to:Create many creative text forms, such as "Write a script for...", "Write a poem about...", and "Write a code for...".Bard will perfectly answer your questions in a comprehensive and informative way: "What is the capital of India?", "How do I build a website?", "What are the benefits of using Bard?" Examples of what Bard can do and how to use it for specific tasksHere are some examples of what Bard can do and how to use it for specific tasks:Generate creative text formats: Bard allows you to generate a variety of unique text formats, including code, poems, emails, and letters. You have to simply input the required format, followed by a question or query, to get the things done. 
For example, to generate an email to your manager requesting a raise in salary, you would type “Write an email to your manager asking for a raise."Prompt:Answer your questions in a comprehensive and informative way: No matter how difficult or complex your query might be, Bard can help you to find the solution. So you have to simply enter your query into the text box, and Bard will generate a response likewise. For example, to ask Bard what is the National Flower of India is, you would type "What is the National Flower of India?".Prompt: Translate languages: Bard will allow to convert text between different languages. To do this, simply type the text that you want to translate into the text box, followed by the provided language from your side. For example, to translate the sentence: I am going to the store to buy some groceries into Hindi, you would type "I am going to the store to buy some groceries to Hindi".Prompt:How Bard Can Be Used for Different PurposesA writer can use Bard to generate new concepts for fresh stories or articles or to summarize research results.Students can utilize Bard to create essays and reports, get assistance with their assignments, and master new topics.A business owner can use Bard to produce marketing materials, develop chatbots for customer support, or perform data analysis.Software developers can use Bard to produce code, debug applications, or find solutions for technical issues.The future of Google BardGoogle Bard is probably going to get increasingly more capable and adaptable as it keeps developing and acquiring knowledge. It might be utilized to produce new works of art and entertainment, advance scientific research, and provide solutions to some of the most critical global problems.It's also crucial to remember that Google Bard is not the only significant language model being developed. Similar technologies are being created as well by various other companies, such as Microsoft and OpenAI. This competition is likely to drive innovation and lead to even more powerful and sophisticated language models in the future.Overall, the future of Google Bard and other substantial language models seems quite bright overall. These innovations have the power to completely transform the way we study, work, and produce. There is no doubt that these technologies have the potential to improve the world, but it is necessary to use them effectively.ConclusionGoogle Bard is a powerful AI tool that has the potential to be very beneficial, including writing and editing content, researching and learning new things, translating languages, generating new creative ideas, and answering questions. Being more productive and saving time are two of the greatest advantages of utilizing Google Bard. This can free up your time so that you can concentrate on other things, including developing new ideas or enhancing your skills. Bard can assist you in finding and understanding the data quickly and easily because it has access to a wide amount of information. It has the potential to be a game-changer for many people. If you are looking for a way to be more productive, I encourage you to try using Bard.Author BioSangita Mahala is a passionate IT professional with an outstanding track record, having an impressive array of certifications, including 12x Microsoft, 11x GCP, 2x Oracle, and LinkedIn Marketing Insider Certified. She is a Google Crowdsource Influencer and IBM champion learner gold. She also possesses extensive experience as a technical content writer and accomplished book blogger. 
She is always committed to staying current with emerging trends and technologies in the IT sector.