
How-To Tutorials - LLM

81 Articles

Demystifying Azure OpenAI Service

Olivier Mertens, Breght Van Baelen
15 Sep 2023
16 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book, Azure Data and AI Architect Handbook, by Olivier Mertens and Breght Van Baelen. Master core data architecture design concepts and Azure Data & AI services to gain a cloud data and AI architect’s perspective to developing end-to-end solutions.

Introduction

OpenAI has risen immensely in popularity with the arrival of ChatGPT. The company, which started as a non-profit organization, has been the driving force behind the GPT and DALL-E model families, with intense research at a massive scale. The speed at which new models get released and become available on Azure has become impressive lately.

Microsoft has a close partnership with OpenAI, after heavy investments in the company from Microsoft. The models created by OpenAI use Azure infrastructure for development and deployment. Within this partnership, OpenAI carries the responsibility of research and innovation, coming up with new models and new versions of their existing models. Microsoft manages the enterprise-scale go-to-market. It provides infrastructure and technical guidance, along with reliable SLAs, to get large organizations started with the integration of these models, fine-tuning them on their own data, and hosting a private deployment of the models.

Like the face recognition model in Azure Cognitive Services, powerful LLMs such as the ones in Azure OpenAI Service could be used to cause harm at scale. Therefore, this service is also gated according to Microsoft’s guidelines on responsible AI.

At the time of writing, Azure OpenAI Service offers access to the following models:

GPT model family
  * GPT-3.5
  * GPT-3.5-Turbo (the model behind ChatGPT)
  * GPT-4
Codex
DALL-E 2

Let’s dive deeper into these models.

The GPT model family

GPT models, whose name stands for generative pre-trained transformer models, made their first appearance in 2018, with GPT-1, trained on a dataset of roughly 7,000 books. This made good advancements in performance at the time, but the model was already vastly outdated a couple of years later. GPT-2 followed in 2019, trained on the WebText dataset (a collection of 8 million web pages). In 2020, GPT-3 was released, trained on the WebText dataset, two book corpora, and English Wikipedia.

In these years, there were no major breakthroughs in terms of efficient algorithms, but rather in the scale of the architecture and datasets. This becomes easily visible when we look at the growing number of parameters used for every new generation of the model, as shown in the following figure.

Figure 9.3 – A visual comparison between the sizes of the different generations of GPT models, based on their trainable parameters

The question is often raised of how to interpret this concept of parameters. An easy analogy is the number of neurons in a brain. Although parameters in a neural network are not equivalent to its artificial neurons, the number of parameters and neurons are heavily correlated – more parameters means more neurons. The more neurons there are in the brain, the more knowledge it can grasp.

Since the arrival of GPT-3, we have seen two major adaptations of the third-generation model being made. The first one is GPT-3.5. This model has a similar architecture to the GPT-3 model but is trained on text and code, whereas the original GPT-3 only saw text data during training.
Therefore, GPT-3.5 is capable of generating and understanding code. GPT-3.5, in turn, became the basis for the next adaptation, the vastly popular ChatGPT model. This model has been fine-tuned for conversational usage while using additional reinforcement learning to get a sense of ethical behavior.

GPT model sizes

The OpenAI models are available in different sizes, which are all named after remarkable scientists. The GPT-3.5 model, specifically, is available in four versions:

  * Ada
  * Babbage
  * Curie
  * Davinci

The Ada model is the smallest, most lightweight model, while Davinci is the most complex and most performant model. The larger the model, the more expensive it is to use, host, and fine-tune, as shown in Figure 9.4. As a side note, when you hear about the absurd number of parameters of new GPT models, this usually refers to the Davinci model.

Figure 9.4 – A trade-off exists between lightweight, cheap models and highly performant, complex models

With a trade-off between cost and performance available, an architect can start thinking about which model size may best fit a solution. In reality, this often comes down to empirical testing. If the cheaper model can perform the job at an acceptable performance, then this is the more cost-effective solution. Note that when talking about performance in this scenario, we mean predictive power, not the speed at which the model makes predictions. The larger models will be slower to output a prediction than the lightweight models.

Understanding the difference between GPT-3.5 and GPT-3.5-Turbo (ChatGPT)

GPT-3.5 and GPT-3.5-Turbo are both models used to generate natural language text, but they are used in different ways. GPT-3.5 is classified as a text completion model, whereas GPT-3.5-Turbo is referred to as conversational AI.

To better understand the contrast between the two models, we first need to introduce the concept of contextual learning. These models are trained to understand the structure of the input prompt to provide a meaningful answer. Contextual learning is often split up into few-shot learning, one-shot learning, and zero-shot learning. Shot, in this context, refers to an example given in the input prompt. With few-shot learning, we provide multiple examples in the input prompt, one-shot learning provides a single example, and zero-shot indicates that no examples are given. In the case of the latter, the model will have to figure out a different way to understand what is being asked of it (such as interpreting the goal of a question).

Consider the following example:

Figure 9.5 – Few-shot learning takes up the largest number of tokens and requires more effort but often results in model outputs of higher quality

While it takes more prompt engineering effort to apply few-shot learning, it will usually yield better results. A text completion model, such as GPT-3.5, will perform vastly better with few-shot learning than with one-shot or zero-shot learning. As the name suggests, the model figures out the structure of the input prompt (i.e., the examples) and completes the text accordingly.

Conversational AI, such as ChatGPT, is more performant in zero-shot learning. In the case of the preceding example, both models are able to output the correct answer, but as questions become more and more complex, there will be a noticeable difference in predictive performance.
Additionally, GPT-3.5-Turbo will remember information from previous input prompts, whereas GPT-3.5 prompts are handled independently.

Innovating with GPT-4

With the arrival of GPT-4, the focus has shifted toward multimodality. Multimodality in AI refers to the ability of an AI system to process and interpret information from multiple modalities, such as text, speech, images, and videos. Essentially, it is the capability of AI models to understand and combine data from different sources and formats.

GPT-4 is capable of additionally taking images as input and interpreting them. It has stronger reasoning and overall performance than its predecessors. There was a famous example where GPT-4 was able to deduce that balloons would fly upward when asked what would happen if someone cut the balloons' strings, as shown in the following photo.

Figure 9.6 – The image in question that was used in the experiment. When asked what would happen if the strings were cut, GPT-4 replied that the balloons would start flying away

Some adaptations of GPT-4, such as the one used in Bing Chat, have the extra feature of citing sources in generated answers. This is a welcome addition, as hallucination was a significant flaw in earlier GPT models.

Hallucination

Hallucination in the context of AI refers to generating wrong predictions with high confidence. It is obvious that this can cause a lot more harm than the model indicating that it is not sure how to respond or does not know the answer.

Next, we will look at the Codex model.

Codex

Codex is a model that is architecturally similar to GPT-3, but it fully focuses on code generation and understanding. Furthermore, an adaptation of Codex forms the underlying model for GitHub Copilot, a tool that provides suggestions and auto-completion for code based on context and natural language inputs, available for various integrated development environments (IDEs) such as Visual Studio Code. Instead of a ready-to-use solution, Codex is (like the other models in Azure OpenAI) available as a model endpoint and should be used for integration in custom apps.

The Codex model is initially trained on a collection of 54 million code repositories, resulting in billions of lines of code, with the majority of training data written in Python. Codex can generate code in different programming languages based on an input prompt in natural language (text-to-code), explain the function of blocks of code (code-to-text), add comments to code, and debug existing code.

Codex is available as a C (Cushman) and D (Davinci) model. Lightweight Codex models (A series or B series) currently do not exist.

Models such as Codex or GitHub Copilot are a great way to boost the productivity of software engineers, data analysts, data engineers, and data scientists. They do not replace these roles, as their accuracy is not perfect; rather, they give engineers the opportunity to start editing from a fairly well-written block of code instead of coding from scratch.

DALL-E 2

The DALL-E model family is used to generate visuals. By providing a description in natural language in the input prompt, it generates a series of matching images. While other models are often used at scale in large enterprises, DALL-E 2 tends to be more popular in smaller businesses. Organizations that lack an in-house graphic designer can make great use of DALL-E to generate visuals for banners, brochures, emails, web pages, and so on. DALL-E 2 only has a single model size to choose from, although open-source alternatives exist if a lightweight version is preferred.
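To make the serverless consumption model described in the next section a little more concrete, and to tie it back to the earlier few-shot learning discussion, here is a minimal, hedged sketch of calling a GPT-3.5-Turbo deployment from Python. It assumes the pre-1.0 openai package (the 0.28-era API, current at the time of writing); the endpoint, API version, deployment name, and key variable are placeholders you would replace with your own resource's values, not part of the book's example code.

import os
import openai

# Hypothetical Azure OpenAI resource details -- replace with your own.
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # placeholder endpoint
openai.api_version = "2023-05-15"                          # API version may differ over time
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

# Few-shot prompt: the worked examples in the message list are the "shots".
messages = [
    {"role": "system", "content": "You classify customer feedback as positive or negative."},
    {"role": "user", "content": "The delivery was fast and the packaging was great."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "The product broke after two days."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Support never answered my emails."},
]

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # the deployment name you chose in Azure, not the model name
    messages=messages,
    temperature=0,
)
print(response["choices"][0]["message"]["content"])

Dropping the two example question/answer pairs from the message list would turn this into a zero-shot prompt, which is where the conversational GPT-3.5-Turbo model tends to shine.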
Fine-tuning and private deployments

As a data architect, it is important to understand the cost structure of these models. The first option is to use the base model in a serverless manner. Similar to how we work with Azure Cognitive Services, users will get a key for the model’s endpoint and simply pay per prediction. For DALL-E 2, costs are incurred per 100 images, while the GPT and Codex models are priced per 1,000 tokens. For every request made to a GPT or Codex model, all tokens of the input prompt and the output are added up to determine the cost of the prediction.

Tokens

In natural language processing, a token refers to a sequence of characters that represents a distinct unit of meaning in a text. These units do not necessarily correspond to words, although for short words, this is mostly the case. Tokens are used as the basic building blocks to process and analyze text data. A good rule of thumb for the English language is that one token is, on average, four characters. Dividing your total character count by four will give a good estimate of the number of tokens.

Azure OpenAI Service also grants extensive fine-tuning functionalities. Up to 1 GB of data can be uploaded per Azure OpenAI instance for fine-tuning. This may not sound like a lot, but note that we are not training a new model from scratch. The goal of fine-tuning is to retrain the last few layers of the model to increase performance on specific tasks or company-specific knowledge. For this process, 1 GB of data is more than sufficient.

When adding a fine-tuned model to a solution, two additional costs will be incurred. On top of the token-based inference cost, we need to take into account the training and hosting costs. The hourly training cost can be quite high due to the amount of hardware needed, but compared to the inference and hosting costs during a model’s life cycle, it remains a small percentage. Next, since we are not using the base model anymore and, instead, our own “version” of the model, we will need to host the model ourselves, resulting in an hourly hosting cost.

Now that we have covered both pre-trained model collections, Azure Cognitive Services and Azure OpenAI Service, let’s move on to custom development using Azure Machine Learning.

Grounding LLMs

One of the most popular use cases for LLMs involves providing our own data as context to the model (often referred to as grounding). The reason for its popularity is partly due to the fact that many business cases can be solved using a consistent technological architecture. We can reuse the same solution, but by providing different knowledge bases, we can serve different end users.

For example, by placing an LLM on top of public data such as product manuals or product specifics, it is easy to develop a customer support chatbot. If we swap out this knowledge base of product information with something such as HR documents, we can reuse the same tech stack to create an internal HR virtual assistant.

A common misconception regarding grounding is that a model needs to be trained on our own data. This is not the case. Instead, after a user asks a question, the relevant document (or paragraphs) is injected into the prompt behind the scenes and lives in the memory of the model for the duration of the chat session (when working with conversational AI) or for a single prompt. The context, as we call it, is then wiped clean and all information is forgotten.
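As a rough illustration of this prompt-injection pattern, the sketch below builds a grounded prompt from a couple of made-up document chunks and sizes it with the four-characters-per-token rule of thumb from the Tokens note above. The chunks, token budget, and helper names are purely hypothetical; a real solution would pull the chunks from the vector database described next, and a proper tokenizer would give a more precise count than this estimate.

# Minimal sketch of grounding: inject retrieved chunks into the prompt at request time.
# Nothing is retrained; the context only lives for this prompt or chat session.

def estimate_tokens(text: str) -> int:
    """Rough estimate using the ~4 characters per token rule of thumb for English."""
    return max(1, len(text) // 4)

def build_grounded_prompt(question: str, chunks: list[str], token_budget: int = 3000) -> str:
    context_parts: list[str] = []
    used = estimate_tokens(question)
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > token_budget:
            break  # stay within the model's context window
        context_parts.append(chunk)
        used += cost
    context = "\n---\n".join(context_parts)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )

# Stand-in chunks; in practice these come from the retrieval layer.
chunks = [
    "The X200 vacuum ships with a 2-year limited warranty covering motor defects.",
    "Warranty claims require proof of purchase and the original serial number.",
]
prompt = build_grounded_prompt("What does the X200 warranty cover?", chunks)
print(f"~{estimate_tokens(prompt)} tokens\n{prompt}")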
If we wanted to cache this info, it is possible to make use of a framework such as LangChain or Semantic Kernel, but that is out of the scope of this book.

The fact that a model does not get retrained on our own data plays a crucial role in terms of data privacy and cost optimization. As shown before in the section on fine-tuning, as soon as a base model is altered, an hourly operating cost is added to run a private deployment of the model. Also, information from the documents cannot be leaked to other users working with the same model.

Figure 9.7 visualizes the architectural concepts to ground an LLM.

Figure 9.7 – Architecture to ground an LLM

The first thing to do is turn the documents that should be accessible to the model into embeddings. Simply put, embeddings are mathematical representations of natural language text. By turning text into embeddings, it is possible to accurately calculate the similarity (from a semantics perspective) between two pieces of text.

To do this, we can leverage Azure Functions, a service that allows pieces of code to run in a serverless manner. It often forms the glue between different components by handling interactions. In this case, an Azure function (on the bottom left of Figure 9.7) will grab the relevant documents from the knowledge base, break them up into chunks (to accommodate the maximum token limits of the model), and generate an embedding for each one. This embedding is then stored, alongside the natural language text, in a vector database. This function should be run for all historic data that will be accessible to the model, as well as triggered for every new, relevant document that is added to the knowledge base.

Once the vector database is in place, users can start asking questions. However, the user questions are not directly sent to the model endpoint. Instead, another Azure function (shown at the top of Figure 9.7) will turn the user question into an embedding and check its similarity with the embeddings of the documents or paragraphs in the vector database. Then, the top X most relevant text chunks are injected into the prompt as context, and the prompt is sent over to the LLM. Finally, the response is returned to the user.

Conclusion

Azure OpenAI Service, a collaboration between OpenAI and Microsoft, delivers potent AI models. The GPT model family, from GPT-1 to GPT-4, has evolved impressively, with GPT-3.5-Turbo (ChatGPT) excelling in conversational AI. GPT-4 introduces multimodal capabilities, comprehending text, speech, images, and videos. Codex specializes in code generation, while DALL-E 2 creates visuals from text descriptions. These models empower developers and designers. Customization via fine-tuning offers cost-effective solutions for specific tasks. Leveraging Azure OpenAI Service for your projects enhances productivity. Grounding language models with user data ensures data privacy and cost efficiency. This collaboration holds promise for innovative AI applications across various domains.

Author Bio

Olivier Mertens is a cloud solution architect for Azure data and AI at Microsoft, based in Dublin, Ireland. In this role, he assists organizations in designing their enterprise-scale data platforms and analytical workloads. Next to his role as an architect, Olivier leads the technical AI expertise for Microsoft EMEA in the corporate market. This includes leading knowledge sharing and internal upskilling, as well as solving highly complex or strategic customer AI cases.
Before his time at Microsoft, he worked as a data scientist at a Microsoft partner in Belgium. Olivier is a lecturer on generative AI and AI solution architectures, a keynote speaker on AI, and holds a master’s degree in information management, a postgraduate degree as an AI business architect, and a bachelor’s degree in business management.

Breght Van Baelen is a Microsoft employee based in Dublin, Ireland, and works as a cloud solution architect for the data and AI pillar in Azure. He provides guidance to organizations building large-scale analytical platforms and data solutions. In addition, Breght was chosen as an advanced cloud expert for Power BI and is responsible for providing technical expertise in Europe, the Middle East, and Africa. Before his time at Microsoft, he worked as a data consultant at Microsoft Gold Partners in Belgium, where he led a team of eight data and AI consultants as a data science lead. Breght holds a master’s degree in computer science from KU Leuven, specializing in AI, and a bachelor’s degree in computer science from the University of Hasselt.

AI_Distilled #17: Numenta’s NuPIC, Adept’s Persimmon-8B, Hugging Face Rust ML Framework, NVIDIA’s TensorRT-LLM, Azure ML PromptFlow, Siri's Gen AI Enhancements

Merlyn Shelley
15 Sep 2023
11 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

👋 Hello,

"If we don't embrace AI, it will move forward without us. Now is the time to harness AI's potential for the betterment of society."
- Fei-Fei Li, Computer Scientist and AI Expert.

AI is proving to be a real game-changer worldwide, bringing new perspectives to everyday affairs in every field. No wonder Apple is heavily investing in Siri's generative AI enhancements and Microsoft is promising legal protection for AI-generated copyright breaches; however, AI currently has massive cooling requirements in data centers, which has led to a 34% increase in water consumption in Microsoft data centers.

Say hello to the latest edition of our AI_Distilled #17, where we talk about all things LLM, NLP, GPT, and Generative AI! In this edition, we present the latest AI developments from across the world, including NVIDIA TensorRT-LLM enhancing Large Language Model inference on H100 GPUs, Meta developing a powerful AI system to compete with OpenAI, Google launching the Digital Futures Project to support responsible AI, Adept open-sourcing a powerful language model with fewer than 10 billion parameters, and Numenta introducing NuPIC, revolutionizing AI efficiency by 100 times.

We know how much you love our curated AI secret knowledge resources. This week, we’re here with some amazing tutorials on building an AWS conversational AI app with AWS Amplify, how to evaluate legal language models with Azure ML PromptFlow, deploying generative AI models on Amazon EKS with a step-by-step guide, Automate It with Zapier and Generative AI, and generating realistic textual synthetic data using LLMs.

What do you think of this issue and our newsletter? Please consider taking the short survey below to share your thoughts and you will get a free PDF of the “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

⚡ TechWave: AI/GPT News & Analysis

Google Launches Digital Futures Project to Support Responsible AI: Google has initiated the Digital Futures Project, accompanied by a $20 million fund from Google.org to provide grants to global think tanks and academic institutions. This project aims to unite various voices to understand and address the opportunities and challenges presented by AI. It seeks to support researchers, organize discussions, and stimulate debates on public policy solutions for responsible AI development. The fund will encourage independent research on topics like AI's impact on global security, labor, and governance structures. Inaugural grantees include renowned institutions like the Aspen Institute and MIT Work of the Future.

Microsoft to Provide Legal Protection for AI-Generated Copyright Breaches: Microsoft has committed to assuming legal responsibility for copyright infringement related to material generated by its AI software used in Word, PowerPoint, and coding tools. The company will cover legal costs for commercial customers who face lawsuits over tools or content produced by AI. This includes services like GitHub Copilot and Microsoft 365 Copilot. The move aims to ease concerns about potential clashes with content owners and make the software more user-friendly.
Other tech companies, such as Adobe, have made similar pledges to indemnify users of AI tools. Microsoft's goal is to provide reassurance to paying users amid the growing use of generative AI, which may reproduce copyrighted content.

NVIDIA TensorRT-LLM Enhances Large Language Model Inference on H100 GPUs: NVIDIA introduces TensorRT-LLM, a software solution that accelerates and optimizes LLM inference. This open-source software incorporates advancements achieved through collaboration with leading companies. TensorRT-LLM is compatible with Ampere, Lovelace, and Hopper GPUs, aiming to streamline LLM deployment. It offers an accessible Python API for defining and customizing LLM architectures without requiring deep programming knowledge. Performance improvements are demonstrated with real-world datasets, including a 4.6x acceleration for Meta's Llama 2. Additionally, TensorRT-LLM helps reduce total cost of ownership and energy consumption in data centers, making it a valuable tool for the AI community.

Meta Developing Powerful AI System to Compete with OpenAI: The Facebook parent company is reportedly working on a new AI system that aims to rival the capabilities of OpenAI's advanced models. The company intends to launch this AI model next year, and it is expected to be significantly more powerful than Meta's current offering, Llama 2, an open-source AI language model. Llama 2 was introduced in July and is distributed through Microsoft's Azure services to compete with OpenAI's ChatGPT and Google's Bard. This upcoming AI system could assist other companies in developing sophisticated text generation and analysis services. Meta plans to commence training on this new AI system in early 2024.

Adept Open-Sources a Powerful Language Model with <10 Billion Parameters: Adept announces the open-source release of Persimmon-8B, a highly capable language model with fewer than 10 billion parameters. This model, made available under an Apache license, is designed to empower the AI community for various use cases. Persimmon-8B stands out for its substantial context size, being 4 times larger than LLaMA2 and 8 times more than GPT-3. Despite using only 0.37x the training data of LLaMA2, it competes with its performance. It includes 70k unused embeddings for multimodal extensions and offers unique inference code combining speed and flexibility. Adept expects this release to inspire innovation in the AI community.

Apple Invests Heavily in Siri's Generative AI Enhancement: Apple has significantly increased its investment in AI, particularly in developing conversational chatbot features for Siri. The company is reportedly spending millions of dollars daily on AI research and development. CEO Tim Cook expressed a strong interest in generative AI. Apple's AI journey began four years ago when John Giannandrea, head of AI, formed a team to work on LLMs. The Foundational Models team, led by Ruoming Pang, is at the forefront of these efforts, rivaling OpenAI's investments. Apple plans to integrate LLMs into Siri to enhance its capabilities, but the challenge lies in fitting these large models onto devices while maintaining privacy and performance standards.

Numenta Introduces NuPIC: Revolutionizing AI Efficiency by 100 Times: Numenta, a company bridging neuroscience and AI, has unveiled NuPIC (Numenta Platform for Intelligent Computing), a groundbreaking solution rooted in 17 years of brain research.
Developed by computing pioneers Jeff Hawkins and Donna Dubinsky, NuPIC aims to make AI processing up to 100 times more efficient. Partnering with game startup Gallium Studios, NuPIC enables high-performance LLMs on CPUs, prioritizing user trust and privacy. Unlike GPU-reliant models, NuPIC's CPU focus offers cost savings, flexibility, and control while maintaining high throughput and low latency.

AI Development Increases Water Consumption in Microsoft Data Centers by 34%: The development of AI tools like ChatGPT has led to a 34% increase in Microsoft's water consumption, raising concerns in the city of West Des Moines, Iowa, where its data centers are located. Microsoft, along with tech giants like OpenAI and Google, has seen rising demand for AI tools, which comes with significant costs, including increased water usage. Microsoft disclosed a 34% spike in global water consumption from 2021 to 2022, largely attributed to AI research. A study estimates that ChatGPT consumes 500 milliliters of water every time it's prompted. Google also reported a 20% growth in water use, partly due to AI work. Microsoft and OpenAI stated they are working to make AI systems more efficient and environmentally friendly.

🔮 Looking for a New Book from Packt’s Expert Community?

Automate It with Zapier and Generative AI - By Kelly Goss, Philip Lakin

Are you excited to supercharge your work with Gen AI's automation skills? Check out this new guide that shows you how to become a Zapier automation pro, making your work more efficient and productive in no time! It covers planning, configuring workflows, troubleshooting, and advanced automation creation. It emphasizes optimizing workflows to prevent errors and task overload. The book explores new built-in apps, AI integration, and complex multi-step Zaps. Additionally, it provides insights into account management and Zap issue resolution for improved automation skills. Read through Chapter 1, unlocked here...

🌟 Secret Knowledge: AI/LLM Resources

Understanding Liquid Neural Networks: A Primer on AI Advancements: In this post, you'll learn how liquid neural networks are transforming the AI landscape. These networks, inspired by the human brain, offer a unique and creative approach to problem-solving. They excel in complex tasks such as weather prediction, stock market analysis, and speech recognition. Unlike traditional neural networks, liquid neural networks require significantly fewer neurons, making them ideal for resource-constrained environments like autonomous vehicles. These networks excel in handling continuous data streams but may not be suitable for static data. They also provide better causality handling and interpretability.

Navigating Generative AI with FMOps and LLMOps: A Practical Guide: In this informative post, you'll gain valuable insights into the world of generative AI and its operationalization using FMOps and LLMOps principles. The authors delve into the challenges businesses face when integrating generative AI into their operations. You'll explore the fundamental differences between traditional MLOps and these emerging concepts. The post outlines the roles various teams play in this process, from data engineers to data scientists, ML engineers, and product owners. The guide provides a roadmap for businesses looking to embrace generative AI.

AI Compiler Quartet: A Breakdown of Cutting-Edge Technologies: Explore Microsoft’s groundbreaking "heavy-metal quartet" of AI compilers: Rammer, Roller, Welder, and Grinder.
These compilers address the evolving challenges posed by AI models and hardware. Rammer focuses on optimizing deep neural network (DNN) computations, improving hardware parallel utilization. Roller tackles the challenge of memory partitioning and optimization, enabling faster compilation with good computation efficiency. Welder optimizes memory access, particularly vital as AI models become more memory-intensive. Grinder addresses complex control flow execution in AI computation. These AI compilers collectively offer innovative solutions for parallelism, compilation efficiency, memory, and control flow, shaping the future of AI model optimization and compilation.

💡 MasterClass: AI/LLM Tutorials

Exploring IoT Data Simulation with ChatGPT and MQTTX: In this comprehensive guide, you'll learn how to harness the power of AI, specifically ChatGPT, and the MQTT client tool, MQTTX, to simulate and generate authentic IoT data streams. Discover why simulating IoT data is crucial for system verification, customer experience enhancement, performance assessment, and rapid prototype design. The article dives into the integration of ChatGPT and MQTTX, introducing the "Candidate Memory Bus" to streamline data testing. Follow the step-by-step guide to create simulation scripts with ChatGPT and efficiently simulate data transmission with MQTTX.

Revolutionizing Real-time Inference: SageMaker Unveils Streaming Support for Generative AI: Amazon SageMaker now offers real-time response streaming, transforming generative AI applications. This new feature enables continuous response streaming to clients, reducing time-to-first-byte and enhancing interactive experiences for chatbots, virtual assistants, and music generators. The post guides you through building a streaming web application using SageMaker real-time endpoints for interactive chat use cases. It showcases deployment options with AWS Large Model Inference (LMI) and Hugging Face Text Generation Inference (TGI) containers, providing a seamless, engaging conversation experience for users.

Implementing Effective Guardrails for Large Language Models: Guardrails are crucial for maintaining trust in LLM applications as they ensure compliance with defined principles. This guide presents two open-source tools for implementing LLM guardrails: Guardrails AI and NVIDIA NeMo-Guardrails. Guardrails AI offers Python-based validation of LLM responses, using the RAIL specification. It enables developers to define output criteria and corrective actions, with step-by-step instructions for implementation. NVIDIA NeMo-Guardrails introduces Colang, a modeling language for flexible conversational workflows. The guide explains its syntax elements and event-driven design. Comparing the two, Guardrails AI suits simple tasks, while NeMo-Guardrails excels in defining advanced conversational guidelines.

🚀 HackHub: Trending AI Tools

cabralpinto/modular-diffusion: Python library for crafting and training personalized Diffusion Models with PyTorch.

cofactoryai/textbase: Simplified Python chatbot development using NLP and ML with Textbase's on_message function in main.py.

microsoft/BatteryML: Open-source ML tool for battery analysis, aiding researchers in understanding electrochemical processes and predicting battery degradation.

facebookresearch/co-tracker: Swift transformer-based video tracker with Optical Flow, pixel-level tracking, grid sampling, and manual point selection.
explodinggradients/ragas: Framework for evaluating Retrieval Augmented Generation pipelines that enhance LLM context with external data, using research-based tools.

Future Trends in Pretraining Foundation Models

Emily Webber
14 Sep 2023
17 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book, Pretrain Vision and Large Language Models in Python, by Emily Webber. Master the art of training vision and large language models with conceptual fundaments and industry-expert guidance. Learn about AWS services and design patterns, with relevant coding examples.

Introduction

In this article, we’ll explore trends in foundation model application development, like using LangChain to build interactive dialogue applications, along with techniques like retrieval augmented generation to reduce LLM hallucination. We’ll explore ways to use generative models to solve classification tasks, human-centered design, and other generative modalities like code, music, product documentation, PowerPoints, and more! We’ll talk through AWS offerings like SageMaker JumpStart Foundation Models, Amazon Bedrock, Amazon Titan, and Amazon Code Whisperer.

In particular, we’ll dive into the following topics:

  * Techniques for building applications for LLMs
  * Generative modalities outside of vision and language
  * AWS offerings in foundation models

Techniques for building applications for LLMs

Now that you’ve learned about foundation models, and especially large language models, let’s talk through a few key ways you can use them to build applications. One of the most significant takeaways of the ChatGPT moment in December 2022 is that customers clearly love for their chat to be knowledgeable about every moment in the conversation, remember topics mentioned earlier, and encompass all the twists and turns of dialogue. Said another way, beyond generic question answering, there’s a clear consumer preference for a chat to be chained. Let’s take a look at an example in the following screenshot:

Figure 15.1 – Chaining questions for chat applications

The key difference between the left- and right-hand sides of Figure 15.1 is that on the left-hand side, the answers are discontinuous. That means the model simply sees each question as a single entity before providing its response. On the right-hand side, however, the answers are continuous. That means the entire dialogue is provided to the model, with the newest question at the bottom. This helps to ensure the continuity of responses, with the model more capable of maintaining the context.

How can you set this up yourself? Well, on the one hand, what I’ve just described isn’t terribly difficult. Imagine just reading from your HTML page, packing all of that call and response data into the prompt, and siphoning out the response to return it to your end user. If you don’t want to build it yourself, however, you can just use a few great open-source options!

Building interactive dialogue apps with open-source stacks

If you haven’t seen it before, let me quickly introduce you to LangChain. Available for free on GitHub here: https://github.com/hwchase17/langchain, LangChain is an open-source toolkit built by Harrison Chase and more than 600 other contributors.
It provides functionality similar to the famous ChatGPT by pointing to OpenAI’s API, or any other foundation model, but letting you as the developer and data scientist create your own frontend and customer experience.

Decoupling the application from the model is a smart move; in the last few months alone the world has seen nothing short of hundreds of new large language models come online, with teams around the world actively developing more. When your application interacts with the model via a single API call, you can more easily move from one model to the next as the licensing, pricing, and capabilities upgrade over time. This is a big plus for you!

Another interesting open-source technology here is Haystack (26). Developed by the German start-up Deepset, Haystack is a useful tool for, well, finding a needle in a haystack. Specifically, it operates as an interface for you to bring your own LLMs into expansive question/answering scenarios. This was their original area of expertise, and they have since expanded quite a bit!

At AWS, we have an open-source template for building applications with LangChain on AWS. It’s available on GitHub here: https://github.com/3coins/langchain-aws-template.

In the following diagram, you can see a quick representation of the architecture. While this can point to any front end, we provide an example template you can use to get off the ground for your app. You can also easily point to any custom model, whether it’s on a SageMaker endpoint or in the new AWS service, Bedrock! More on that a bit later in this chapter. As you can see in the previous image, in this template you can easily run a UI anywhere that interacts with the cloud. Let’s take a look at all of the steps:

1. First, the UI hits the API gateway.
2. Second, credentials are retrieved via IAM.
3. Third, the service is invoked via Lambda.
4. Fourth, the model credentials are retrieved via Secrets Manager.
5. Fifth, your model is invoked, either through an API call to a serverless model SDK or by invoking a custom model you’ve trained that is hosted on a SageMaker endpoint.
6. Sixth, the relevant conversation history is looked up in DynamoDB to ensure your answer is accurate.

How does this chat interface ensure it’s not hallucinating answers? How does it point to a set of data stored in a database? Through retrieval augmented generation (RAG), which we will cover next.

Using RAG to ensure high accuracy in LLM applications

As explained in the original 2020 (1) paper, RAG is a way to retrieve documents relevant to a given query. Imagine your chat application takes in a question about a specific item in your database, such as one of your products. Rather than having the model make up the answer, you’d be better off retrieving the right document from your database and simply using the LLM to stylize the response. That’s where RAG is so powerful; you can use it to ensure the accuracy of your generated answers stays high, while keeping the customer experience consistent in both style and tone. Let’s take a closer look:

Figure 15.3 – RAG

First, a question comes in from the left-hand side. In the top left, you can see a simple question, Define “middle ear”. This is processed by a query encoder, which is simply a language model producing an embedding of the query. This embedding is then applied to the index of a database, with many candidate algorithms in use here: K Nearest Neighbors, Maximum Inner Product Search (MIPS), and others.
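As a rough sketch of that retrieval step (not the implementation from the paper itself), the snippet below runs a brute-force cosine-similarity search with NumPy. The embeddings are random stand-ins for what a real query encoder would produce, and a vector database would replace the brute-force scan with an approximate nearest-neighbor index.

import numpy as np

# Stand-in embeddings: in practice these come from an embedding model (the query
# encoder in Figure 15.3); here they are random vectors just to show the mechanics.
rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 768))   # one row per document chunk
doc_texts = [f"document chunk {i}" for i in range(1000)]
query_embedding = rng.normal(size=768)

def top_k_cosine(query: np.ndarray, docs: np.ndarray, k: int = 3) -> np.ndarray:
    """Brute-force cosine similarity; a vector database swaps this for an ANN index."""
    docs_norm = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = docs_norm @ query_norm
    return np.argsort(scores)[::-1][:k]   # indices of the k most similar chunks

for idx in top_k_cosine(query_embedding, doc_embeddings):
    print(doc_texts[idx])   # these are the chunks handed to the generator next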
Once you’ve retrieved a set of similar documents, you can feed the best ones into the generator, the final model on the right-hand side. This takes the input documents and returns a simple answer to the question. Here, the answer is The middle ear includes the tympanic cavity and the three ossicles.

Interestingly, however, the LLM here doesn’t really define what the middle ear is. It’s actually answering the question, “what objects are contained within the middle ear?” Arguably, any definition of the middle ear would include its purpose, notably serving as a buffer between your ear canal and your inner ear, which helps you keep your balance and lets you hear. So, this would be a good candidate for expert reinforcement learning with human feedback, or RLHF, optimization.

As shown in Figure 15.3, this entire RAG system is tunable. That means you can and should fine-tune the encoder and decoder aspects of the architecture to dial in model performance based on your datasets and query types. Another way to classify documents, as we’ll see, is generation!

Is generation the new classification?

As we learned in Chapter 13, Prompt Engineering, there are many ways you can push your language model to output the type of response you are looking for. One of these ways is actually to have it classify what it sees in the text! Here is a simple diagram to illustrate this concept:

Figure 15.4 – Using generation in place of classification

As you can see in the diagram, with traditional classification you train the model ahead of time to perform one task: classification. This model may do well on classification, but it won’t be able to handle new tasks at all. This key drawback is one of the main reasons why foundation models, and especially large language models, are now so popular: they are extremely flexible and can handle many different tasks without needing to be retrained.

On the right-hand side of Figure 15.4, you can see we’re using the same text as the starting point, but instead of passing it to an encoder-based text model, we’re passing it to a decoder-based model and simply adding the instruction to classify this sentence into positive or negative sentiment. You could just as easily say, “tell me more about how this customer really feels,” or “how optimistic is this home buyer?” or “help this homebuyer find a different house that meets their needs.” Arguably, each of those three instructions is slightly different, veering away from pure classification and into more general application development or customer experience. Expect to see more of this over time! Let’s look at one more key technique for building applications with LLMs: keeping humans in the loop.

Human-centered design for building applications with LLMs

We touched on this topic previously, in Chapter 2, Dataset Preparation: Part One, Chapter 10, Fine-Tuning and Evaluating, Chapter 11, Detecting, Mitigating, and Monitoring Bias, and Chapter 14, MLOps for Vision and Language. Let me say this yet again: I believe that human labeling will become even more of a competitive advantage that companies can provide. Why? Building LLMs is now incredibly competitive; you have both the open-source and proprietary sides actively competing for your business. Open-source options are from the likes of Hugging Face and Stability, while proprietary offerings are from AI21, Anthropic, and OpenAI.
The differences between these options are questionable; you can look up the latest models at the top of the leaderboard from Stanford’s HELM (2), which incidentally falls under their human-centered AI initiative. With enough fine-tuning and customization, you should generally be able to meet your performance requirements.

What then determines the best LLM applications, if it’s not the foundation model? Obviously, the end-to-end customer experience is critical, and will always remain so. Consumer preferences wax and wane over time, but a few tenets remain for general technology: speed, simplicity, flexibility, and low cost. With foundation models we can clearly see that customers prefer explainability and models they can trust. This means that application designers and developers should grapple with these long-term consumer preferences, picking solutions and systems that maximize them. As you may have guessed, that alone is no small task.

Beyond the core skill of designing and building successful applications, what else can we do to stay competitive in this brave new world of LLMs? I would argue that amounts to customizing your data. Focus on making your data and your datasets unique: singular in purpose, breadth, depth, and completeness. Lean into labeling your data with the best resources you can, and keep that a core part of your entire application workflow. This brings you to continuous learning, or the ability of the model to constantly get better and better based on signals from your end users.

Next, let’s take a look at upcoming generative modalities.

Other generative modalities

Since the 2022 ChatGPT moment, most of the technical world has been fascinated by the proposition of generating novel content. While this was always somewhat interesting, the meeting of high-performance foundation models with an abundance of media euphoria over the capabilities, combined with a post-pandemic community with an extremely intense fear of missing out, has led us to the perfect storm of a global fixation on generative AI.

Is this a good thing? Honestly, I’m happy to finally see the shift; I’ve been working on generating content with AI/ML models in some fashion since at least 2019, and as a writer and creative person myself, I’ve always thought this was the most interesting part of machine learning. I was very impressed by David Foster’s book (3) on the topic. He’s just published an updated version of it to include the latest foundation models and methods! Let’s quickly recap some other types of modalities that are common in generative AI applications today.

Generating code should be no surprise to most of you; its core similarities to language generation make it a perfect candidate! Fine-tuning an LLM to spit out code in your language of choice is pretty easy; here’s my 2019 project (4) doing exactly that with the SageMaker example notebooks! Is the code great? Absolutely not, but fortunately, LLMs have come a long way since then. Many modern code-generating models are excellent, and thanks to a collaboration between Hugging Face and ServiceNow we have an open-source model to use! This is called StarCoder and is available for free on Hugging Face right here: https://huggingface.co/bigcode/starcoder.

What I love about using an open-source LLM for code generation is that you can customize it! This means you can point to your own private code repositories, tokenize the data, update the model, and immediately train this LLM to generate code in the style of your organization!
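Before any of that customization, a minimal sketch of simply loading StarCoder through the Hugging Face transformers library and generating a completion might look like the following. It assumes you have accepted the model's license on the Hub, installed accelerate for device placement, and have enough memory for a roughly 15-billion-parameter model; the prompt is just an illustration.

from transformers import AutoModelForCausalLM, AutoTokenizer

# StarCoder is large; device_map="auto" (via accelerate) spreads it across available devices.
checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Give the model the start of a function and let it complete the body.
prompt = "def fibonacci(n: int) -> int:\n    \"\"\"Return the n-th Fibonacci number.\"\"\"\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs.input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))

Fine-tuning on your private repositories would build on exactly this kind of setup, swapping the generation call for a training loop over your own tokenized code.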
At the organizational level, you might even do some continued pretraining on an open-source LLM for code generation on your own repositories to speed up all of your developers. We’ll take a look at more ways you can use LLMs to write your own code faster in the next section when we focus on AWS offerings, especially Amazon Code Whisperer (27).

The rest of the preceding content can all be great candidates for your own generative AI projects. Truly, just as we saw general machine learning moving from the science lab into the foundation of most businesses and projects, it’s likely that generative capabilities in some fashion will do the same.

Does that mean engineering roles will be eliminated? Honestly, I doubt it. Just as the rise of great search engines didn’t eliminate software engineering roles but made them more fun and doable for a lot of people, I’m expecting generative capabilities to do the same. They are great at searching many possibilities and quickly finding great options, but it’s still up to you to know the ins and outs of your consumers, your product, and your design. Models aren’t great at critical thinking, but they are good at coming up with ideas and finding shortcomings, at least in words.

Now that we’ve looked at other generative modalities at a very high level, let’s learn about AWS offerings for foundation models!

AWS offerings in foundation models

On AWS, as you’ve seen throughout the book, you have literally hundreds of ways to optimize your foundation model development and operationalization. Let’s now look at a few ways AWS is explicitly investing to improve the customer experience in this domain.

SageMaker JumpStart Foundation Model Hub: Announced in preview at re:Invent 2022, this is an option for pointing to foundation models nicely packaged in the SageMaker environment. This includes both open-source models such as BLOOM and Flan-T5 from Hugging Face, and proprietary models such as AI21 Jurassic. A list of all the foundation models is available here (5). To date, we have nearly 20 foundation models, all available for hosting in your own secure environments. Any data you use to interact with or fine-tune models on the Foundation Model Hub is not shared with providers. You can also optimize costs by selecting the instances yourself. We have tens of example notebooks pointing to these models for training and hosting across a wide variety of use cases available here (6) and elsewhere. For more information about the data the models were trained on, you can read about that in the playground directly.

Amazon Bedrock: If you have been watching AWS news closely in early 2023, you may have noticed a new service we announced for foundation models: Amazon Bedrock! As discussed in this blog post (7) by Swami Sivasubramanian, Bedrock is a service that lets you interact with a variety of foundation models through a serverless interface that stays secure. Said another way, Bedrock provides a point of entry for multiple foundation models, letting you get the best of all possible providers. This includes AI start-ups such as AI21, Anthropic, and Stability. Interacting with Bedrock means invoking a serverless experience, saving you from dealing with the lower-level infrastructure. You can also fine-tune your models with Bedrock!

Amazon Titan: Another model that will be available through Bedrock is Titan, a new large language model that’s fully trained and managed by Amazon!
This means we handle the training data, optimizations, tuning, debiasing, and all enhancements for getting you results with large language models. Titan will also be available for fine-tuning.

Amazon Code Whisperer: As you may have seen, Code Whisperer is an AWS service announced in 2022 and made generally available in 2023. Interestingly, it seems to tightly couple with a given development environment, taking the entire context of the script you are writing and generating recommendations based on this. You can write pseudo-code, markdown, or other function starts, and using keyboard shortcuts invoke the model. This will send you a variety of options based on the context of your script, letting you ultimately select the script that makes the most sense for you! Happily, this is now supported for both Jupyter notebooks and SageMaker Studio; you can read more about these initiatives from AWS Sr Principal Technologist Brian Granger, co-founder of Project Jupyter. Here’s Brian’s blog post on the topic: https://aws.amazon.com/blogs/machine-learning/announcing-new-jupyter-contributions-by-aws-to-democratize-generative-ai-and-scale-ml-workloads/ Pro tip: Code Whisperer is free to individuals!

Close readers of Swami’s blog post above will also notice updates to our latest ML infrastructure, such as the second edition of the Inferentia chip, inf2, and a Trainium instance with more bandwidth, trn1n.

Conclusion

In summary, the field of pretraining foundation models is filled with innovation. We have exciting advancements like LangChain and AWS's state-of-the-art solutions such as Amazon Bedrock and Titan, opening up vast possibilities in AI development. Open-source tools empower developers, and the focus on human-centered design remains crucial. As we embrace continuous learning and explore new generative methods, we anticipate significant progress in content creation and software development. By emphasizing customization, innovation, and responsiveness to user preferences, we stand on the cusp of fully unleashing the potential of foundation models, reshaping the landscape of AI applications. Keep an eye out for the thrilling journey ahead in the realm of AI.

Author Bio

Emily Webber is a Principal Machine Learning Specialist Solutions Architect at Amazon Web Services. She has assisted hundreds of customers on their journey to ML in the cloud, specializing in distributed training for large language and vision models. She mentors Machine Learning Solution Architects, authors countless feature designs for SageMaker and AWS, and guides the Amazon SageMaker product and engineering teams on best practices regarding machine learning and customers. Emily is widely known in the AWS community for a 16-video YouTube series featuring SageMaker with 160,000 views, plus a keynote at O’Reilly AI London 2019 on a novel reinforcement learning approach she developed for public policy.

Using LLM Chains in Rust

Alan Bernardo Palacio
12 Sep 2023
9 min read
Introduction

The llm-chain is a Rust library designed to make your experience with large language models (LLMs) smoother and more powerful. In this tutorial, we'll walk you through the steps of installing Rust, setting up a new project, and getting started with the versatile capabilities of LLM-Chain. This guide will break down the process step by step, using simple language, so you can confidently explore the potential of LLM-Chain in your projects.

Installation

Before we dive into the exciting world of LLM-Chain, let's start with the basics. To begin, you'll need to install Rust on your computer. By using the official Rust toolchain manager called rustup, you can ensure you have the latest version and easily manage your installations. We recommend having Rust version 1.65.0 or higher. If you encounter errors related to unstable features or dependencies requiring a newer Rust version, simply update your Rust version. Just follow the instructions provided on the rustup website to get Rust up and running.

With Rust now installed on your machine, let's set up a new project. This step is essential to create an organized space for your work with LLM-Chain. To do this, you'll use a simple command-line instruction. Open up your terminal and run the following command:

cargo new --bin my-llm-project

By executing this command, a new directory named "my-llm-project" will be created. This directory contains all the necessary files and folders for a Rust project.

Embracing the Power of LLM-Chain

Now that you have your Rust project folder ready, it's time to integrate the capabilities of LLM-Chain. This library simplifies your interaction with LLMs and empowers you to create remarkable applications. Adding LLM-Chain to your project is a breeze. Navigate to your project directory in the terminal and add the library:

cd my-llm-project
cargo add llm-chain

By running these commands, LLM-Chain will become a part of your project, and the configuration will be recorded in the "Cargo.toml" file.

LLM-Chain offers flexibility by supporting multiple drivers for different LLMs. For the purpose of simplicity and a quick start, we'll be using the OpenAI driver in this tutorial. You'll have the choice between the LLAMA driver, which runs a LLaMA LLM on your machine, and the OpenAI driver, which connects to the OpenAI API. To choose the OpenAI driver, execute this command:

cargo add llm-chain-openai

In the next section, we'll explore generating your very first LLM output using the OpenAI driver. So, let's move on to exploring sequential chains with Rust and uncovering the possibilities they hold with LLM-Chain.

Exploring Sequential Chains with Rust

In the realm of LLM-Chain, sequential chains empower you to orchestrate a sequence of steps where the output of each step seamlessly flows into the next. This hands-on section serves as your guide to crafting a sequential chain, expanding its capabilities with additional steps, and gaining insights into best practices and tips that ensure your success.

Let's kick things off by preparing our project environment. As we delve into creating sequential chains, one crucial prerequisite is the installation of tokio in your project. While this tutorial uses the full tokio package crate, remember that in production scenarios, it's recommended to be more selective about which features you install.
To set the stage, run the following command in your terminal:

cargo add tokio --features full

This step ensures that your project is equipped with the necessary tools to handle the intricate tasks of sequential chains. Before we continue, ensure that you've set your OpenAI API key in the OPENAI_API_KEY environment variable. Here's how:

export OPENAI_API_KEY="YOUR_OPEN_AI_KEY"

With your environment ready, let's look at the full implementation code. In this case, we will be implementing the use of chains to generate recommendations of cities to travel to, formatting them, and organizing the results throughout a series of steps:

use llm_chain::parameters;
use llm_chain::step::Step;
use llm_chain::traits::Executor as ExecutorTrait;
use llm_chain::{chains::sequential::Chain, prompt};
use llm_chain_openai::chatgpt::Executor;

#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new ChatGPT executor with default settings
    let exec = Executor::new()?;
    // Create a chain of steps with three prompts
    let chain: Chain = Chain::new(vec![
        // First step: ask for places to visit in the given city and country
        Step::for_prompt_template(
            prompt!("You are a bot for travel assistance research",
                "Find good places to visit in this city {{city}} in this country {{country}}. Include their name")
        ),
        // Second step: format the recommendations into bullet points. Notably, the text parameter takes the output of the previous prompt.
        Step::for_prompt_template(
            prompt!(
                "You are an assistant for managing social media accounts for a travel company",
                "Format the information into 5 bullet points for the most relevant places. \n--\n{{text}}")
        ),
        // Third step: summarize the previous output into a LinkedIn post for the company page, and sprinkle in some emojis for flair.
        Step::for_prompt_template(
            prompt!(
                "You are an assistant for managing social media accounts for a travel company",
                "Summarize this email into a LinkedIn post for the company page, and feel free to use emojis! \n--\n{{text}}")
        )
    ]);
    // Execute the chain with provided parameters
    let result = chain
        .run(
            // Create a Parameters object with key-value pairs for the placeholders
            parameters!("city" => "Rome", "country" => "Italy"),
            &exec,
        )
        .await
        .unwrap();
    // Display the result on the console
    println!("{}", result.to_immediate().await?.as_content());
    Ok(())
}

The provided code initiates a multi-step process using the llm_chain and llm_chain_openai libraries. First, it sets up a ChatGPT executor with default configurations. Next, it creates a chain of sequential steps, each designed to produce specific text outputs. The first step produces a travel recommendation, which includes information about places to visit in a particular city and country, with a Parameters object containing key-value pairs for placeholders like {{city}} and {{country}}. The second step condenses these recommendations, formatting the information into five bullet points and utilizing the text output from the previous step.
Lastly, the third step summarizes the formatted recommendations into a LinkedIn post for a travel company's page, adding emojis for extra appeal. The chain is executed with specified parameters, creating a Parameters object with key-value pairs for placeholders like "city" (set to "Rome") and "country" (set to "Italy"). The generated content is then displayed on the console. This code represents a structured workflow for generating travel-related content using ChatGPT.

Running the Code

Now it's time to compile and run the code. Execute the following command in your terminal:

cargo run

As the code executes, the sequential chain orchestrates the different prompts, generating content that flows through each step. We can see the results of the model as a bulleted list of travel recommendations.

Conclusion

The llm-chain Rust library serves as your gateway to accessing large language models (LLMs) within the Rust programming language. This tutorial has been your guide to uncovering the fundamental steps necessary to harness the versatile capabilities of LLM-Chain.

We began with the foundational elements, guiding you through the process of installing Rust and integrating llm-chain into your project using Cargo. We then delved into the practical application of LLM-Chain by configuring it with the OpenAI driver, emphasizing the use of sequential chains. This approach empowers you to construct sequences of steps, where each step's output seamlessly feeds into the next. As a practical example, we demonstrated how to create a travel recommendation engine capable of generating concise posts for various destinations, suitable for sharing on LinkedIn.

It's important to note that LLM-Chain offers even more possibilities for exploration. You can extend its capabilities by incorporating CPP models like LLaMA, or you can venture into the realm of map-reduce chains. With this powerful tool at your disposal, the potential for creative and practical applications is virtually limitless. Feel free to continue your exploration and unlock the full potential of LLM-Chain in your projects. See you in the next article.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn


Weaviate and PySpark for LLM Similarity Search

Alan Bernardo Palacio
12 Sep 2023
12 min read
Introduction

Weaviate is gaining popularity as a semantic graph database, while PySpark is a well-established data processing framework used for handling large datasets efficiently. The integration of Weaviate and Spark enables the processing of large volumes of data, which can be stored in unstructured blob storages like S3. This integration allows for batch processing to structure the data to suit specific requirements. Subsequently, it empowers users to perform similarity searches and build contexts for applications based on Large Language Models (LLMs).

In this article, we will explore how to integrate Weaviate and PySpark, with a particular emphasis on leveraging their capabilities for similarity searches using Large Language Models (LLMs). Before we delve into the integration of Weaviate and PySpark, let's start with a brief overview. We will begin by seamlessly importing a subset of the Sphere dataset, which contains a substantial 100k lines of data, into our newly initiated Spark Session. This dataset will provide valuable insights and nuances, enhancing our understanding of the collaboration between Weaviate and PySpark. Let's get started.

Preparing the Docker Compose Environment

Before we delve into the integration of Weaviate and PySpark, let's take a closer look at the components we'll be working with. In this scenario, we will utilize Docker Compose to deploy Spark, Jupyter, Weaviate, and the Transformers container in a local environment. The Transformers container will be instrumental in creating embeddings. To get started, we'll walk you through the process of setting up the Docker Compose environment, making it conducive for seamlessly integrating Weaviate and PySpark.

version: '3'
services:
  spark-master:
    image: bitnami/spark:latest
    hostname: spark-master
    environment:
      - INIT_DAEMON_STEP=setup_spark
  jupyter:
    build: .
    ports:
      - "8888:8888"
    volumes:
      - ./local_lake:/home/jovyan/work
      - ./notebooks:/home/jovyan/
    depends_on:
      - spark-master
    command: "start-notebook.sh --NotebookApp.token='' --NotebookApp.password=''"
  weaviate:
    image: semitechnologies/weaviate:latest
    restart: on-failure:0
    environment:
      QUERY_DEFAULTS_LIMIT: 20
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: "./data"
      DEFAULT_VECTORIZER_MODULE: text2vec-transformers
      ENABLE_MODULES: text2vec-transformers
      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
      CLUSTER_HOSTNAME: 'node1'
  t2v-transformers:
    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
    environment:
      ENABLE_CUDA: 0 # set to 1 to enable
      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDA
volumes:
  myvol:

This Docker Compose configuration sets up a few different services:

spark-master: This service uses the latest Bitnami Spark image. It sets the hostname to "spark-master" and defines an environment variable for initialization.

jupyter: This service is built from the current directory and exposes port 8888. It also sets up volumes to link the local "local_lake" directory to the working directory inside the container and the "notebooks" directory to the home directory of the container. It depends on the "spark-master" service and runs a command to start Jupyter Notebook with certain configurations.

weaviate: This service uses the latest Weaviate image.
It specifies some environment variables for configuration, like setting query defaults, enabling anonymous access, defining data persistence paths, and configuring vectorizers.

t2v-transformers: This service uses a specific image for transformers vector embedding creation. It also sets environment variables, including one for enabling CUDA if needed.

Additionally, there is a volume defined named "myvol" for potential data storage. This Docker Compose configuration essentially sets up an environment where Spark, Jupyter, Weaviate, and Transformers can work together, each with its specific configuration and dependencies.

Enabling Seamless Integration with the Spark Connector

The way in which Spark and Weaviate work together is through the Spark Connector. This connector serves as a bridge, allowing data to flow from Spark to Weaviate. It's especially important for tasks like Extract, Transform, Load (ETL) processes, where it allows processing of the data with Spark and then populating Weaviate vector databases. One of its key features is its ability to automatically figure out the correct data type in Spark based on your Weaviate schema, making data transfer more straightforward. Another feature is that you can choose to vectorize data as you send it to Weaviate, or you can provide existing vectors. By default, Weaviate generates document IDs for new documents, but you can also supply your own IDs within the data frame. These capabilities can all be configured as options within the Spark Connector.

To start integrating Spark and Weaviate, you'll need to install two important components: the weaviate-client Python package and the essential PySpark framework. You can easily get these dependencies by running the following command with pip3:

pip3 install pyspark weaviate-client

To get the Weaviate Spark Connector, we can execute the following command in your terminal, which will download the JAR file that is used by the Spark Session:

curl https://github.com/weaviate/spark-connector/releases/download/v1.2.8/spark-connector-assembly-1.2.8.jar --output spark-connector-assembly-1.2.8.jar

Keep in mind that Java 8+ and Scala 2.12 are prerequisites for a seamless integration experience, so please make sure that these components are installed on your system before proceeding. While here we demonstrate Spark's local operation using Docker, consider referring to the Apache Spark documentation or your cloud platform's resources for guidance on installing and deploying a Spark cluster in different environments, like EMR on AWS or Dataproc in GCP. Additionally, make sure to verify the compatibility of your chosen language runtime with your selected environment.
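If you later run the same code as a standalone job rather than from the bundled Jupyter container, the connector JAR can also be handed to Spark at submit time instead of through the session configuration; the script name below is a hypothetical placeholder for your own PySpark file:

spark-submit \
  --jars spark-connector-assembly-1.2.8.jar \
  load_sphere_to_weaviate.py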
In the next sections, we will dive into the practical implementation of the integration, showing the PySpark notebook that we can run in Jupyter with code snippets to guide us through each step of the implementation. In this case, we will be using the Sphere dataset – housing a robust 100k lines of data – in our Spark Session, and we will insert it into the running Weaviate instance, which will create embeddings by using the Transformers container.

Initializing the Spark Session and Loading Data

To begin, we initialize the Spark Session using the SparkSession.builder module. This code snippet configures the session with the necessary settings, including the specification of spark-connector-assembly-1.2.8.jar – the Weaviate Spark Connector JAR file. We set the session's master to local[*] and define the application name as weaviate. The .getOrCreate() function ensures the session is created or retrieved as needed. To maintain clarity, we suppress log messages with a level of "WARN."

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.config(
        "spark.jars",
        "spark-connector-assembly-1.2.8.jar",  # specify the spark connector JAR
    )
    .master("local[*]")
    .appName("weaviate")
    .getOrCreate()
)
spark.sparkContext.setLogLevel("WARN")

Remember that in this case, the connector needs to be in the proper location to be utilized by the Spark Session. Now we can proceed to load the dataset using the .load() function, specifying the format as JSON. This command fetches the data into a DataFrame named df, which is then displayed using .show().

df = spark.read.load("sphere.100k.jsonl", format="json")
df.show()

The next steps involve preparing the data for the integration with Weaviate. We first drop the vector column from the DataFrame, as it's not needed for our integration purpose.

df = df.drop(*["vector"])
df.show()

To interact with Weaviate, we use the weaviate Python package. The code initializes the Weaviate client, specifying the base URL and setting timeout configurations. We then delete any existing schema and proceed to create a new class named Sphere with specific properties, including raw, sha, title, and url. The vectorizer is set to text2vec-transformers.

import weaviate
import json

# initiate the Weaviate client
client = weaviate.Client("http://weaviate:8080")
client.timeout_config = (3, 200)

# empty schema and create new schema
client.schema.delete_all()
client.schema.create_class(
    {
        "class": "Sphere",
        "properties": [
            {
                "name": "raw",
                "dataType": ["string"]
            },
            {
                "name": "sha",
                "dataType": ["string"]
            },
            {
                "name": "title",
                "dataType": ["string"]
            },
            {
                "name": "url",
                "dataType": ["string"]
            },
        ],
        "vectorizer": "text2vec-transformers"
    }
)

Now we can start the process of writing data from Spark to Weaviate. The code renames the id column to uuid and uses the .write.format() function to specify the Weaviate format for writing. Various options, such as batchSize, scheme, host, id, and className, can be set to configure the write process. The .mode("append") ensures that only the append write mode is currently supported.
Additionally, both batch operations and streaming writes are supported.

df.limit(1500).withColumnRenamed("id", "uuid").write.format("io.weaviate.spark.Weaviate") \
    .option("batchSize", 200) \
    .option("scheme", "http") \
    .option("host", "weaviate:8080") \
    .option("id", "uuid") \
    .option("className", "Sphere") \
    .mode("append").save()

Querying Weaviate for Data Insights

Now we can conclude this hands-on section by showcasing how to query Weaviate for data insights. The code snippet demonstrates querying the Sphere class for title and raw properties, using the .get() and .with_near_text() functions. The concept parameter includes animals, and additional information like distance is requested. A limit of 5 results is set using .with_limit(5), and the query is executed with .do().

client.query\
    .get("Sphere", ["title", "raw"])\
    .with_near_text({
        "concepts": ["animals"]
    })\
    .with_additional(["distance"])\
    .with_limit(5)\
    .do()

These guided steps provide a comprehensive view of the integration process, showcasing the seamless data transfer from Spark to Weaviate and enabling data analysis with enhanced insights.

Conclusion

In conclusion, the integration of Weaviate and PySpark represents the convergence of technologies to offer innovative solutions for data analysis and exploration. By integrating the capabilities of Weaviate, a semantic graph database, and PySpark, a versatile data processing framework, we enable exciting new applications to query and extract insights from our data.

Throughout this article, we started by explaining the Docker Compose environment, orchestrating the components, and introducing the Spark Connector; in doing so, we set the stage for efficient data flow and analysis. The Spark Connector enables the transfer of data from Spark to Weaviate, and its flexibility in adapting to various data types and schema configurations showcased its significance in ETL processes and data interaction. Next, we continued with a hands-on exploration that guided us through the integration process, offering practical insights into initializing the Spark Session, loading and preparing data, configuring the Weaviate client, and orchestrating seamless data transfer.

In essence, the integration of Weaviate and PySpark not only simplifies data transfer but also unlocks enhanced data insights and analysis. This collaboration underscores the transformative potential of harnessing advanced technologies to extract meaningful insights from large datasets. As the realm of data analysis continues to evolve, the integration of Weaviate and PySpark emerges as a promising avenue for innovation and exploration.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn


LLM-powered Chatbots for Financial Queries

Alan Bernardo Palacio
12 Sep 2023
27 min read
Introduction

In the ever-evolving realm of digital finance, the convergence of user-centric design and cutting-edge AI technologies is pushing the boundaries of innovation. But with the influx of data and queries, how can we better serve users and provide instantaneous, accurate insights? Within this context, Large Language Models (LLMs) have emerged as a revolutionary tool, providing businesses and developers with powerful capabilities. This hands-on article will walk you through the process of leveraging LLMs to create a chatbot that can query real-time financial data extracted from the NYSE to address users' queries in real time about the current market state. We will dive into the world of LLMs, explore their potential, and understand how they seamlessly integrate with databases using LangChain. Furthermore, we'll fetch real-time data using the yfinance package, offering the chatbot the ability to answer questions using current data.

In this comprehensive tutorial, you'll gain proficiency in diverse aspects of modern software development. You'll first delve into the realm of database interactions, mastering the setup and manipulation of a MySQL database to store essential financial ticker data. Unveil the intricate synergy between Large Language Models (LLMs) and SQL through the innovative LangChain tool, which empowers you to bridge natural language understanding and database operations seamlessly. Moving forward, you'll explore the dynamic fusion of Streamlit and LLMs as you demystify the mechanics behind crafting a user-friendly frontend. Witness the transformation of your interface using OpenAI's Davinci model, enhancing user engagement with its profound knowledge. As your journey progresses, you'll embrace the realm of containerization, ensuring your application's agility and scalability by harnessing the power of Docker. Grasp the nuances of constructing a potent Dockerfile and orchestrating dependencies, solidifying your grasp on holistic software development practices.

By the end of this guide, readers will be equipped with the knowledge to design, implement, and deploy an intelligent, finance-focused chatbot. This isn't just about blending frontend and backend technologies; it's about crafting a digital assistant ready to revolutionize the way users interact with financial data. Let's dive in!

The Power of Containerization with Docker Compose

Navigating the intricacies of modern software deployment is simplified through the strategic implementation of Docker Compose. This orchestration tool plays a pivotal role in harmonizing multiple components within a local environment, ensuring they collaborate seamlessly. Docker allows us to deploy multiple components seamlessly in a local environment. In our journey, we will use docker-compose to harmonize various components, including MySQL for data storage, a Python script as a data fetcher for financial insights, and a Streamlit-based web application that bridges the gap between the user and the chatbot.

Our deployment landscape consists of several interconnected components, each contributing to the finesse of our intelligent chatbot. The cornerstone of this orchestration is the docker-compose.yml file, a blueprint that encapsulates the deployment architecture, coordinating the services to deliver a holistic user experience. With Docker Compose, we can efficiently and consistently deploy multiple interconnected services.
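Once the docker-compose.yml described next is in place at the project root, bringing the whole stack up is a single command; the key value here is a placeholder, and older Docker installations may expose the same functionality through the legacy docker-compose binary:

export OPENAI_API_KEY="sk-..."   # consumed by the app service defined below
docker compose up --build        # builds the images and starts db, ticker-fetcher, and app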
Let's dive into the structure of our docker-compose.yml:

version: '3'
services:
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=root_password
      - MYSQL_DATABASE=tickers_db
      - MYSQL_USER=my_user   # Replace with your desired username
      - MYSQL_PASSWORD=my_pass  # Replace with your desired password
    volumes:
      - ./db/setup.sql:/docker-entrypoint-initdb.d/setup.sql
    ports:
      - "3306:3306"  # Maps port 3306 in the container to port 3306 on the host
  ticker-fetcher:
    image: ticker/python
    build:
      context: ./ticker_fetcher
    depends_on:
      - db
    environment:
      - DB_USER=my_user   # Must match the MYSQL_USER from above
      - DB_PASSWORD=my_pass   # Must match the MYSQL_PASSWORD from above
      - DB_NAME=tickers_db
  app:
    build:
      context: ./app
    ports:
      - 8501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - ticker-fetcher

Contained within the composition are three distinctive services:

db: A MySQL database container, configured with environment variables for establishing a secure and efficient connection. This container is the bedrock upon which our financial data repository, named tickers_db, is built. A volume is attached to import a setup SQL script, enabling rapid setup.

ticker-fetcher: This service houses the heart of our real-time data acquisition system. Crafted around a custom Python image, it plays the crucial role of fetching the latest stock information from Yahoo Finance. It relies on the db service to persistently store the fetched data, ensuring that our chatbot's insights reflect real-time data.

app: The crown jewel of our user interface is the Streamlit application, which bridges the gap between users and the chatbot. This container grants users access to OpenAI's LLM model. It harmonizes with the ticker-fetcher service to ensure that the data presented to users is not only insightful but also dynamic.

Docker Compose's brilliance lies in its capacity to encapsulate these services within isolated, reproducible containers. While Docker inherently fosters isolation, Docker Compose takes it a step further by ensuring that each service plays its designated role in perfect sync. The docker-compose.yml configuration file serves as the conductor's baton, ensuring each service plays its part with precision and finesse. As you journey deeper into the deployment architecture, you'll uncover the intricate mechanisms powering the ticker-fetcher container, ensuring a continuous flow of fresh financial data. Through the lens of Docker Compose, the union of user-centric design, cutting-edge AI, and streamlined deployment becomes not just a vision, but a tangible reality poised to transform the way we interact with financial data.

Enabling Real-Time Financial Data Acquisition

At the core of our innovative architecture lies the pivotal component dedicated to real-time financial data acquisition. This essential module operates as the engine that drives our chatbot's ability to deliver up-to-the-minute insights from the ever-fluctuating financial landscape. Crafted as a dedicated Docker container, this module is powered by a Python script that, through the yfinance package, retrieves the latest stock information directly from Yahoo Finance.
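At its core, the data acquisition boils down to a single yfinance call; the short sketch below mirrors the tickers and one-minute interval used in the fetcher script shown next, and is only meant to illustrate the shape of the data before it is written to MySQL:

import yfinance as yf

# Pull the last trading day of one-minute bars for the tickers we track
tickers = ["AAPL", "GOOGL"]
data = yf.download(tickers, period="1d", interval="1m", group_by="ticker")

# Inspect the most recent bar for one ticker before it is inserted into tickers_db
print(data["AAPL"].tail(1))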
The result is a continuous stream of the freshest financial intelligence, ensuring that our chatbot remains armed with the most current and accurate market data.Our Python script, fetcher.py, looks as follows:import os import time import yfinance as yf import mysql.connector import pandas_market_calendars as mcal import pandas as pd import traceback DB_USER = os.environ.get('DB_USER') DB_PASSWORD = os.environ.get('DB_PASSWORD') DB_NAME = 'tickers_db' DB_HOST = 'db' DB_PORT = 3306 def connect_to_db():    return mysql.connector.connect(        host=os.getenv("DB_HOST", "db"),        port=os.getenv("DB_PORT", 3306),        user=os.getenv("DB_USER"),        password=os.getenv("DB_PASSWORD"),        database=os.getenv("DB_NAME"),    ) def wait_for_db():    while True:        try:            conn = connect_to_db()            conn.close()            return        except mysql.connector.Error:            print("Unable to connect to the database. Retrying in 5 seconds...")            time.sleep(5) def is_market_open():    # Get the NYSE calendar    nyse = mcal.get_calendar('NYSE')    # Get the current timestamp and make it timezone-naive    now = pd.Timestamp.now(tz='UTC').tz_localize(None)    print("Now its:",now)    # Get the market open and close times for today    market_schedule = nyse.schedule(start_date=now, end_date=now)    # If the market isn't open at all today (e.g., a weekend or holiday)    if market_schedule.empty:        print('market is empty')        return False    # Today's schedule    print("Today's schedule")    # Check if the current time is within the trading hours    market_open = market_schedule.iloc[0]['market_open'].tz_localize(None)    market_close = market_schedule.iloc[0]['market_close'].tz_localize(None)    print("market_open",market_open)    print("market_close",market_close)    market_open_now = market_open <= now <= market_close    print("Is market open now:",market_open_now)    return market_open_now def chunks(lst, n):    """Yield successive n-sized chunks from lst."""    for i in range(0, len(lst), n):        yield lst[i:i + n] if __name__ == "__main__":    wait_for_db()    print("-"*50)    tickers = ["AAPL", "GOOGL"]  # Add or modify the tickers you want      print("Perform backfill once")    # historical_backfill(tickers)    data = yf.download(tickers, period="5d", interval="1m", group_by="ticker", timeout=10) # added timeout    print("Data fetched from yfinance.")    print("Head")    print(data.head().to_string())    print("Tail")    print(data.head().to_string())    print("-"*50)    print("Inserting data")    ticker_data = []    for ticker in tickers:        for idx, row in data[ticker].iterrows():            ticker_data.append({                'ticker': ticker,                'open': row['Open'],                'high': row['High'],                'low': row['Low'],                'close': row['Close'],                'volume': row['Volume'],                'datetime': idx.strftime('%Y-%m-%d %H:%M:%S')            })    # Insert data in bulk    batch_size=200    conn = connect_to_db()    cursor = conn.cursor()    # Create a placeholder SQL query    query = """INSERT INTO ticker_history (ticker, open, high, low, close, volume, datetime)               VALUES (%s, %s, %s, %s, %s, %s, %s)"""    # Convert the data into a list of tuples    data_tuples = []    for record in ticker_data:        for key, value in record.items():            if pd.isna(value):                record[key] = None        data_tuples.append((record['ticker'], record['open'], record['high'], 
record['low'],                            record['close'], record['volume'], record['datetime']))    # Insert records in chunks/batches    for chunk in chunks(data_tuples, batch_size):        cursor.executemany(query, chunk)        print(f"Inserted batch of {len(chunk)} records")    conn.commit()    cursor.close()    conn.close()    print("-"*50)    # Wait until starting to insert live values    time.sleep(60)    while True:        if is_market_open():            print("Market is open. Fetching data.")            print("Fetching data from yfinance...")            data = yf.download(tickers, period="1d", interval="1m", group_by="ticker", timeout=10) # added timeout            print("Data fetched from yfinance.")            print(data.head().to_string())                      ticker_data = []            for ticker in tickers:                latest_data = data[ticker].iloc[-1]                ticker_data.append({                    'ticker': ticker,                    'open': latest_data['Open'],                    'high': latest_data['High'],                    'low': latest_data['Low'],                    'close': latest_data['Close'],                    'volume': latest_data['Volume'],                    'datetime': latest_data.name.strftime('%Y-%m-%d %H:%M:%S')                })                # Insert the data                conn = connect_to_db()                cursor = conn.cursor()                print("Inserting data")                total_tickers = len(ticker_data)                for record in ticker_data:                    for key, value in record.items():                        if pd.isna(value):                            record[key] = "NULL"                    query = f"""INSERT INTO ticker_history (ticker, open, high, low, close, volume, datetime)                                VALUES (                                    '{record['ticker']}',{record['open']},{record['high']},{record['low']},{record['close']},{record['volume']},'{record['datetime']}')"""                    print(query)                    cursor.execute(query)                print("Data inserted")                conn.commit()                cursor.close()                conn.close()            print("Inserted data, waiting for the next batch in one minute.")            print("-"*50)            time.sleep(60)        else:            print("Market is closed. Waiting...")            print("-"*50)            time.sleep(60)  # Wait for 60 seconds before checking againWithin its code, the script seamlessly navigates through a series of well-defined stages:Database Connectivity: The script initiates by establishing a secure connection to our MySQL database. With the aid of the connect_to_db() function, a connection is created while the wait_for_db() mechanism guarantees the script's execution occurs only once the database service is fully primed.Market Schedule Evaluation: Vital to the script's operation is the is_market_open() function, which determines the market's operational status. By leveraging the pandas_market_calendars package, this function ascertains whether the New York Stock Exchange (NYSE) is currently active.Data Retrieval and Integration: During its maiden voyage, fetcher.py fetches historical stock data from the past five days for a specified list of tickers—typically major entities such as AAPL and GOOGL. This data is meticulously processed and subsequently integrated into the tickers_db database. 
During subsequent cycles, while the market is live, the script periodically procures real-time data at one-minute intervals.

Batched Data Injection: Handling substantial volumes of stock data necessitates an efficient approach. To address this, the script ingeniously partitions the data into manageable chunks and employs batched SQL INSERT statements to populate our database. This technique ensures optimal performance and streamlined data insertion.

Now let's discuss the Dockerfile that defines this container. The Dockerfile is the blueprint for building the ticker-fetcher container. It dictates how the Python environment will be set up inside the container.

FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "-u", "fetcher.py"]

Base Image: We start with a basic Python 3 image.
Working Directory: The working directory inside the container is set to /app.
Dependencies Installation: After copying the requirements.txt file into our container, the RUN command installs all necessary Python packages.
Starting Point: The script's entry point, fetcher.py, is set as the command to run when the container starts.

A list of Python packages needed to run our fetcher script:

mysql-connector-python
yfinance
pandas-market-calendars

mysql-connector-python: Enables the script to connect to our MySQL database.
yfinance: Fetches stock data from Yahoo Finance.
pandas-market-calendars: Determines the NYSE's operational schedule.

As a symphony of technology, the ticker-fetcher container epitomizes precision and reliability, acting as the conduit that channels real-time financial data into our comprehensive architecture. Through this foundational component, the chatbot's prowess in delivering instantaneous, accurate insights comes to life, representing a significant stride toward revolutionizing the interaction between users and financial data.

With a continuously updating financial database at our disposal, the next logical step is to harness the potential of Large Language Models. The subsequent section will explore how we integrate LLMs using LangChain, allowing our chatbot to transform raw stock data into insightful conversations.

Leveraging Large Language Models with SQL using LangChain

The beauty of modern chatbot systems lies in the synergy between the vast knowledge reservoirs of Large Language Models (LLMs) and real-time, structured data from databases. LangChain is a bridge that efficiently connects these two worlds, enabling seamless interactions between LLMs and databases such as SQL.

The marriage between LLMs and SQL databases opens a world of possibilities. With LangChain as the bridge, LLMs can effectively query databases, offering dynamic responses based on stored data. This section delves into the core of our setup, the utils.py file. Here, we knit together the MySQL database with our Streamlit application, defining the agent that stands at the forefront of database interactions.

LangChain is a library designed to facilitate the union of LLMs with structured databases. It provides utilities and agents that can direct LLMs to perform database operations via natural language prompts.
Instead of a user having to craft SQL queries manually, they can simply ask a question in plain English, which the LLM interprets and translates into the appropriate SQL query.

Below, we present the code of utils.py, which brings our LLM-database interaction to life:

from langchain import PromptTemplate, FewShotPromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType

# Database credentials
DB_USER = 'my_user'
DB_PASSWORD = 'my_pass'
DB_NAME = 'tickers_db'
DB_HOST = 'db'
DB_PORT = 3306

mysql_uri = f"mysql+mysqlconnector://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}"

# Initialize the SQLDatabase and SQLDatabaseToolkit
db = SQLDatabase.from_uri(mysql_uri)
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))

# Create SQL agent
agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

# Modified the generate_response function to now use the SQL agent
def query_db(prompt):
    return agent_executor.run(prompt)

Here's a breakdown of the key components:

Database Credentials: These credentials are required to connect our application to the tickers_db database. The MySQL URI string represents this connection.
SQLDatabase Initialization: Using the SQLDatabase.from_uri method, LangChain initializes a connection to our database. The SQLDatabaseToolkit provides a set of tools that help our LLM interact with the SQL database.
Creating the SQL Agent: The SQL agent, in our case a ZERO_SHOT_REACT_DESCRIPTION type, is the main executor that takes a natural language prompt and translates it into SQL. It then fetches the data and returns it in a comprehensible manner. The agent uses the OpenAI model (an instance of LLM) and the aforementioned toolkit to accomplish this.
The query_db function: This is the interface to our SQL agent. Upon receiving a prompt, it triggers the SQL agent to run and then returns the response.

The architecture is now in place: on one end, we have a constant stream of financial data being fetched and stored in tickers_db. On the other, we have an LLM ready to interpret and answer user queries. The user might ask, "What was the closing price of AAPL yesterday?" and our system will seamlessly fetch this from the database and provide a well-crafted response, all thanks to LangChain.

In the forthcoming section, we'll discuss how we present this powerful capability through an intuitive interface using Streamlit. This will enable end-users, irrespective of their technical proficiency, to harness the combined might of LLMs and structured databases, all with a simple chat interface.

Building an Interactive Chatbot with Streamlit, OpenAI, and LangChain

In the age of user-centric design, chatbots represent an ideal solution for user engagement. But while typical chatbots can handle queries, imagine a chatbot powered by both Streamlit for its front end and a Large Language Model (LLM) for its backend intelligence. This powerful union allows us to provide dynamic, intelligent responses, leveraging our stored financial data to answer user queries. The grand finale of our setup is the Streamlit application.
This intuitive web interface allows users to converse with our chatbot in natural language, making database queries feel like casual chats. Behind the scenes, it leverages the power of the SQL agent, tapping into the real-time financial data stored in our database, and presenting users with instant, accurate insights.Let's break down our chatbot's core functionality, designed using Streamlit:import streamlit as st from streamlit_chat import message from streamlit_extras.colored_header import colored_header from streamlit_extras.add_vertical_space import add_vertical_space from utils import * # Now the Streamlit app # Sidebar contents with st.sidebar:    st.title('Financial QnA Engine')    st.markdown('''    ## About    This app is an LLM-powered chatbot built using:    - Streamlit    - Open AI Davinci LLM Model    - LangChain    - Finance    ''')    add_vertical_space(5)    st.write('Running in Docker!') # Generate empty lists for generated and past. ## generated stores AI generated responses if 'generated' not in st.session_state:    st.session_state['generated'] = ["Hi, how can I help today?"] ## past stores User's questions if 'past' not in st.session_state:    st.session_state['past'] = ['Hi!'] # Layout of input/response containers input_container = st.container() colored_header(label='', description='', color_name='blue-30') response_container = st.container() # User input ## Function for taking user provided prompt as input def get_text():    input_text = st.text_input("You: ", "", key="input")    return input_text ## Applying the user input box with input_container:    user_input = get_text() # Response output ## Function for taking user prompt as input followed by producing AI generated responses def generate_response(prompt):    response = query_db(prompt)    return response ## Conditional display of AI generated responses as a function of user provided prompts with response_container:    if user_input:        response = generate_response(user_input)        st.session_state.past.append(user_input)        st.session_state.generated.append(response)          if st.session_state['generated']:        for i in range(len(st.session_state['generated'])):            message(st.session_state['past'][i], is_user=True, key=str(i) + '_user',avatar_style='identicon',seed=123)            message(st.session_state["generated"][i], key=str(i),avatar_style='icons',seed=123)The key features of the application are in general:Sidebar Contents: The sidebar provides information about the chatbot, highlighting the technologies used to power it. With the aid of streamlit-extras, we've added vertical spacing for visual appeal.User Interaction Management: Our chatbot uses Streamlit's session_state to remember previous user interactions. The 'past' list stores user queries, and 'generated' stores the LLM-generated responses.Layout: With Streamlit's container feature, the layout is neatly divided into areas for user input and the AI response.User Input: Users interact with our chatbot using a simple text input box, where they type in their query.AI Response: Using the generate_response function, the chatbot processes user input, fetching data from the database using LangChain and LLM to generate an appropriate response. These responses, along with past interactions, are then dynamically displayed in the chat interface using the message function from streamlit_chat.Now in order to ensure portability and ease of deployment, our application is containerized using Docker. 
Below is the Dockerfile that aids in this process:

FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["streamlit", "run", "streamlit_app.py"]

This Dockerfile:
Starts with a base Python 3 image.
Sets the working directory in the container to /app.
Copies the requirements.txt into the container and installs the necessary dependencies.
Finally, it copies the rest of the application and sets the default command to run the Streamlit application.

Our application depends on several Python libraries which are specified in requirements.txt:

streamlit
streamlit-chat
streamlit-extras
mysql-connector-python
openai==0.27.8
langchain==0.0.225

These include:
streamlit: For the main frontend application.
streamlit-chat & streamlit-extras: For enhancing the chat interface and adding utility functions like colored headers and vertical spacing.
mysql-connector-python: To interact with our MySQL database.
openai: For accessing OpenAI's Davinci model.
langchain: To bridge the gap between LLMs and our SQL database.

Our chatbot application now combines a user-friendly frontend interface with the immense knowledge and adaptability of LLMs, and we can verify that the values provided by the chatbot actually reflect the data in the database. In the next section, we'll dive deeper into deployment strategies, ensuring users worldwide can benefit from our financial Q&A engine.

Conclusion

As we wrap up this comprehensive guide, having gone from raw financial data to a fully-fledged, AI-powered chatbot, we've traversed a myriad of technologies, frameworks, and paradigms to create something truly exceptional.

We initiated our expedition by setting up a MySQL database brimming with financial data. But the real magic began when we introduced LangChain to the mix, establishing a bridge between human-like language understanding and the structured world of databases. This amalgamation ensured that our application could pull relevant financial insights on the fly, using natural language queries. Streamlit stood as the centerpiece of our user interaction. With its dynamic and intuitive design capabilities, we crafted an interface where users could communicate effortlessly. Marrying this with the vast knowledge of OpenAI's Davinci LLM, our chatbot could comprehend, reason, and respond, making financial inquiries a breeze. To ensure that our application was both robust and portable, we harnessed the power of Docker. This ensured that irrespective of the environment, our chatbot was always ready to assist, without the hassles of dependency management.

We've showcased just the tip of the iceberg. The bigger picture is captivating, revealing countless potential applications. Deploying such tools in firms could help clients get real-time insights into their portfolios. Integrating such systems with personal finance apps can help individuals make informed decisions about investments, savings, or even daily expenses. The true potential unfolds when you, the reader, take these foundational blocks and experiment, innovate, and iterate. The amalgamation of LLMs, databases, and interactive interfaces like Streamlit opens a realm of possibilities limited only by imagination.

As we conclude, remember that in the world of technology, learning is a continuous journey. What we've built today is a stepping stone. Challenge yourself, experiment with new data sets, refine the user experience, or even integrate more advanced features. The horizon is vast, and the opportunities, endless.
Embrace this newfound knowledge and craft the next big thing in financial technology. After all, every revolution starts with a single step, and today, you've taken many. Happy coding!

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn

Efficient LLM Querying with LMQL

Alan Bernardo Palacio
12 Sep 2023
14 min read
Introduction

In the world of natural language processing, Large Language Models (LLMs) have proven to be highly successful at a variety of language-based tasks, such as machine translation, text summarization, question answering, reasoning, and code generation. LLMs like ChatGPT, GPT-4, and others have demonstrated outstanding performance by predicting the next token in a sequence based on input prompts. Users interact with these models by providing language instructions or examples to perform various downstream tasks. However, to achieve optimal results or adapt LLMs for specific tasks, complex and task-specific programs must be implemented, often requiring ad-hoc interactions and deep knowledge of the model's internals.

In this article, we discuss LMQL, a framework for Language Model Programming (LMP) that allows users to specify complex interactions, control flow, and constraints using a declarative programming language similar to SQL, without needing deep knowledge of the LLM's internals. LMQL supports high-level, logical constraints, and users can express a wide range of prompting techniques concisely, reducing the need for ad-hoc interactions and manual work to steer model generation, avoiding costly re-querying, and guiding the text generation process according to their specific criteria. Let's start.

Overview of Large Language Models

Language models (LMs) operate on sequences of tokens, where tokens are discrete elements that represent words or sub-words in a text. The process involves using a tokenizer to map input words to tokens, and then a language model predicts the probabilities of possible next tokens based on the input sequence. Various decoding methods are used to turn the language model's predictions into an output sequence of tokens, among which we can name:

Decoding Methods:
Greedy decoding: Select the token with the highest probability at each step.
Sampling: Randomly sampling tokens based on the predicted probabilities.
Full decoding: Enumerating all possible sequences and selecting the one with the highest probability (computationally expensive).
Beam search: Maintaining a set of candidate sequences and refining them by predicting the next token.
Masked Decoding: In some cases, certain tokens can be ruled out based on a mask that indicates which tokens are viable. Decoding is then performed on the remaining set of tokens.

Few-Shot Prompting: LMs can be trained on broad text-sequence prediction datasets and then provided with context in the form of examples for specific tasks. This approach allows LMs to perform downstream tasks without task-specific training.

Multi-Part Prompting: LMs are used not only for simple prompt completion but also as reasoning engines integrated into larger programs. Various LM programming schemes explore compositional reasoning, such as iterated decompositions, meta prompting, tool use, and composition of multiple prompts.

It is also worth noting that, for beam search and sampling, there is a parameter named temperature which we can use to control the diversity of the output. These techniques enable LMs to be versatile and perform a wide range of tasks without requiring task-specific training, making them powerful multi-task reasoners.

Asking the Right Questions

While LLMs can be prompted with examples or instructions, using them effectively and adapting to new models often demands a deep understanding of their internal workings, along with the use of vendor-specific libraries and implementations.
Constrained decoding to limit text generation to legal words or phrases can be challenging. Many advanced prompting methods require complex interactions and control flows between the LLM and the user, leading to manual work and restricting the generality of implementations. Additionally, generating complete sequences from LLMs may require multiple calls and become computationally expensive, resulting in high usage costs per query in pay-to-use APIs. Generally, the challenges associated with creating proper prompts for LLMs are:

Interaction Challenge: One challenge in LM interaction is the need for multiple manual interactions during the decoding process. For example, in meta prompting, where the language model is asked to expand the prompt and then provide an answer, the current approach requires inputting the prompt partially, invoking the LM, extracting information, and manually completing the sequence. This manual process may involve human intervention or several API calls, making joint optimization of template parameters difficult and limiting automated optimization possibilities.

Constraints & Token Representation: Another issue arises when considering completions generated by LMs. Sometimes, LMs may produce long, ongoing sequences of text that do not adhere to desired constraints or output formats. Users often have specific constraints for the generated text, which may be violated by the LM. Expressing these constraints in terms of human-understandable concepts and logic is challenging, and existing methods require considerable manual implementation effort and model-level understanding of decoding procedures, tokenization, and vocabulary.

Efficiency and Cost Challenge: Efficiency and performance remain significant challenges in LM usage. While efforts have been made to improve the inference step in modern LMs, they still demand high-end GPUs for reasonable performance. This makes practical usage costly, particularly when relying on hosted models running in the cloud with paid APIs. The computational and financial expenses associated with frequent LM querying can become prohibitive.

Addressing these challenges, Language Model Programming and constraints offer new optimization opportunities. By defining behavior and limiting the search space, the number of LM invocations can be reduced. In this context, the cost of validation, parsing, and mask generation becomes negligible compared to the significant cost of a single LM call.

So the question arises: how can we overcome the challenges of implementing complex interactions and constraints with LLMs while reducing computational costs and retaining or improving accuracy on downstream tasks?
This approach abstracts away tokenization, implementation, and architecture details, making it more portable and easier to use across different LLMs.With LMQL, users can express a wide range of prompting techniques concisely, reducing the need for ad-hoc interactions and manual work. The language supports high-level, logical constraints, enabling users to steer model generation and avoid costly re-querying and validation. By guiding the text generation process according to specific criteria, users can achieve the desired output with fewer iterations and improved efficiency.Moreover, LMQL leverages evaluation semantics to automatically generate token masks for LM decoding based on user-specified constraints. This optimization reduces inference cost by up to 80%, resulting in significant latency reduction and lower computational expenses, particularly beneficial for pay-to-use APIs.LMQL ddresses certain challenges in LM interaction and usage which are namely.Overcoming Manual Interaction: LMQL simplifies the prompt and eliminates the need for manual interaction during the decoding process. It achieves this by allowing the use of variables, represented within square brackets, which store the answers obtained from the language model. These variables can be referenced later in the query, avoiding the need for manual extraction and input. By employing LMQL syntax, the interaction process becomes more automated and efficient.Constraints on Variable Parts: To address issues related to long and irrelevant outputs, LMQL introduces constraints on the variable parts of LM interaction. These constraints allow users to specify word and phrase limitations for the generated text. LMQL ensures that the decoded tokens for variables meet these constraints during the decoding process. This provides more control over the generated output and ensures that it adheres to user-defined restrictions.Generalization of Multi-Part Prompting: Language Model Programming through LMQL generalizes various multi-part prompting approaches discussed earlier. It streamlines the process of trying different values for variables by automating the selection process. Users can set constraints on variables, which are then applied to multiple inputs without any human intervention. Once developed and tested, an LMQL query can be easily applied to different inputs in an unsupervised manner, eliminating the need for manual trial and error.Efficient Execution: LMQL offers efficiency benefits over manual interaction. The constraints and scripting capabilities in LMQL are applied eagerly during decoding, reducing the number of times the LM needs to be invoked. This optimized approach results in notable time and cost savings, especially when using hosted models in cloud environments.The LMQL syntax involves components such as the decoder, the actual query, the model to query, and the constraints. The decoder specifies the decoding procedure, which can include argmax, sample, or beam search. LMQL allows for constraints on the generated text using Python syntax, making it more user-friendly and easily understandable. Additionally, the distribution instruction allows users to augment the returned result with probability distributions, which is useful for tasks like sentiment analysis.Using LMQL with PythonLMQL can be utilized in various ways - as a standalone language, in the Playground, or even as a Python library being the latter what we will demonstrate now. 
Integrating LMQL into Python projects allows users to streamline their code and incorporate LMQL queries seamlessly. Let's explore how to use LMQL as a Python library and understand some examples.

To begin, make sure you have LMQL and LangChain installed by running the following command:

!pip install lmql==0.0.6.6 langchain==0.0.225

You can then define and execute LMQL queries within Python using a simple approach. Decorate a Python function with the lmql.query decorator, providing the query code as a multi-line string. The decorated function will automatically be compiled into an LMQL query, and its return value will be the result of the LMQL query. Here's an example code snippet demonstrating this:

import lmql
import aiohttp
import os

os.environ['OPENAI_API_KEY'] = '<your-openai-key>'

@lmql.query
async def hello():
    '''lmql
    argmax
        "Hello[WHO]"
    from
        "openai/text-ada-001"
    where
        len(TOKENS(WHO)) < 10
    '''

print(await hello())

LMQL provides a fully asynchronous API that enables running multiple LMQL queries in parallel. By declaring functions as async with @lmql.query, you can use await to execute the queries concurrently. The code below demonstrates how to look up information from Wikipedia and incorporate it into an LMQL prompt dynamically:

async def look_up(term):
    # Looks up term on Wikipedia
    url = f"https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles={term}&origin=*"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            # Get the first sentence on the first page
            page = (await response.json())["query"]["pages"]
            return list(page.values())[0]["extract"].split(".")[0]

@lmql.query
async def greet(term):
    '''
    argmax
        """Greet {term} ({await look_up(term)}):
        Hello[WHO]
        """
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(WHO, "\\n")
    '''

print((await greet("Earth"))[0].prompt)

As an alternative to @lmql.query, you can use lmql.query(...) as a function that compiles a provided string of LMQL code into a Python function:

q = lmql.query('argmax "Hello[WHO]" from "openai/text-ada-001" where len(TOKENS(WHO)) < 10')
await q()

LMQL queries can also be combined with langchain's Chain components, which allows for sequential prompting using multiple queries. The snippet below sets up the langchain side of such a pipeline:

from langchain import LLMChain, PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (ChatPromptTemplate, HumanMessagePromptTemplate)
from langchain.llms import OpenAI

# Set up the LM to be used by langchain
llm = OpenAI(temperature=0.9)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)

# Run the chain
chain.run("colorful socks")

Lastly, by treating LMQL queries as Python functions, you can easily build pipelines by chaining functions together.
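As a small illustration of such chaining, the sketch below feeds the answer of one query into the prompt of a second one. The function names and prompts are invented for this example, and the result-access pattern (indexing the returned list and reading the .variables dictionary) follows the API used in the examples above.

@lmql.query
async def pick_city():
    '''lmql
    argmax
        "Name one European capital city:[CITY]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(CITY, "\\n") and len(TOKENS(CITY)) < 10
    '''

@lmql.query
async def describe_city(city):
    '''lmql
    argmax
        "Describe {city} in one sentence:[DESCRIPTION]"
    from
        "openai/text-davinci-003"
    where
        STOPS_AT(DESCRIPTION, "\\n")
    '''

async def pipeline():
    # Feed the first query's answer into the second query
    city = (await pick_city())[0].variables["CITY"].strip()
    return (await describe_city(city))[0].variables["DESCRIPTION"]

print(await pipeline())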
Furthermore, the guaranteed output format of LMQL queries ensures ease of processing the returned values using data processing libraries like Pandas. Here's an example of processing the output of an LMQL query with Pandas:

import pandas as pd

@lmql.query
async def generate_dogs(n: int):
    '''lmql
    sample(n=n)
        """Generate a dog with the following characteristics:
        Name:[NAME]
        Age: [AGE]
        Breed:[BREED]
        Quirky Move:[MOVE]
        """
    from
        "openai/text-davinci-003"
    where
        STOPS_BEFORE(NAME, "\\n") and STOPS_BEFORE(BREED, "\\n") and
        STOPS_BEFORE(MOVE, "\\n") and INT(AGE) and len(AGE) < 3
    '''

result = await generate_dogs(8)
df = pd.DataFrame([r.variables for r in result])
df

By employing LMQL as a Python library, users can make their code more efficient and structured, allowing for easier integration with other Python libraries and tools.

Conclusion

LMQL introduces an efficient and powerful approach to interacting with language models, revolutionizing language model programming. By combining prompts, constraints, and scripting, LMQL offers a user-friendly interface for working with large language models, significantly improving efficiency and accuracy across diverse tasks. Its capabilities allow developers to leverage the full potential of language models without the burden of complex implementations, making language model interaction more accessible and cost-effective.

With LMQL, users can overcome challenges in LM interaction, including manual interactions, constraints on variable parts, and generalization of multi-part prompting. By automating the selection process and eagerly applying constraints during decoding, LMQL reduces the number of LM invocations, resulting in substantial time and cost savings. Moreover, LMQL's declarative, SQL-like approach simplifies the development process and abstracts away tokenization and implementation details, making it more portable and user-friendly.

In conclusion, LMQL represents a promising advancement in the realm of large language models and language model programming. Its efficiency, flexibility, and ease of use open up new possibilities for creating complex interactions and steering model generation without deep knowledge of the model's internals. By embracing LMQL, developers can make the most of language models, unleashing their potential across a wide range of language-based tasks with heightened efficiency and reduced computational costs.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn

Fine-Tuning Large Language Models (LLMs)

Amita Kapoor
11 Sep 2023
12 min read
IntroductionIn the bustling metropolis of machine learning and natural language processing, Large Language Models (LLMs) such as GPT-4 are the skyscrapers that touch the clouds. From chatty chatbots to prolific prose generators, they stand tall, powering a myriad of applications. Yet, like any grand structure, they're not one-size-fits-all. Sometimes, they need a little nipping and tucking to shine their brightest. Dive in as we unravel the art and craft of fine-tuning these linguistic behemoths, sprinkled with code confetti for the hands-on aficionados out there.What's In a Fine-Tune?In a world where a top chef can make spaghetti or sushi but needs finesse for regional dishes like 'Masala Dosa' or 'Tarte Tatin', LLMs are similar: versatile but requiring specialization for specific tasks. A general LLM might misinterpret rare medical terms or downplay symptoms, but with medical text fine-tuning, it can distinguish nuanced health issues. In law, a misread word can change legal interpretations; by refining the LLM with legal documents, we achieve accurate clause interpretation. In finance, where terms like "bearish" and "bullish" are pivotal, specialized training ensures the model's accuracy in financial analysis and predictions.Whipping Up the Perfect AI RecipeJust as a master chef carefully chooses specific ingredients and techniques to curate a gourmet dish, in the vast culinary world of Large Language Models, we have a delectable array of fine-tuning techniques to concoct the ideal AI delicacy. Before we dive into the details, feast your eyes on the visual smorgasbord below, which provides an at-a-glance overview of these methods. With this flavour-rich foundation, we're all set to embark on our fine-tuning journey, focusing on the PEFT method and the Flan-T5 model on the Hugging Face platform. Aprons on, and let's get cooking!Fine Tuning Flan-T5Google AI's Flan-T5, an advanced version of the T5 model, excels in LLMs with its capability to handle text and code. It specialises in Text generation, Translation, Summarization, Question Answering, and Code Generation. Unlike GPT-3 and LLAMA, Flan-T5 is open-source, benefiting researchers worldwide. With configurations ranging from 60M to 11B parameters, it balances versatility and power, though larger models demand more computational resources.For this article, we will leverage the DialogSum dataset, a robust resource boasting 13,460 dialogues, supplemented with manually labelled summaries and topics (and an additional 100 holdout data entries for topic generation). This dataset will serve as the foundation for fine-tuning our open-source giant, Flan-T5, to specialise it for dialogue summarization tasks. Setting the Stage: Preparing the Tool ChestTo fine-tune effectively, ensure your digital setup is optimized. Here's a quick checklist:Hardware: Use platforms like Google Colab.RAM: Memory depends on model parameters. For example:   Memory (MTotal) = 4 x (Number of Parameters x 4 bytes)For a 247,577,856 parameter model (flan-t5-base), around 3.7GB is needed for parameters, gradients, and optimizer states. Ideally, have at least 8GB RAMGPU: A high-end GPU, such as NVIDIA Tesla P100 or T4, speeds up training and inference. Aim for 12GB or more GPU memory, accounting for overheads.Libraries: Like chefs need the right tools, AI fine-tuning demands specific libraries for algorithms, models, and evaluation tools.Remember, your setup is as crucial as the process itself. 
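As a quick sanity check of the RAM estimate in the checklist above, the back-of-the-envelope formula can be evaluated in a few lines of Python. The factor of 4 follows the checklist's formula and corresponds to holding parameters, gradients, and optimizer states alongside the weights; the parameter count is the flan-t5-base figure quoted above.

# Rough full fine-tuning memory estimate: 4 copies of the weights
# (parameters, gradients, optimizer states), 4 bytes per value in fp32
num_parameters = 247_577_856      # flan-t5-base
bytes_per_value = 4               # fp32
total_bytes = 4 * (num_parameters * bytes_per_value)
print(f"Estimated memory: {total_bytes / 1024**3:.1f} GiB")  # ~3.7 GiB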
Let's conjure up the essential libraries by running the following command:

!pip install \
    transformers \
    datasets \
    evaluate \
    rouge_score \
    loralib \
    peft

With these tools in hand, we're now primed to move deeper into the world of fine-tuning. Let's dive right in! Next, it's essential to set up our environment with the necessary tools:

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
import torch
import time
import evaluate
import pandas as pd
import numpy as np

To put our fine-tuning steps into motion, we first need a dataset to work with. Enter the DialogSum dataset, an extensive collection tailored for dialogue summarization:

dataset_name = "knkarthick/dialogsum"
dataset = load_dataset(dataset_name)

Executing this code, we've swiftly loaded the DialogSum dataset. With our data playground ready, we can take a closer look at its structure and content to better understand the challenges and potentials of our fine-tuning process. The DialogSum dataset is neatly structured into three segments:

Train: With a generous 12,460 dialogues, this segment is the backbone of our model's learning process.
Validation: A set of 500 dialogues, this slice of the dataset aids in fine-tuning the model, ensuring it doesn't merely memorise but truly understands.
Test: This final 1,500-dialogue portion stands as the litmus test, determining how well our model has grasped the art of dialogue summarization.

Each dialogue entry is accompanied by a unique 'id', a 'summary' of the conversation, and a 'topic' to give context.

Before fine-tuning, let's gear up with our main tool: the Flan-T5 model, specifically its 'base' variant from Google, which balances performance and efficiency. Using AutoModelForSeq2SeqLM, we effortlessly load the pre-trained Flan-T5, set to use torch.bfloat16 for optimal memory and precision. Alongside, we have the tokenizer, essential for translating text into a model-friendly format. Both are sourced from google/flan-t5-base, ensuring seamless compatibility. Now, let's get this code rolling:

model_name = 'google/flan-t5-base'
original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

Understanding Flan-T5 requires a look at its structure, particularly its parameters. Knowing the number of trainable parameters shows the model's adaptability, while the total parameters reflect its complexity. The following code will count these parameters and calculate the ratio of trainable ones, giving insight into the model's flexibility during fine-tuning. Let's now decipher these statistics for our Flan-T5 model:

def get_model_parameters_info(model):
    total_parameters = sum(param.numel() for param in model.parameters())
    trainable_parameters = sum(param.numel() for param in model.parameters() if param.requires_grad)
    trainable_percentage = 100 * trainable_parameters / total_parameters
    info = (
        f"Trainable model parameters: {trainable_parameters}",
        f"Total model parameters: {total_parameters}",
        f"Percentage of trainable model parameters: {trainable_percentage:.2f}%"
    )
    return '\n'.join(info)

print(get_model_parameters_info(original_model))

Trainable model parameters: 247577856
Total model parameters: 247577856
Percentage of trainable model parameters: 100.00%

One preparation step that the training code further below relies on is turning each raw dialogue into tokenized model inputs; a sketch of that preprocessing follows next.
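The Trainer call later in this walkthrough reads its training split from a formatted_datasets object that is not defined in the code shown, so the snippet below is an assumption: one minimal way to build it, tokenizing each dialogue (with a summarization prompt) as the input and the reference summary as the labels. The prompt wording, maximum lengths, and padding strategy are illustrative choices, and for simplicity pad tokens in the labels are not masked out (replacing them with -100 would exclude them from the loss).

def tokenize_example(example):
    # Instruction-style prompt in front of each dialogue (wording is an assumption)
    prompt = "Summarize the following conversation:\n\n" + example["dialogue"] + "\n\nSummary: "
    model_inputs = tokenizer(prompt, max_length=512, truncation=True, padding="max_length")
    labels = tokenizer(example["summary"], max_length=128, truncation=True, padding="max_length")
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Apply to every split and drop the raw text columns
formatted_datasets = dataset.map(
    tokenize_example,
    remove_columns=dataset["train"].column_names,
)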
Harnessing PEFT for Efficiency

In the fine-tuning journey, we seek methods that boost efficiency without sacrificing performance. This brings us to PEFT (Parameter-Efficient Fine-Tuning) and its secret weapon, LORA (Low-Rank Adaptation). LORA smartly adapts a model to new tasks with minimal parameter adjustments, offering a cost-effective solution in computational terms.

In the code block that follows, we're initializing LORA's configuration. Key parameters to note include:

r: The rank of the low-rank decomposition, which influences the number of adaptable parameters.
lora_alpha: A scaling factor determining the initial magnitude of the LORA parameters.
target_modules: The neural network components we wish to reparameterize. Here, we're targeting the "q" (query) and "v" (value) modules in the transformer's attention mechanism.
lora_dropout: A regularising dropout applied to the LORA parameters to prevent overfitting.
bias: Specifies the nature of the bias term in the reparameterization. Setting it to "none" means no bias will be added.
task_type: Signifying the type of task for which we're employing LORA. In our case, it's sequence-to-sequence language modelling, perfectly aligned with our Flan-T5's capabilities.
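The exact configuration values are not reproduced in this excerpt, so treat the block below as a sketch: the target modules, bias setting, and task type follow the description above, r=32 is chosen because it reproduces the trainable-parameter count reported next, and lora_alpha and lora_dropout are illustrative values.

from peft import LoraConfig, get_peft_model, TaskType

lora_config = LoraConfig(
    r=32,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor (illustrative)
    target_modules=["q", "v"],  # query and value projections, as described above
    lora_dropout=0.05,          # dropout on the LORA layers (illustrative)
    bias="none",                # no additional bias terms
    task_type=TaskType.SEQ_2_SEQ_LM,  # sequence-to-sequence LM, matching Flan-T5
)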
""" input_text = prompt + dialogue input_ids = tokenizer(input_text, return_tensors="pt").input_ids input_ids = input_ids.to(device) output_ids = model.generate(input_ids=input_ids, max_length=200, num_beams=1, early_stopping=True) return tokenizer.decode(output_ids[0], skip_special_tokens=True) index = 270 dialogue = dataset['test'][index]['dialogue'] human_baseline_summary = dataset['test'][index]['summary'] prompt = "Summarise the following conversation:\n\n" # Generate summaries original_summary = generate_summary(original_model, tokenizer, dialogue, prompt) peft_summary = generate_summary(peft_model, tokenizer, dialogue, prompt) # Print summaries print_output('BASELINE HUMAN SUMMARY:', human_baseline_summary) print_output('ORIGINAL MODEL:', original_summary) print_output('PEFT MODEL:', peft_summary)And the output:----------------------------------------------------------------------- BASELINE HUMAN SUMMARY:: #Person1# and #Person1#'s mother are preparing the fruits they are going to take to the picnic. ----------------------------------------------------------------------- ORIGINAL MODEL:: #Person1# asks #Person2# to take some fruit for the picnic. #Person2# suggests taking grapes or apples.. ----------------------------------------------------------------------- PEFT MODEL:: Mom and Dad are going to the picnic. Mom will take the grapes and the oranges and take the oranges.To assess our summarization models, we use the subset of the test dataset. We'll compare the summaries to human-created baselines. Using batch processing for efficiency, dialogues are processed in set group sizes. After processing, all summaries are compiled into a DataFrame for structured comparison and analysis. Below is the Python code for this experiment.dialogues = dataset['test'][0:20]['dialogue'] human_baseline_summaries = dataset['test'][0:20]['summary'] original_model_summaries = [] peft_model_summaries = [] for dialogue in dialogues:    prompt = "Summarize the following conversation:\n\n"      original_summary = generate_summary(original_model, tokenizer, dialogue, prompt)       peft_summary = generate_summary(peft_model, tokenizer, dialogue, prompt)      original_model_summaries.append(original_summary)    peft_model_summaries.append(peft_summary) df = pd.DataFrame({    'human_baseline_summaries': human_baseline_summaries,    'original_model_summaries': original_model_summaries,    'peft_model_summaries': peft_model_summaries }) dfTo evaluate our PEFT model's summaries, we use the ROUGE metric, a common summarization tool. ROUGE measures the overlap between predicted summaries and human references, showing how effectively our models capture key details. 
The Python code for this evaluation is:rouge = evaluate.load('rouge') original_model_results = rouge.compute( predictions=original_model_summaries, references=human_baseline_summaries[0:len(original_model_summaries)], use_aggregator=True, use_stemmer=True, ) peft_model_results = rouge.compute( predictions=peft_model_summaries, references=human_baseline_summaries[0:len(peft_model_summaries)], use_aggregator=True, use_stemmer=True, ) print('ORIGINAL MODEL:') print(original_model_results) print('PEFT MODEL:') print(peft_model_results) Here is the output:ORIGINAL MODEL: {'rouge1': 0.3870781853986991, 'rouge2': 0.13125454660387353, 'rougeL': 0.2891907205395029, 'rougeLsum': 0.29030342767482775} INSTRUCT MODEL: {'rouge1': 0.3719168722187023, 'rouge2': 0.11574429294744135, 'rougeL': 0.2739614480462256, 'rougeLsum': 0.2751489358330983} PEFT MODEL: {'rouge1': 0.3774164144865605, 'rouge2': 0.13204737323990984, 'rougeL': 0.3030487123408395, 'rougeLsum': 0.30499897454317104}Upon examining the results, it's evident that the original model shines with the highest ROUGE-1 score, adeptly capturing crucial standalone terms. On the other hand, the PEFT Model wears the crown for both ROUGE-L and ROUGE-Lsum metrics. This implies the PEFT Model excels in crafting summaries that string together longer, coherent sequences echoing those in the reference summaries.ConclusionWrapping it all up, in this post we delved deep into the nuances of fine-tuning Large Language Models, particularly spotlighting the prowess of FLAN T5. Through our hands-on venture into the dialogue summarization task, we discerned the intricate dance between capturing individual terms and weaving them into a coherent narrative. While the original model exhibited an impressive knack for highlighting key terms, the PEFT Model emerged as the maestro in crafting flowing, meaningful sequences.It's clear that in the grand arena of language models, knowing the notes is just the beginning; it's how you orchestrate them that creates the magic. Harnessing the techniques illuminated in this post, you too can fine-tune your chosen LLM, crafting your linguistic symphonies with finesse and flair. Here's to you becoming the maestro of your own linguistic ensemble!Author BioAmita Kapoor is an accomplished AI consultant and educator with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita retired early and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. After her retirement, Amita founded NePeur, a company providing data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford. 

Build Your LLM-Powered Personal Website

Louis Owen
11 Sep 2023
8 min read
IntroductionSince ChatGPT shocked the world with its capability, AI started to be utilized in numerous fields: customer service assistant, marketing content creation, code assistant, travel itinerary planning, investment analysis, and you name it.However, have you ever wondered about utilizing AI, or more specifically Large Language Model (LLM) like ChatGPT, to be your own personal AI assistant on your website?A personal website, usually also called a personal web portfolio, consists of all the things we want to showcase to the world, starting from our short biography, work experiences, projects we have done, achievements, paper publications, and any other things that are related to our professional work. We put this website live on the internet and people can come and see all of its content by scrolling and surfing the pages.What if we can change the User Experience a bit from scrolling/surfing to giving a query? What if we add a small search bar or widget where they can directly ask anything they want to know about us and they’ll get the answer immediately? Let’s imagine a head-hunter or hiring manager who opened your personal website. In their mind, they already have specific criteria for the potential candidates they want to hire. If we put a search bar or any type of widget on our website, they can directly ask what they want to know about us. Hence, improving our chances of being approached by them. This will also let them know that we’re adapting to the latest technology available in the market and surely will increase our positive points in their scoring board.In this article, I’ll guide you to build your own LLM-Powered Personal Website. We’ll start by discussing several FREE-ly available LLMs that we can utilize. Then, we’ll go into the step-by-step process of how to build our personal AI assistant by exploiting the LLM capability as a Question and Answering (QnA) module. As a hint, we’ll use one of the available task-specific models provided by AI21Labs as our LLM. They provide a 3-month free trial worth $90 or 18,000 free calls for the QnA model. Finally, we’ll see how we can put our personal AI assistant on our website.Without wasting any more time, let’s take a deep breath, make yourselves comfortable, and be ready to learn how to build your own LLM-powered personal website!Freely Available LLMsThe main engine of our personal AI assistant is an LLM. The question is what LLM should we use?There are many variants of LLM available in the market right now, starting from open-source to closed-source LLM. There are two main differences between open-source and closed-source. Open-source LLMs are absolutely free but you need to host it by yourself. On the other hand, closed-source LLMs are not free but we don’t need to host it by ourselves, we just need to call an API request to utilize it.As for open-source LLM, the go-to LLM for a lot of use cases is LLaMA-2 by Meta AI. Since LLM consumes a large amount of GPU memory, in practice we usually perform 4-bit quantization to reduce the memory usage. Thanks to the open-source community, you can now directly use the quantized version of LLaMA-2 in the HuggingFace library released by TheBloke. To host the LLM, we can also utilize a very powerful inference server called Text Generation Inference (TGI).The next question is whether there are freely available GPU machines out there that we can use to host the LLM. We can’t use Google Colab since we want to host it on a server where the personal website can send API requests to that server. 
Luckily, there are 2 available free options for us: Google Cloud Platform and SaturnCloud. Both of them offer free trial accounts for us to rent the GPU machines.Open-source LLM like LLaMA-2 is free but it comes with an additional hassle which is to host it by ourselves. In this article, we’ll use closed-source LLM as our personal AI assistant instead. However, most closed-source LLMs that can be accessed via API are not free: GPT-3.5 and GPT-4 by OpenAI, Claude by Anthropic, Jurassic by AI21Labs, etc.Luckily, AI21Labs offers $90 worth of free trial for us! Moreover, they also provide task-specific models that are charged based on the number of API calls, not based on the number of tokens like in other most closed-source LLMs. This is surely very suitable for our use case since we’ll have long input tokens!Let’s dive deeper into AI21Labs LLM, specifically the QnA model which we’ll be using as our personal AI assistant!AI21Labs QnA LLMAI21Labs provides numerous task-specific models, which offer out-of-the-box reading and writing capabilities. The LLM we’ll be using is fine-tuned specifically for the QnA task, or they called it the “Contextual Answers” model. We just need to provide the context and query, then it will return the answer solely based on the information available in the context. This model is priced at $0.005 / API request, which means with our $90 free trial account, we can send 18,000 API calls! Isn’t it amazing? Without further ado, let’s start building our personal AI assistant!1.    Create AI21Labs Free Trial AccountTo use the QnA model, we just need to create a free trial account on the AI21Labs website. You can follow the steps from the website, it’s super easy just like creating a new account on most websites.2.    Enter the PlaygroundOnce you have the free trial account, you can go to the AI21Studio page and select “Contextual Answers” under the “Task-Specific Models” button in the left bar. Then, we can go to the Playground to test the model. Inside the Playground of the QnA model, there will be 2 input fields and 1 output field. As for input, we need to pass the context (the knowledge list) and the query. As for output, we’ll get the answer from the given query based on the context provided. What if the answer doesn’t exist in the context? This model will return “Answer not in documents.” as the fallback.3.    Create the Knowledge ListThe next and main task that we need to do is to create the knowledge list as part of the context input. Just think of this knowledge list as the Knowledge Base (KB) for the model. So, the model is able to answer the model only based on the information available in this KB.4.    Test with Several QueriesMost likely, our first set of knowledge is not exhaustive. Thus, we need to do several iterations of testing to keep expanding the list while also maintaining the quality of the returned answer. We can start by creating a list of possible queries that can be asked by our web visitors. Then, we can add several answers for each of the queries inside the knowledge list. Pro tip: Once our assistant is deployed on our website, we can also add a logger to store all queries and responses that we get. Using that log data, we can further expand our knowledge list, hence making our AI assistant “smarter”.5.    Embed the AI Assistant on Our WebsiteUntil now, we just played with the LLM in the Playground. However, our goal is to put it inside our web portfolio. 
Thanks to AI21Labs, we can do it easily just by adding the JavaScript code inside our website. We can just click the three-dots button in the top right of the “context input” and choose the “Code” option. Then, a pop-up page will be shown, and you can directly copy and paste the JavaScript code into your personal website. That’s it!ConclusionCongratulations on keeping up to this point! Hopefully, I can see many new LLM-powered portfolios developed after this article is published. Throughout this article, you have learned how to build your own LLM-powered personal website starting from the motivation, freely available LLMs with their pros and cons, AI21Labs task-specific models, creating your own knowledge list along with some tips, and finally how to embed your AI assistant in your personal website. See you in the next article!Author BioLouis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various fields of industry, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts to become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time doing his hobbies: watching movies and conducting side projects.Currently, Louis is an NLP Research Engineer at Yellow.ai, the world’s leading CX automation platform. Check out Louis’ website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

Getting Started with Gemini AI

Packt
07 Sep 2023
2 min read
Introduction Gemini AI is a large language model (LLM) being developed by Google DeepMind. It is still under development, but it is expected to be more powerful than ChatGPT, the current state-of-the-art LLM. Gemini AI is being built on the technology and techniques used in AlphaGo, an early AI system developed by DeepMind in 2016. This means that Gemini AI is expected to have strong capabilities in planning and problem-solving. Gemini AI is a powerful tool that has the potential to be used in a wide variety of applications. Some of the potential use cases for Gemini AI include: Chatbots: Gemini AI could be used to create more realistic and engaging chatbots. Virtual assistants: Gemini AI could be used to create virtual assistants that can help users with tasks such as scheduling appointments, making reservations, and finding information. Content generation: Gemini AI could be used to generate creative content such as articles, blog posts, and scripts. Data analysis: Gemini AI could be used to analyze large datasets and identify patterns and trends. Medical diagnosis: Gemini AI could be used to assist doctors in diagnosing diseases. Financial trading: Gemini AI could be used to make trading decisions. How Gemini AI works Gemini AI is a neural network that has been trained on a massive dataset of text and code. This dataset includes books, articles, code repositories, and other forms of text. The neural network is able to learn the patterns and relationships between words and phrases in this dataset. This allows Gemini AI to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. How to use Gemini AI Gemini AI is not yet available to the public, but it is expected to be released in the future. When it is released, it will likely be available through a cloud-based API. This means that developers will be able to use Gemini AI in their own applications. To use Gemini AI, developers will need to first create an account and obtain an API key. Once they have an API key, they can use it to call the Gemini AI API. The API will allow them to interact with Gemini AI and use its capabilities. Here are some steps on how to install or get started with Gemini AI: Go to the Gemini AI website and create an account: Once you have created an account, you will be given an API key. Install the Gemini AI client library for your programming language. In your code, import the Gemini AI client library and initialize it with your API key. Call the Gemini AI API to generate text, translate languages, write different kinds of creative content, or answer your questions in an informative way. For more detailed instructions on how to install and use Gemini AI, please refer to the Gemini AI documentation. The future of Gemini AI Gemini AI is still under development, but it has the potential to revolutionize the way we interact with computers. In the future, Gemini AI could be used to create more realistic and engaging chatbots, virtual assistants, and other forms of AI-powered software. Gemini AI could also be used to improve our understanding of the world around us by analyzing large datasets and identifying patterns and trends. Conclusion Gemini AI is a powerful tool that has the potential to be used in a wide variety of applications. It is still under development, but it has the potential to revolutionize the way we interact with computers. 
In the future, Gemini AI could be used to create more realistic and engaging chatbots, virtual assistants, and other forms of AI-powered software. Gemini AI could also be used to improve our understanding of the world around us by analyzing large datasets and identifying patterns and trends.  

LLM Pitfalls and How to Avoid Them

Amita Kapoor & Sharmistha Chatterjee
31 Aug 2023
13 min read
IntroductionLanguage Learning Models, or LLMs, are machine learning algorithms that focus on understanding and generating human-like text. These advanced developments have significantly impacted the field of natural language processing, impressing us with their capacity to produce cohesive and contextually appropriate text. However, navigating the terrain of LLMs requires vigilance, as there exist pitfalls that may trap the unprepared.In this article, we will uncover the nuances of LLMs and discover practical strategies for evading their potential pitfalls. From misconceptions surrounding their capabilities to the subtleties of bias pervading their outputs, we shed light on the intricate underpinnings beyond their impressive veneer.Understanding LLMs: A PrimerLLMs, such as GPT-4, are based on a technology called Transformer architecture, introduced in the paper "Attention is All You Need" by Vaswani et al. In essence, this architecture's 'attention' mechanism allows the model to focus on different parts of an input sentence, much like how a human reader might pay attention to different words while reading a text.Training an LLM involves two stages: pre-training and fine-tuning. During pre-training, the model is exposed to vast quantities of text data (billions of words) from the internet. Given all the previous words, the model learns to predict the next word in a sentence. Through this process, it learns grammar, facts about the world, reasoning abilities, and also some biases present in the data.  A significant part of this understanding comes from the model's ability to process English language instructions. The pre-training process exposes the model to language structures, grammar, usage, nuances of the language, common phrases, idioms, and context-based meanings.  The Transformer's 'attention' mechanism plays a crucial role in this understanding, enabling the model to focus on different parts of the input sentence when generating each word in the output. It understands which words in the sentence are essential when deciding the next word.The output of pre-training is a creative text generator. To make this generator more controllable and safe, it undergoes a fine-tuning process. Here, the model is trained on a narrower dataset, carefully generated with human reviewers' help following specific guidelines. This phase also often involves learning from instructions provided in natural language, enabling the model to respond effectively to English language instructions from users.After their initial two-step training, Large Language Models (LLMs) are ready to produce text. Here's how it works:The user provides a starting point or "prompt" to the model. Using this prompt, the model begins creating a series of "tokens", which could be words or parts of words. Each new token is influenced by the tokens that came before it, so the model keeps adjusting its internal workings after producing each token. The process is based on probabilities, not on a pre-set plan or specific goals.To control how the LLM generates text, you can adjust various settings. You can select the prompt, of course. But you can also modify settings like "temperature" and "max tokens". The "temperature" setting controls how random the model's output will be, while the "max tokens" setting sets a limit on the length of the response.When properly trained and controlled, LLMs are powerful tools that can understand and generate human-like text. 
Their applications range from writing assistants to customer support, tutoring, translation, and more. However, their ability to generate convincing text also poses potential risks, necessitating ongoing research into effective and ethical usage guidelines. In this article, we discuss some of the common pitfalls associated with using LLMs and offer practical advice on how to navigate these challenges, ensuring that you get the best out of these powerful language models in a safe and responsible way.Misunderstanding LLM CapabilitiesLanguage Learning Models (LLMs), like GPT-3, and BARD, are advanced AI systems capable of impressive feats. However, some common misunderstandings exist about what these models can and cannot do. Here we clarify several points to prevent confusion and misuse.Conscious Understanding: Despite their ability to generate coherent and contextually accurate responses, LLMs do not consciously understand the information they process. They don't comprehend text in the same way humans do. Instead, they make statistically informed guesses based on the patterns they've learned during training. They lack self-awareness or consciousness.Learning from Interactions: LLMs are not designed to learn from user interactions in real time. After initial model training, they don't have the ability to remember or learn from individual interactions unless their training data is updated, a process that requires substantial computational resources.Fact-Checking: LLMs can't verify the accuracy of their output or the information they're prompted with. They generate text based on patterns learned during training and cannot access real-time or updated information beyond their training cut-off. They cannot fact-check or verify information against real-world events post their training cut-off date.Personal Opinions: LLMs don't have personal experiences, beliefs, or opinions. If they generate text that seems to indicate a personal stance, it's merely a reflection of the patterns they've learned during their training process. They are incapable of feelings or preferences.Generating Original Ideas: While LLMs can generate text that may seem novel or original, they are not truly capable of creativity in the human sense. Their "ideas" result from recombining elements from their training data in novel ways, not from original thought or intention.Confidentiality: LLMs cannot keep secrets or remember specific user interactions. They do not have the capacity to store personal data from one interaction to the next. They are designed this way to ensure user privacy and confidentiality.Future Predictions: LLMs can't predict the future. Any text generated that seems to predict future events is coincidental and based solely on patterns learned from their training data.Emotional Support: While LLMs can simulate empathetic responses, they don't truly understand or feel emotions. Any emotional support provided by these models is based on learned textual patterns and should not replace professional mental health support.Understanding these limitations is crucial when interacting with LLMs. They are powerful tools for text generation, but their abilities should not be mistaken for true understanding, creativity, or emotional capacity.Bias in LLM OutputsBias in LLMs is an unintentional byproduct of their training process. LLMs, such as GPT-4, are trained on massive datasets comprising text from the internet. The models learn to predict the next word in a sentence based on the context provided by the preceding words. 
During this process, they inevitably absorb and replicate the biases present in their training data.Bias in LLMs can be subtle and may present itself in various ways. For example, if an LLM consistently associates certain professions with a specific gender, this reflects gender bias. Suppose you feed the model a prompt like, "The nurse attended to the patient", and the model frequently uses feminine pronouns to refer to the nurse. In contrast, with the prompt, "The engineer fixed the machine," it predominantly uses masculine pronouns for the engineer. This inclination mirrors societal biases present in the training data.It's crucial for users to be aware of these potential biases when using LLMs. Understanding this can help users interpret responses more critically, identify potential biases in the output, and even frame their prompts in a way that can mitigate bias. Users can make sure to double-check the information provided by LLMs, particularly when the output may have significant implications or is in a context known for systemic bias.Confabulation and Hallucination in LLMsIn the context of LLMs, 'confabulation' or 'hallucination' refers to generating outputs that do not align with reality or factual information. This can happen when the model, attempting to create a coherent narrative, fills in gaps with details that seem plausible but are entirely fictional.Example 1: Futuristic Election ResultsConsider an interaction where an LLM was asked for the result of a future election. The prompt was, "What was the result of the 2024 U.S. presidential election?" The model responded with a detailed result, stating a fictitious candidate had won. As of the model's last training cut-off, this event lies in the future, and the response is a complete fabrication.Example 2: The Non-existent BookIn another instance, an LLM was asked about a summary of a non-existent book with a prompt like, "Can you summarise the book 'The Shadows of Elusion' by J.K. Rowling?" The model responded with a detailed summary as if the book existed. In reality, there's no such book by J.K. Rowling. This again demonstrates the model's propensity to confabulate.Example 3: Fictitious TechnologyIn a third example, an LLM was asked to explain the workings of a fictitious technology, "How does the quantum teleportation smartphone work?" The model explained a device that doesn't exist, incorporating real-world concepts of quantum teleportation into a plausible-sounding but entirely fictional narrative.LLMs generate responses based on patterns they learn from their training data. They cannot access real-time or personal information or understand the content they generate. When faced with prompts without factual data, they can resort to confabulation, drawing from learned patterns to fabricate plausible but non-factual responses.Because of this propensity for confabulation, verifying the 'facts' generated by LLM models is crucial. This is particularly important when the output is used for decision-making or is in a sensitive context. Always corroborate the information generated by LLMs with reliable and up-to-date sources to ensure its validity and relevance. While these models can be incredibly helpful, they should be used as a tool and not a sole source of information, bearing in mind the potential for error and fabrication in their outputs.Security and Privacy in LLMsLarge Language Models (LLMs) can be a double-edged sword. 
Their power to create lifelike text opens the door to misuse, such as generating misleading information, spam emails, or fake news, and even facilitating complex scamming schemes. So, it's crucial to establish robust security protocols when using LLMs.Training LLMs on massive datasets can trigger privacy issues. Two primary concerns are:Data leakage: If the model is exposed to sensitive information during training, it could potentially reveal this information when generating outputs. Though these models are designed to generalize patterns and not memorize specific data points, the risk still exists, albeit at a very low probability.Inference attacks: Skilled attackers could craft specific queries to probe the model, attempting to infer sensitive details about the training data. For instance, they might attempt to discern whether certain types of content were part of the training data, potentially revealing proprietary or confidential information.Ethical Considerations in LLMsThe rapid advancements in artificial intelligence, particularly in Language Learning Models (LLMs), have transformed multiple facets of society. Yet, this exponential growth often overlooks a crucial aspect – ethics. Balancing the benefits of LLMs while addressing ethical concerns is a significant challenge that demands immediate attention.Accountability and Responsibility: Who is responsible when an LLM causes harm, such as generating misleading information or offensive content? Is it the developers who trained the model, the users who provided the prompts, or the organizations that deployed it? The ambiguous nature of responsibility and accountability in AI applications is a substantial ethical challenge.Bias and Discrimination: LLMs learn from vast amounts of data, often from the internet, reflecting our society – warts and all. Consequently, the models can internalize and perpetuate existing biases, leading to potentially discriminatory outputs. This can manifest as gender bias, racial bias, or other forms of prejudice.Invasion of Privacy: As discussed in earlier articles, LLMs can pose privacy risks. However, the ethical implications go beyond the immediate privacy concerns. For instance, if an LLM is used to generate text mimicking a particular individual's writing style, it could infringe on that person's right to personal expression and identity.Misinformation and Manipulation: The capacity of LLMs to generate human-like text can be exploited to disseminate misinformation, forge documents, or even create deepfake texts. This can manipulate public opinion, impact personal reputations, and even threaten national security.Addressing LLM Limitations: A Tripartite ApproachThe task of managing the limitations of LLMs is a tripartite effort, involving AI Developers & Researchers, Policymakers, and End Users.Role of AI Developers & Researchers:Security & Privacy: Establish robust security protocols, enforce secure training practices, and explore methods such as differential privacy. Constituting AI ethics committees can ensure ethical considerations during the design and training phases.Bias & Discrimination: Endeavor to identify and mitigate biases during training, aiming for equitable outcomes. 
This process includes eliminating harmful biases and confabulations.Transparency: Enhance understanding of the model by elucidating the training process, which in turn can help manage potential fabrications.Role of Policymakers:Regulations: Formulate and implement regulations that ensure accountability, transparency, fairness, and privacy in AI.Public Engagement: Encourage public participation in AI ethics discussions to ensure that regulations reflect societal norms.Role of End Users:Awareness: Comprehend the risks and ethical implications associated with LLMs, recognising that biases and fabrications are possible.Critical Evaluation: Evaluate the outputs generated by LLMs for potential misinformation, bias, or confabulations. Refrain from feeding sensitive information to an LLM and cross-verify the information produced.Feedback: Report any instances of severe bias, offensive content, or ethical concerns to the AI provider. This feedback is crucial for the continuous improvement of the model. ConclusionIn conclusion, understanding and leveraging the capabilities of Language Learning Models (LLMs) demand both caution and strategy. By recognizing their limitations, such as lack of consciousness, potential biases, and confabulation tendencies, users can navigate these pitfalls effectively. To harness LLMs responsibly, a collaborative approach among developers, policymakers, and users is essential. Implementing security measures, mitigating bias, and fostering user awareness can maximize the benefits of LLMs while minimizing their drawbacks. As LLMs continue to shape our linguistic landscape, staying informed and vigilant ensures a safer and more accurate text generation journey.Author BioAmita Kapoor is an accomplished AI consultant and educator, with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar in her field, with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita took early retirement and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. Following her retirement, Amita also founded NePeur, a company that provides data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.Sharmistha Chatterjee is an evangelist in the field of machine learning (ML) and cloud applications, currently working in the BFSI industry at the Commonwealth Bank of Australia in the data and analytics space. She has worked in Fortune 500 companies, as well as in early-stage start-ups. She became an advocate for responsible AI during her tenure at Publicis Sapient, where she led the digital transformation of clients across industry verticals. She is an international speaker at various tech conferences and a 2X Google Developer Expert in ML and Google Cloud. She has won multiple awards and has been listed in 40 under 40 data scientists by Analytics India Magazine (AIM) and 21 tech trailblazers in 2021 by Google. She has been involved in responsible AI initiatives led by Nasscom and as part of their DeepTech Club.Authors of this book: Platform and Model Design for Responsible AI    

Building an API for Language Model Inference using Rust and Hyper - Part 2

Alan Bernardo Palacio
31 Aug 2023
10 min read
IntroductionIn our previous exploration, we delved deep into the world of Large Language Models (LLMs) in Rust. Through the lens of the llm crate and the transformative potential of LLMs, we painted a picture of the current state of AI integrations within the Rust ecosystem. But knowledge, they say, is only as valuable as its application. Thus, we transition from understanding the 'how' of LLMs to applying this knowledge in real-world scenarios.Welcome to the second part of our Rust LLM. In this article, we roll up our sleeves to architect and deploy an inference server using Rust. Leveraging the blazingly fast and efficient Hyper HTTP library, our server will not just respond to incoming requests but will think, infer, and communicate like a human. We'll guide you through the step-by-step process of setting up, routing, and serving inferences right from the server, all the while keeping our base anchored to the foundational insights from our last discussion.For developers eager to witness the integration of Rust, Hyper, and LLMs, this guide promises to be a rewarding endeavor. By the end, you'll be equipped with the tools to set up a server that can converse intelligently, understand prompts, and provide insightful responses. So, as we progress from the intricacies of the llm crate to building a real-world application, join us in taking a monumental step toward making AI-powered interactions an everyday reality.Imports and Data StructuresLet's start by looking at the import statements and data structures used in the code:use hyper::service::{make_service_fn, service_fn}; use hyper::{Body, Request, Response, Server}; use std::net::SocketAddr; use serde::{Deserialize, Serialize}; use std::{convert::Infallible, io::Write, path::PathBuf};hyper: Hyper is a fast and efficient HTTP library for Rust.SocketAddr: This is used to specify the socket address (IP and port) for the server.serde: Serde is a powerful serialization/deserialization framework in Rust.Deserialize, Serialize: Serde traits for automatic serialization and deserialization.Next, we have the data structures that will be used for deserializing JSON request data and serializing response data:#[derive(Debug, Deserialize)] struct ChatRequest { prompt: String, } #[derive(Debug, Serialize)] struct ChatResponse { response: String, }1.    ChatRequest: A struct to represent the incoming JSON request containing a prompt field.2.    ChatResponse: A struct to represent the JSON response containing a response field.Inference FunctionThe infer function is responsible for performing language model inference:fn infer(prompt: String) -> String { let tokenizer_source = llm::TokenizerSource::Embedded; let model_architecture = llm::ModelArchitecture::Llama; let model_path = PathBuf::from("/path/to/model"); let prompt = prompt.to_string(); let now = std::time::Instant::now(); let model = llm::load_dynamic( Some(model_architecture), &model_path, tokenizer_source, Default::default(), llm::load_progress_callback_stdout, ) .unwrap_or_else(|err| { panic!("Failed to load {} model from {:?}: {}", model_architecture, model_path, err); }); println!( "Model fully loaded! 
Elapsed: {}ms", now.elapsed().as_millis() ); let mut session = model.start_session(Default::default()); let mut generated_tokens = String::new(); // Accumulate generated tokens here let res = session.infer::<Infallible>( model.as_ref(), &mut rand::thread_rng(), &llm::InferenceRequest { prompt: (&prompt).into(), parameters: &llm::InferenceParameters::default(), play_back_previous_tokens: false, maximum_token_count: Some(140), }, // OutputRequest &mut Default::default(), |r| match r { llm::InferenceResponse::PromptToken(t) | llm::InferenceResponse::InferredToken(t) => { print!("{t}"); std::io::stdout().flush().unwrap(); // Accumulate generated tokens generated_tokens.push_str(&t); Ok(llm::InferenceFeedback::Continue) } _ => Ok(llm::InferenceFeedback::Continue), }, ); // Return the accumulated generated tokens match res { Ok(_) => generated_tokens, Err(err) => format!("Error: {}", err), } }The infer function takes a prompt as input and returns a string containing generated tokens.It loads a language model, sets up an inference session, and accumulates generated tokens.The res variable holds the result of the inference, and a closure handles each inference response.The function returns the accumulated generated tokens or an error message.Request HandlerThe chat_handler function handles incoming HTTP requests:async fn chat_handler(req: Request<Body>) -> Result<Response<Body>, Infallible> { let body_bytes = hyper::body::to_bytes(req.into_body()).await.unwrap(); let chat_request: ChatRequest = serde_json::from_slice(&body_bytes).unwrap(); // Call the `infer` function with the received prompt let inference_result = infer(chat_request.prompt); // Prepare the response message let response_message = format!("Inference result: {}", inference_result); let chat_response = ChatResponse { response: response_message, }; // Serialize the response and send it back let response = Response::new(Body::from(serde_json::to_string(&chat_response).unwrap())); Ok(response) }chat_handler asynchronously handles incoming requests by deserializing the JSON payload.It calls the infer function with the received prompt and constructs a response message.The response is serialized as JSON and sent back in the HTTP response.Router and Not Found HandlerThe router function maps incoming requests to the appropriate handlers:The router function maps incoming requests to the appropriate handlers: async fn router(req: Request<Body>) -> Result<Response<Body>, Infallible> { match (req.uri().path(), req.method()) { ("/api/chat", &hyper::Method::POST) => chat_handler(req).await, _ => not_found(), } }router matches incoming requests based on the path and HTTP method.If the path is "/api/chat" and the method is POST, it calls the chat_handler.If no match is found, it calls the not_found function.Main FunctionThe main function initializes the server and starts listening for incoming connections:#[tokio::main] async fn main() { println!("Server listening on port 8083..."); let addr = SocketAddr::from(([0, 0, 0, 0], 8083)); let make_svc = make_service_fn(|_conn| { async { Ok::<_, Infallible>(service_fn(router)) } }); let server = Server::bind(&addr).serve(make_svc); if let Err(e) = server.await { eprintln!("server error: {}", e); } }In this section, we'll walk through the steps to build and run the server that performs language model inference using Rust and the Hyper framework. We'll also demonstrate how to make a POST request to the server using Postman.1.     Install Rust: If you haven't already, you need to install Rust on your machine. 
You can download Rust from the official website: https://www.rust-lang.org/tools/install
2. Create a New Rust Project: Create a new directory for your project and navigate to it in the terminal. Run the following command to create a new Rust project:
cargo new language_model_server
This command will create a new directory named language_model_server containing the basic structure of a Rust project.
3. Add Dependencies: Open the Cargo.toml file in the language_model_server directory and add the required dependencies for Hyper and the other libraries. Your Cargo.toml file should look something like this:
[package]
name = "llm_handler"
version = "0.1.0"
edition = "2018"

[dependencies]
hyper = { version = "0.13" }
tokio = { version = "0.2", features = ["macros", "rt-threaded"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
llm = { git = "https://github.com/rustformers/llm.git" }
rand = "0.8.5"
Make sure to adjust the version numbers according to the latest versions available.
4. Replace Code: Replace the content of the src/main.rs file in your project directory with the code provided in the earlier sections.
5. Building the Server: In the terminal, navigate to your project directory and run the following command to build the server:
cargo build --release
This will compile your code and produce an executable binary in the target/release directory.
Running the Server
1. Running the Server: After building the server, you can run it using the following command:
cargo run --release
Your server will start listening on port 8083.
2. Accessing the Server: Open a web browser and navigate to http://localhost:8083. You should see the message "Not Found", indicating that the server is up and running.
Making a POST Request Using Postman
1. Install Postman: If you don't have Postman installed, you can download it from the official website: https://www.postman.com/downloads/
2. Create a POST Request:
   - Open Postman and create a new request.
   - Set the request type to "POST".
   - Enter the URL: http://localhost:8083/api/chat
   - In the "Body" tab, select "raw" and set the content type to "JSON (application/json)".
   - Enter the following JSON request body:
{ "prompt": "Rust is an amazing programming language because" }
3. Send the Request: Click the "Send" button to make the POST request to your server.
4. View the Response: You should receive a response from the server containing the inference result generated by the language model.
Conclusion
In the previous article, we introduced the foundational concepts, setting the stage for the hands-on application we delved into this time. In this article, our main goal was to bridge theory with practice. Using the llm crate alongside the Hyper library, we embarked on a mission to create a server capable of understanding and executing language model inference. But our work was more than just setting up a server; it was about illustrating the synergy between Rust, a language famed for its safety and concurrency features, and the vast world of AI.
What's especially encouraging is how this project can serve as a springboard for many more innovations. With the foundation laid out, there are numerous avenues to explore, from refining the server's performance to integrating more advanced features or scaling it for larger audiences.
If there's one key takeaway from our journey, it's the importance of continuous learning and experimentation.
The tech landscape is ever-evolving, and the confluence of AI and programming offers a fertile ground for innovation. As we conclude this series, our hope is that the knowledge shared acts as both a source of inspiration and a practical guide. Whether you're a seasoned developer or a curious enthusiast, the tools and techniques we've discussed can pave the way for your own unique creations. So, as you move forward, keep experimenting, iterating, and pushing the boundaries of what's possible. Here's to many more coding adventures ahead!
Author Bio
Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.
LinkedIn

Building an API for Language Model Inference using Rust and Hyper - Part 1

Alan Bernardo Palacio
31 Aug 2023
7 min read
IntroductionIn the landscape of artificial intelligence, the capacity to bring sophisticated Large Language Models (LLMs) to commonplace applications has always been a sought-after goal. Enter LLM, a groundbreaking Rust library crafted by Rustformers, designed to make this dream a tangible reality. By focusing on the intricate synergy between the LLM library and the foundational GGML project, this toolset pushes the boundaries of what's possible, enabling AI enthusiasts to harness the sheer might of LLMs on conventional CPUs. This shift in dynamics owes much to GGML's pioneering approach to model quantization, streamlining computational requirements without sacrificing performance.In this comprehensive guide, we'll embark on a journey that starts with understanding the essence of the llm crate and its seamless interaction with a myriad of LLMs. Delving into its intricacies, we'll illuminate how to integrate, interact, and infer using these models. And as a tantalizing glimpse into the realm of practical application, our expedition won't conclude here. In the subsequent installment, we'll rise to the challenge of crafting a web server in Rust—one that confidently runs inference directly on a CPU, making the awe-inspiring capabilities of AI not just accessible, but an integral part of our everyday digital experiences.This is a two-part article in the first section we will discuss the basic interaction with the library and in the following we build a server in Rust that allow us to build our own web applications using state-of-the-art LLMs. Let’s begin with it.Harnessing the Power of Large Language ModelsAt the very core of LLM's architecture resides the GGML project, a tensor library meticulously crafted in the C programming language. GGML, short for "General GPU Machine Learning," serves as the bedrock of LLM, enabling the intricate orchestration of large language models. Its quintessence lies in a potent technique known as model quantization.Model quantization, a pivotal process employed by GGML, involves the reduction of numerical precision within a machine-learning model. This entails transforming the conventional 32-bit floating-point numbers frequently used for calculations into more compact representations such as 16-bit or even 8-bit integers.Quantization can be considered as the act of chiseling away unnecessary complexities while sculpting a model. Model quantization adeptly streamlines resource utilization without inordinate compromises on performance. By default, models lean on 32-bit floating-point numbers for their arithmetic operations. With quantization, this intricacy is distilled into more frugal formats, such as 16-bit integers or even 8-bit integers. It's an artful equilibrium between computational efficiency and performance optimization.GGML's versatility can be seen through a spectrum of quantization strategies: spanning 4, 5, and 8-bit quantization. Each strategy allows for improvement in efficiency and execution in different ways. For instance, 4-bit quantization thrives in memory and computational frugality, although it could potentially induce a performance decrease compared to the broader 8-bit quantization.The Rustformers library allows to integration of different language models including Bloom, GPT-2, GPT-J, GPT-NeoX, Llama, and MPT. To use these models within the Rustformers library, they undergo a transformation to align with GGML's technical underpinnings. 
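To make the quantization idea above a bit more concrete, here is a generic affine integer-quantization scheme (a common textbook formulation, not necessarily the exact scheme GGML implements). Each 32-bit float x in a block of weights is stored as a small integer q using a per-block scale s and zero-point z:

q = \operatorname{round}\!\left(\frac{x}{s}\right) + z, \qquad \hat{x} = s\,(q - z)

Storing q in 8 bits instead of x in 32 bits shrinks the weights roughly fourfold (plus a small overhead for s and z per block), and 4-bit variants roughly eightfold, at the cost of the reconstruction error between x and \hat{x}.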
The authors have generously provided pre-engineered models on the Hugging Face platform, facilitating seamless integration.
In the next sections, we will use the llm crate to run inference on LLMs like Llama. The realm of AI innovation is beckoning, and Rustformers' LLM, fortified by GGML's techniques, forms an alluring gateway into its intricacies.
Getting Started with LLM-CLI
The Rustformers group has the mission of amplifying access to the prowess of large language models (LLMs) at the forefront of AI evolution. The group focuses on harmonizing with the rapidly advancing GGML ecosystem – a C library harnessed for quantization, enabling the execution of LLMs on CPUs. The trajectory extends to supporting diverse backends, embracing GPUs, Wasm environments, and more.
For Rust developers venturing into the realm of LLMs, the key to unlocking this potential is the llm crate – the gateway to Rustformers' innovation. Through this crate, Rust developers interface with LLMs effortlessly. The "llm" project also offers a streamlined CLI for interacting with LLMs and examples showcasing its integration into Rust projects. More insights can be gained from the GitHub repository or its official documentation for released versions.
To embark on your LLM journey, start by installing the LLM-CLI package. This tool brings the model to your console, allowing for direct inference.
Getting started is a streamlined process:
1. Clone the repository.
2. Install the llm-cli tool from the repository.
3. Download your chosen models from Hugging Face. In our illustration, we employ the Llama model with 4-bit quantization.
4. Run inference on the model using the CLI tool, referencing the model file and the architecture of the model downloaded previously.
So let's start with it. First, let's install llm-cli using this command:
cargo install llm-cli --git https://github.com/rustformers/llm
Next, we proceed by fetching your desired model from Hugging Face:
curl -LO https://huggingface.co/rustformers/open-llama-ggml/resolve/main/open_llama_3b-f16.bin
Finally, we can initiate a dialogue with the model using a command akin to:
llm infer -a llama -m open_llama_3b-f16.bin -p "Rust is a cool programming language because"
We can see how the llm crate stands to facilitate seamless interactions with LLMs. This project empowers developers with streamlined CLI tools, exemplifying LLM integration into Rust projects. With installation and model preparation covered, the journey toward LLM proficiency commences. As we transition to the culmination of this exploration, the power of LLMs is within reach, ready to reshape the boundaries of AI engagement.
Conclusion: The Dawn of Accessible AI with Rust and LLM
In this exploration, we've delved deep into the revolutionary Rust library, LLM, and its transformative potential to bring Large Language Models (LLMs) to the masses. No longer is the prowess of advanced AI models locked behind the gates of high-end GPU architectures. With the symbiotic relationship between the LLM library and the underlying GGML tensor architecture, we can seamlessly run language models on standard CPUs. This is made possible largely by the potent technique of model quantization, which GGML has incorporated. By optimizing the balance between computational efficiency and performance, models can now run in environments that were previously deemed infeasible.
The Rustformers' dedication to the cause shines through their comprehensive toolset.
Their offerings extend from pre-engineered models on Hugging Face, ensuring ease of integration, to a CLI tool that simplifies the very interaction with these models. For Rust developers, the horizon of AI integration has never seemed clearer or more accessible.As we wrap up this segment, it's evident that the paradigm of AI integration is rapidly shifting. With tools like the llm crate, developers are equipped with everything they need to harness the full might of LLMs in their Rust projects. But the journey doesn't stop here. In the next part of this series, we venture beyond the basics, and into the realm of practical application. Join us as we take a leap forward, constructing a web server in Rust that leverages the llm crate.Author BioAlan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.LinkedIn 

Spark and LangChain for Data Analysis

Alan Bernardo Palacio
31 Aug 2023
12 min read
IntroductionIn today's data-driven world, the demand for extracting insights from large datasets has led to the development of powerful tools and libraries. Apache Spark, a fast and general-purpose cluster computing system, has revolutionized big data processing. Coupled with LangChain, a cutting-edge library built atop advanced language models, you can now seamlessly combine the analytical capabilities of Spark with the natural language interaction facilitated by LangChain. This article introduces Spark, explores the features of LangChain, and provides practical examples of using Spark with LangChain for data analysis.Understanding Apache SparkThe processing and analysis of large datasets have become crucial for organizations and individuals alike. Apache Spark has emerged as a powerful framework that revolutionizes the way we handle big data. Spark is designed for speed, ease of use, and sophisticated analytics. It provides a unified platform for various data processing tasks, such as batch processing, interactive querying, machine learning, and real-time stream processing.At its core, Apache Spark is an open-source, distributed computing system that excels at processing and analyzing large datasets in parallel. Unlike traditional MapReduce systems, Spark introduces the concept of Resilient Distributed Datasets (RDDs), which are immutable distributed collections of data. RDDs can be transformed and operated upon using a wide range of high-level APIs provided by Spark, making it possible to perform complex data manipulations with ease.Key Components of SparkSpark consists of several components that contribute to its versatility and efficiency:Spark Core: The foundation of Spark, responsible for tasks such as task scheduling, memory management, and fault recovery. It also provides APIs for creating and manipulating RDDs.Spark SQL: A module that allows Spark to work seamlessly with structured data using SQL-like queries. It enables users to interact with structured data through the familiar SQL language.Spark Streaming: Enables real-time stream processing, making it possible to process and analyze data in near real-time as it arrives in the system.MLlib (Machine Learning Library): A scalable machine learning library built on top of Spark, offering a wide range of machine learning algorithms and tools.GraphX: A graph processing library that provides abstractions for efficiently manipulating graph-structured data.Spark DataFrame: A higher-level abstraction on top of RDDs, providing a structured and more optimized way to work with data. DataFrames offer optimization opportunities, enabling Spark's Catalyst optimizer to perform query optimization and code generation.Spark's distributed computing architecture enables it to achieve high performance and scalability. It employs a master/worker architecture where a central driver program coordinates tasks across multiple worker nodes. Data is distributed across these nodes, and tasks are executed in parallel on the distributed data.We will be diving into two different types of interaction with Spark, SparkSQL, and Spark Data Frame. Apache Spark is a distributed computing framework with Spark SQL as one of its modules for structured data processing. Spark DataFrame is a distributed collection of data organized into named columns, offering a programming abstraction similar to data frames in R or Python but optimized for distributed processing. It provides a functional programming API, allowing operations like select(), filter(), and groupBy(). 
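For a quick, self-contained illustration of that functional DataFrame style (using a tiny made-up dataset, not data from this article), the following PySpark sketch chains select(), filter(), and groupBy():

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A tiny, made-up dataset purely for illustration
people = spark.createDataFrame(
    [("Alice", 34, "Engineering"), ("Bob", 45, "Sales"), ("Carol", 29, "Engineering")],
    ["name", "age", "department"],
)

# Functional, programmatic style: project, filter, then aggregate
(people
    .select("name", "age", "department")
    .filter(people.age > 30)
    .groupBy("department")
    .count()
    .show())

Each call returns a new DataFrame, so transformations compose naturally and are only executed when an action such as show() is triggered.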
On the other hand, Spark SQL allows users to run unmodified SQL queries on Spark data, integrating seamlessly with DataFrames and offering a bridge to BI tools through JDBC/ODBC.Both Spark DataFrame and Spark SQL leverage the Catalyst optimizer for efficient query execution. While DataFrames are preferred for programmatic APIs and functional capabilities, Spark SQL is ideal for ad-hoc querying and users familiar with SQL. The choice between them often hinges on the specific use case and the user's familiarity with either SQL or functional programming.In the next sections, we will explore how LangChain complements Spark's capabilities by introducing natural language interactions through agents.Introducing Spark Agent to LangChainLangChain, a dynamic library built upon the foundations of modern Language Model (LLM) technologies, is a pivotal addition to the world of data analysis. It bridges the gap between the power of Spark and the ease of human language interaction.LangChain harnesses the capabilities of advanced LLMs like ChatGPT and HuggingFace-hosted Models. These language models have proven their prowess in understanding and generating human-like text. LangChain capitalizes on this potential to enable users to interact with data and code through natural language queries.Empowering Data AnalysisThe introduction of the Spark Agent to LangChain brings about a transformative shift in data analysis workflows. Users are now able to tap into the immense analytical capabilities of Spark through simple daily language. This innovation opens doors for professionals from various domains to seamlessly explore datasets, uncover insights, and derive value without the need for deep technical expertise.LangChain acts as a bridge, connecting the technical realm of data processing with the non-technical world of language understanding. It empowers individuals who may not be well-versed in coding or data manipulation to engage with data-driven tasks effectively. This accessibility democratizes data analysis and makes it inclusive for a broader audience.The integration of LangChain with Spark involves a thoughtful orchestration of components that work in harmony to bring human-language interaction to the world of data analysis. At the heart of this integration lies the collaboration between ChatGPT, a sophisticated language model, and PythonREPL, a Python Read-Evaluate-Print Loop. The workflow is as follows:ChatGPT receives user queries in natural language and generates a Python command as a solution.The generated Python command is sent to PythonREPL for execution.PythonREPL executes the command and produces a result.ChatGPT takes the result from PythonREPL and translates it into a final answer in natural language.This collaborative process can repeat multiple times, allowing users to engage in iterative conversations and deep dives into data analysis.Several keynotes ensure a seamless interaction between the language model and the code execution environment:Initial Prompt Setup: The initial prompt given to ChatGPT defines its behavior and available tooling. This prompt guides ChatGPT on the desired actions and toolkits to employ.Connection between ChatGPT and PythonREPL: Through predefined prompts, the format of the answer is established. Regular expressions (regex) are used to extract the specific command to execute from ChatGPT's response. 
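As a minimal sketch of that extraction step (the "Command:" response format and the history handling below are illustrative assumptions, not LangChain internals):

import re

# Hypothetical model output, assuming the prompt asked for a line starting with "Command:"
llm_response = "Thought: I need the row count of the DataFrame.\nCommand: df.count()"

# Extract the command to hand off to the Python REPL
match = re.search(r"Command:\s*(.+)", llm_response)
command = match.group(1).strip() if match else None
print(command)  # -> df.count()

# Keep a local conversation history, since the model itself has no memory
history = []
history.append({"role": "assistant", "content": llm_response})

In the real toolkit, the prompt template, the extraction, and the REPL hand-off are wired together by the agent; the point here is only the shape of the loop.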
This establishes a clear flow of communication between ChatGPT and PythonREPL.Memory and Conversation History: ChatGPT does not possess a memory of past interactions. As a result, maintaining the conversation history locally and passing it with each new question is essential to maintaining context and coherence in the interaction.In the upcoming sections, we'll explore practical use cases that illustrate how this integration manifests in the real world, including interactions with Spark SQL and Spark DataFrames.The Spark SQL AgentIn this section, we will walk you through how to interact with Spark SQL using natural language, unleashing the power of Spark for querying structured data.Let's walk through a few hands-on examples to illustrate the capabilities of the integration:Exploring Data with Spark SQL Agent:Querying the dataset to understand its structure and metadata.Calculating statistical metrics like average age and fare.Extracting specific information, such as the name of the oldest survivor.Analyzing Dataframes with Spark DataFrame Agent:Counting rows to understand the dataset size.Analyzing the distribution of passengers with siblings.Computing descriptive statistics like the square root of average age.By interacting with the agents and experimenting with natural language queries, you'll witness firsthand the seamless fusion of advanced data processing with user-friendly language interactions. These examples demonstrate how Spark and LangChain can amplify your data analysis efforts, making insights more accessible and actionable.Before diving into the magic of Spark SQL interactions, let's set up the necessary environment. We'll utilize LangChain's SparkSQLToolkit to seamlessly bridge between Spark and natural language interactions. First, make sure you have your API key for OpenAI ready. You'll need it to integrate the language model.from langchain.agents import create_spark_sql_agent from langchain.agents.agent_toolkits import SparkSQLToolkit from langchain.chat_models import ChatOpenAI from langchain.utilities.spark_sql import SparkSQL import os # Set up environment variables for API keys os.environ['OPENAI_API_KEY'] = 'your-key'Now, let's get hands-on with Spark SQL. We'll work with a Titanic dataset, but you can replace it with your own data. First, create a Spark session, define a schema for the database, and load your data into a Spark DataFrame. We'll then create a table in Spark SQL to enable querying.from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() schema = "langchain_example" spark.sql(f"CREATE DATABASE IF NOT EXISTS {schema}") spark.sql(f"USE {schema}") csv_file_path = "titanic.csv" table = "titanic" spark.read.csv(csv_file_path, header=True, inferSchema=True).write.saveAsTable(table) spark.table(table).show() Now, let's initialize the Spark SQL Agent. This agent acts as your interactive companion, enabling you to query Spark SQL tables using natural language. 
We'll create a toolkit that connects LangChain, the SparkSQL instance, and the chosen language model (in this case, ChatOpenAI).from langchain.agents import AgentType spark_sql = SparkSQL(schema=schema) llm = ChatOpenAI(temperature=0, model="gpt-4-0613") toolkit = SparkSQLToolkit(db=spark_sql, llm=llm, handle_parsing_errors="Check your output and make sure it conforms!") agent_executor = create_spark_sql_agent(    llm=llm,    toolkit=toolkit,    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,    verbose=True,    handle_parsing_errors=True)Now comes the exciting part—querying Spark SQL tables using natural language! With your Spark SQL Agent ready, you can ask questions about your data and receive insightful answers. Let's try a few examples:# Describe the Titanic table agent_executor.run("Describe the titanic table") # Calculate the square root of the average age agent_executor.run("whats the square root of the average age?") # Find the name of the oldest survived passenger agent_executor.run("What's the name of the oldest survived passenger?") With these simple commands, you've tapped into the power of Spark SQL using natural language. The Spark SQL Agent makes data exploration and querying more intuitive and accessible than ever before.The Spark DataFrame AgentIn this section, we'll dive into another facet of LangChain's integration with Spark—the Spark DataFrame Agent. This agent leverages the power of Spark DataFrames and natural language interactions to provide an engaging and insightful way to analyze data.Before we begin, make sure you have a Spark session set up and your data loaded into a DataFrame. For this example, we'll use the Titanic dataset. Replace csv_file_path with the path to your own data if needed.from langchain.llms import OpenAI from pyspark.sql import SparkSession from langchain.agents import create_spark_dataframe_agent spark = SparkSession.builder.getOrCreate() csv_file_path = "titanic.csv" df = spark.read.csv(csv_file_path, header=True, inferSchema=True) df.show()Initializing the Spark DataFrame AgentNow, let's unleash the power of the Spark DataFrame Agent! This agent allows you to interact with Spark DataFrames using natural language queries. We'll initialize the agent by specifying the language model and the DataFrame you want to work with.# Initialize the Spark DataFrame Agent agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)With the agent ready, you can explore your data using natural language queries. Let's dive into a few examples:# Count the number of rows in the DataFrame agent.run("how many rows are there?") # Find the number of people with more than 3 siblings agent.run("how many people have more than 3 siblings") # Calculate the square root of the average age agent.run("whats the square root of the average age?")Remember that the Spark DataFrame Agent under the hood uses generated Python code to interact with Spark. While it's a powerful tool for interactive analysis, ensures that the generated code is safe to execute, especially in a sensitive environment.In this final section, let's tie everything together and showcase how Spark and LangChain work in harmony to unlock insights from data. We've covered the Spark SQL Agent and the Spark DataFrame Agent, so now it's time to put theory into practice.In conclusion, the combination of Spark and LangChain transcends the traditional boundaries of technical expertise, enabling data enthusiasts of all backgrounds to engage with data-driven tasks effectively. 
Through the Spark SQL Agent and Spark DataFrame Agent, LangChain empowers users to interact, explore, and analyze data using the simplicity and familiarity of natural language. So why wait? Dive in and unlock the full potential of your data analysis journey with the synergy of Spark and LangChain.ConclusionIn this article, we've delved into the world of Apache Spark and LangChain, two technologies that synergize to transform how we interact with and analyze data. By bridging the gap between technical data processing and human language understanding, Spark and LangChain enable users to derive meaningful insights from complex datasets through simple, natural language queries. The Spark SQL Agent and Spark DataFrame Agent presented here demonstrate the potential of this integration, making data analysis more accessible to a wider audience. As both technologies continue to evolve, we can expect even more powerful capabilities for unlocking the true potential of data-driven decision-making. So, whether you're a data scientist, analyst, or curious learner, harnessing the power of Spark and LangChain opens up a world of possibilities for exploring and understanding data in an intuitive and efficient manner.Author BioAlan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.LinkedIn 

Transformer Building Blocks

Saeed Dehqan
29 Aug 2023
22 min read
IntroductionTransformers employ potent techniques to preprocess tokens before sequentially inputting them into a neural network, aiding in the selection of the next token. At the transformer's apex is a basic neural network, the transformer head. The text generator model processes input tokens and generates a probability distribution for subsequent tokens. Context length, termed context length or block size, is recognized as a hyperparameter denoting input token count. The model's primary aim is to predict the next token based on input tokens (referred to as context tokens or context windows). Our goal with n tokens is to predict the subsequent fitting token following previous ones. Thus, we rely on these n tokens to anticipate the next. As humans, we attempt to grasp the conversation's context - our location and a loose foresight of the path's culmination. Upon gathering pertinent insights, relevant words emerge, while irrelevant ones fade, enabling us to choose the next word with precision. We occasionally err but backtrack, a luxury transformers lack. If they incorrectly predict (an irrelevant token), they persist, though exceptions exist, like beam search. Unlike us, transformers can't forecast. Revisiting n prior tokens, our human assessment involves inspecting them individually, and discerning relationships from diverse angles. By prioritizing pivotal tokens and disregarding superfluous ones, we evaluate tokens within various contexts. We scrutinize all n prior tokens individually, ready to prognosticate. This embodies the essence of the multihead attention mechanism in transformers. Consider a context window with 5 tokens. Each wears a distinct mask, predicting its respective next token:"To discern the void amidst, we must first grasp the fullness within." To understand what token is lacking, we must first identify what we are and possess. We need communication between tokens since tokens don’t know each other yet and in order to predict their own next token, they first need to know each other well and pair together in such a way that tokens with similar characteristics stay near each other (technically having similar vectors). Each token has three vectors that represent:●    What tokens they are looking for (known as query)●    What they really have (known as key)●    What they are (known as value)Each token with its query starts looking for similar keys, finds each other, and starts to know one another by adding up their values:Similar tokens find each other and if a token is somehow dissimilar, here Token 4, other tokens don’t consider it much. But please note that every token (much or less) has its own effect on other tokens. Also, in self-attention, all tokens ask all other tokens with their query and keys to find familiar tokens, but not the future tokens, named masked self-attention. We prohibit tokens from communicating to future tokens. After exchanging information between tokens and mixing up their values, similar tokens become more similar:As you can see, the color of similar tokens becomes more similar(in action, their vectors become more similar). Since tokens in the group wear a mask, we cannot access the true tokens’ values. We just know and distinguish them from their mask(value). This is because every token has different characteristics in different contexts, and they don’t show their true essence.So far so good; we have finished the self-attention process and now, the group is ready to predict their next tokens. 
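For readers who prefer the standard notation, the masked self-attention walked through above is usually written as follows, where Q, K, and V are the stacked query, key, and value vectors, d_k is the key dimension, and M is the mask that sets future positions to minus infinity before the softmax:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}} + M\right) V

This is exactly the query-key matching, masking, softmax, and weighted sum of values described above, and it reappears in code form in the head class later in the article.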
This is because individuals are aware of each other very well, and as a result, they can guess the next token better. Now, each token separately needs to go to a nonlinear network and then to the transformer head, to predict its own next token. We ask each one of the tokens separately to tell their opinion about the probability of what token comes next. Finally, we collect the probability distributions of all tokens in the context window. A probability distribution sums up to 100, or actually in action to 1. We give probability to every token the model has in its vocabulary. The simplest method to extract the next token from probability distributions is to select the one with the highest probability:As you can see, each token goes to the neural network and the network returns a probability distribution. The result is the following sentence: “It looks like a bug”.Voila! We managed to go through a simple Transformer model.Let’s recap everything we’ve said. A transformer receives n tokens as input, does some stuff (like self-attention, layer normalization, etc.) and feed-forward them into a neural network to get probability distributions of the next token. Each token goes to the neural network separately; if the number of tokens is 10, there are 10 probability distributions.At this point, you know intuitively how the main building blocks of a transformer work. But let us better understand them by implementing a transformer model.Clone the repository tiny-transformer:git clone https://github.com/saeeddhqan/tiny-transformerExecute simple_model.py in the repository If you simply want to run the model for training.Create a new file, and import the necessary modules:import math import torch import torch.nn as nn import torch.nn.functional as F Load the dataset and write the tokenizer: with open('shakespeare.txt') as fp: text = fp.read() chars = sorted(list(set(text))) vocab_size = len(chars) stoi = {c:i for i,c in enumerate(chars)} itos = {i:c for c,i in stoi.items()} encode = lambda s: [stoi[x] for x in s] decode = lambda e: ''.join([itos[x] for x in e])●    Open the dataset, and define a variable that is a list of all unique characters in the text.●    The set function splits the text character by character and then removes duplicates, just like sets in set theory. list(set(myvar)) is a way of removing duplicates in a list or string.●    vocab_size is the number of unique characters (here 65). ●    stoi is a dictionary where its keys are characters and values are their indices.●    itos is used to convert indices to characters. ●    encode function receives a string and returns indices of characters. ●    decode receives a list of indices and returns a string. 
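As a quick sanity check of the tokenizer (assuming every character of the sample string appears in the Shakespeare text, which holds for plain lowercase words), encoding and then decoding should round-trip:

sample = "hello world"
ids = encode(sample)              # one integer index per character
print(ids)
print(decode(ids))                # -> "hello world"
assert decode(encode(sample)) == sample  # character-level round trip

With the mapping verified, we can move on to batching the data.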
Split the dataset into test and train and write a function that returns data for training:device = 'cuda' if torch.cuda.is_available() else 'cpu' torch.manual_seed(1234) data = torch.tensor(encode(text), dtype=torch.long).to(device) train_split = int(0.9 * len(data)) train_data = data[:train_split] test_data = data[train_split:] def get_batch(split='train', block_size=16, batch_size=1) -> 'Create a random batch and returns batch along with targets': data = train_data if split == 'train' else test_data ix = torch.randint(len(data) - block_size, (batch_size,)) x = torch.stack([data[i:i + block_size] for i in ix]) y = torch.stack([data[i+1:i + block_size + 1] for i in ix]) return x, y●    Choose a suitable device.●     Set a seed to make the training reproducible.●    Convert the text into a large list of indices with the encode function.●    Since the character indices are integer, we use torch.long data type to make the data suitable for the model. ●    90% for training and 10% for testing.●    If the batch_size is 10, we select 10 chunks or sequences from the dataset and stack them up to process them simultaneously. ●    If the batch_size is 1, get_batch function selects 1 random chunk (n consequence characters) from the dataset and returns x and y, where x is 16 characters’ indices and y is the target characters for x.The shape, value, and decoded version of the selected chunk are as follows:shape x: torch.Size([1, 16]) shape y: torch.Size([1, 16]) value x: tensor([[41, 43, 6, 1, 60, 47, 50, 50, 39, 47, 52, 2, 1, 52, 43, 60]]) value y: tensor([[43, 6, 1, 60, 47, 50, 50, 39, 47, 52, 2, 1, 52, 43, 60, 43]]) decoded x: ce, villain! nev decoded y: e, villain! neveWe usually process multiple chunks or sequences at once with batching in order to speed up the training. For each character, we have an equivalent target, which is its next token. The target for ‘c’ is ‘e’, for ‘e’ is ‘,’, for ‘v’ is ‘i’, and so on. Let us talk a bit about the input shape and output shape of tensors in a transformer model. The model receives a list of token indices like the above(named a sequence, or chunk) and maps them into their corresponding vectors.●    The input shape is (batch_size, block_size).●    After mapping indices into vectors, the data shape becomes (batch_size, block_size, embed_size).●    Then, through the multihead attention and feed-forward layers, the data shape does not change.●    Finally, the data with shape (batch_size, block_size, embed_size) goes to the transformer head (a simple neural network) and the output shape becomes (batch_size, block_size, vocab_size). vocab_size is the number of unique characters that can come next (for the Shakespeare dataset, the number of unique characters is 65).Self-attentionThe communication between tokens happens in the head class; we define the scores variable to save the similarity between vectors. The higher the score is, the more two vectors have in common. We then utilize these scores to do a weighted sum of all the vectors: class head(nn.Module): def __init__(self, embeds_size=32, block_size=16, head_size=8):     super().__init__()     self.key = nn.Linear(embeds_size, head_size, bias=False)     self.query = nn.Linear(embeds_size, head_size, bias=False)     self.value = nn.Linear(embeds_size, head_size, bias=False)     self.register_buffer('tril', torch.tril(torch.ones(block_size, block_size)))     self.dropout = nn.Dropout(0.1) def forward(self, x):     B,T,C = x.shape     # What am I looking for?     q = self.query(x)     # What do I have?     
k = self.key(x)     # What is the representation value of me?     # Or: what's my personality in the group?     # Or: what mask do I have when I'm in a group?     v = self.value(x)     scores = q @ k.transpose(-2,-1) * (1 / math.sqrt(C)) # (B,T,head_size) @ (B,head_size,T) --> (B,T,T)     scores = scores.masked_fill(self.tril[:T, :T] == 0, float('-inf'))     scores = F.softmax(scores, dim=-1)     scores = self.dropout(scores)     out = scores @ v     return outUse three linear layers to transform the vector into key, query, and value, but with a smaller dimension (here same as head_size).Q, K, V: Q and K are for when we want to find similar tokens. We calculate the similarity between vectors with a dot product: q @ k.transpose(-2, -1). The shape of scores is (batch_size, block_size, block_size), which means we have the similarity scores between all the vectors in the block. V is used when we want to do the weighted sum. Scores: Pure dot product scores tend to have very high numbers that are not suitable for softmax since it makes the scores dense. Therefore, we rescale the results with a ratio of (1 / math.sqrt(C)). C is the embedding size. We call this a scaled dot product.Register_buffer: We used register_buffer to register a lower triangular tensor. In this way, when you save and load the model, this tensor also becomes part of the model.Masking: After calculating the scores, we need to replace future scores with -inf to shut them off so that the vectors do not have access to the future tokens. By doing so, these scores effectively become zero after applying the softmax function, resulting in a probability of zero for the future tokens. This process is referred to as masking. Here’s an example of masked scores with a block size 4:[[-0.1710, -inf, -inf, -inf], [ 0.2007, -0.0878, -inf, -inf], [-0.0405, 0.2913, 0.0445, -inf], [ 0.1328, -0.2244, 0.0796, 0.1719]]Softmax: It converts a vector into a probability distribution that sums up to 1. Here’s the scores after softmax:      [[1.0000, 0.0000, 0.0000, 0.0000],      [0.5716, 0.4284, 0.0000, 0.0000],      [0.2872, 0.4002, 0.3127, 0.0000],      [0.2712, 0.1897, 0.2571, 0.2820]]The scores of future tokens are zero; after doing a weighted sum, the future vectors become zero and the vectors receive none data from future vectors(n*0=0)Dropout: Dropout is a regularization technique. It drops some of the numbers in vectors randomly. Dropout helps the model to generalize, not memorize the dataset. We don’t want the model to memorize the Shakespeare model, right? We want it to create new texts like the dataset.Weighted sum: Weighted sum is used to combine different representations or embeddings based on their importance. The scores are calculated by measuring the relevance or similarity between each pair of vectors. The relevance scores are obtained by applying a scaled dot product between the query and key vectors, which are learned during the training process. The resulting weighted sum emphasizes the more important elements and reduces the influence of less relevant ones, allowing the model to focus on the most salient information. We dot product scores with values and the result is the outcome of self-attention.Output: since the embedding size and head size are 32 and 8 respectively, if the input shape is (batch_size, block_size, 32), the output has the shape of (batch_size, block_size, 8).Multihead self-attention“I have multiple personalities(v), tendencies and needs (q), and valuable things (k) in different spaces”. 
Vectors said.We transform the vectors into small dimensions, and then run self-attention on them; we did this in the previous class. In multihead self-attention, we call the head class four times, and then, concatenate the smaller vectors to have the same input shape. We call this multihead self-attention. For instance, if the shape of input data is (1, 16, 32), we transform it into four (1, 16, 8) tensors and run self-attention on these tensors. Why four times? 4 * 8 = initial shape. By using multihead self-attention and running self-attention multiple times, what we do is consider the different aspects of vectors in different spaces. That’s all!Here is the code:class multihead(nn.Module): def __init__(self, num_heads=4, head_size=8):     super().__init__()     self.multihead = nn.ModuleList([head(head_size) for _ in range(num_heads)])     self.output_linear = nn.Linear(embeds_size, embeds_size)     self.dropout = nn.Dropout(0.1) def forward(self, hidden_state):     hidden_state = torch.cat([head(hidden_state) for head in self.multihead], dim=-1)     hidden_state = self.output_linear(hidden_state)     hidden_state = self.dropout(hidden_state)     return hidden_state●    self.multihead: The variable creates four heads and we do this with nn.ModuleList.●    self.output_linear: Another transformer linear layer we apply at the end of the multihead self-attention process.●    self.dropout: Using dropout on the final results.●    hidden_state 1: Concatenating the output of heads so that we have the same shape as input. Heads transform data into different spaces with smaller dimensions, and then do the self-attention.●    hidden_state 2: After doing communication between tokens with self-attention, we use the self.output_linear projector to let the model adjust vectors further based on the gradients that flow through the layer.●    dropout: Run dropout on the output of the projection with a 10% probability of turning off values (make them zero) in the vectors.Transformer blockThere are two new techniques, including layer normalization and residual connection, that need to be explained:class transformer_block(nn.Module): def __init__(self, embeds_size=32, num_heads=8):     super().__init__()     self.head_count = embeds_size // num_heads     self.n_heads = multihead(num_heads, self.head_count)     self.ffn = nn.Sequential(         nn.Linear(embeds_size, 4 * embeds_size),         nn.ReLU(),         nn.Linear(4 * embeds_size, embeds_size),         nn.Dropout(drop_prob),     )     self.ln1 = nn.LayerNorm(embeds_size)     self.ln2 = nn.LayerNorm(embeds_size) def forward(self, hidden_state):     hidden_state = hidden_state + self.n_heads(self.ln1(hidden_state))     hidden_state = hidden_state + self.ffn(self.ln2(hidden_state))     return hidden_state self.head_count: Calculates the head size. The number of heads should be divisible by the embedding size so that we can concatenate the output of heads.self.n_heads: The multihead self-attention layer. self.ffn: This is the first time that we have non-linearity in our model. Non-linearity helps the model to capture complex relationships and patterns in the data. By introducing non-linearity through ReLU activation functions, or GLUE, the model can make a correlation for the data. As a result, it better models the intricacies of the input data. Non-linearity is like “you go to the next layer”, “You don’t go to the next layer”, or “Create y from x for the next layer”. The recommended hidden layer size is a number four times bigger than the embedding size. 
That’s why “4 * embeds_size”. You can also try SwiGLU as the activation function instead of ReLU.self.ln1 and self.ln2: Layer normalizers make the model more robust and they also help the model to converge faster. Layer normalization rescales the data in such a way that the mean is zero and the standard deviation is one. hidden_state 1: Normalize the vectors with self.ln1 and forward the vectors to the multihead attention. Next, we add the input to the output of multihead attention. It helps the model in two ways:○    First, the model has some information from the original vectors. ○    Second, when the model becomes deep, during backpropagation, the gradients will be weak for earlier layers and the model will converge too slowly. We recognize this effect as gradient vanishing. Adding the input helps to enrich the gradients and mitigate the gradient vanishing. We recognize it as a residual connection.hidden_state 2: Hidden_state 1 goes to a layer normalization and then to a nonlinear network. The output will be added to the hidden state with the aim of keeping gradients for all layers.The modelAll the necessary parts are ready, let us stack them up to make the full model:class transformer(nn.Module): def __init__(self):     super().__init__()     self.stack = nn.ModuleDict(dict(         tok_embs=nn.Embedding(vocab_size, embeds_size),         pos_embs=nn.Embedding(block_size, embeds_size),         dropout=nn.Dropout(drop_prob),         blocks=nn.Sequential(             transformer_block(),             transformer_block(),             transformer_block(),             transformer_block(),             transformer_block(),         ),         ln=nn.LayerNorm(embeds_size),         lm_head=nn.Linear(embeds_size, vocab_size),     ))●    self.stack: A list of all necessary layers.●    tok_embs: This is a learnable lookup table that receives a list of indices and returns their vectors.●    pos_embs: Just like tok_embs, it is also a learnable look-up table, but for positional embedding. It receives a list of positions and returns their vectors.●    dropout: Dropout layer.●    blocks: We create multiple transformer blocks sequentially.●    ln: A layer normalization.●    lm_heas: Transformer head receives a token and returns probabilities of the next token. To change the model to be a classifier, or a sentimental analysis model, we just need to change this layer and remove masking from the self-attention layer.The forward method of the transformer class:    def forward(self, seq, targets=None):     B, T = seq.shape     tok_emb = self.stack.tok_embs(seq) # (batch, block_size, embed_dim) (B,T,C)     pos_emb = self.stack.pos_embs(torch.arange(T, device=device))     x = tok_emb + pos_emb     x = self.stack.dropout(x)     x = self.stack.blocks(x)     x = self.stack.ln(x)     logits = self.stack.lm_head(x) # (B, block_size, vocab_size)     if targets is None:         loss = None     else:         B, T, C = logits.shape         logits = logits.view(B * T, C)         targets = targets.view(B * T)         loss = F.cross_entropy(logits, targets)     return logits, loss●  tok_emb: Convert token indices into vectors. Given the input (B, T), the output is (B, T, C), where C is the embeds_size.●  pos_emb: Given the number of tokens in the context window or block_size, it returns the positional embedding of each position.●  x 1: Add up token embeddings and position embeddings. 
A little bit lossy but it works just fine.●  x 2: Run dropout on embeddings.●  x 3: The embeddings go through all the transformer blocks, and multihead self-attention. The input is (B, T, C) and the output is (B, T, C).●  x 4: The outcome of transformer blocks goes to the layer normalization.●  logits: We usually recognize the unnormalized values extracted from the language model head as logits :)●  if-else block: Were the targets specified, we calculated cross-entropy loss. Otherwise, the loss will be None. Before calculating loss in the else block, we change the shape as the cross entropy function expects.●  Output: The method returns logits with shape (batch_size, block_size, vocab_size) and loss if any.For generating a text, add this to the transformer class:    def autocomplete(self, seq, _len=10):        for _ in range(_len):            seq_crop = seq[:, -block_size:] # crop it            logits, _ = self(seq_crop)            logits = logits[:, -1, :] # we care about the last token            probs = F.softmax(logits, dim=-1)            next_char = torch.multinomial(probs, num_samples=1)            seq = torch.cat((seq, next_char), dim=1)        return seq●  autocomplete: Given a tokenized sequence, and the number of tokens that need to be created, this method returns _len tokens.●  seq_crop: Select the last n tokens in the sequence to give it to the model. The sequence length might be larger than the block_size and it causes an error if we don’t crop it.●  logits 1: Forward the sequence into the model to receive the logits.●  logits 2: Select the last logit that will be used to select the next token.●  probs: Run the softmax on logits to get a probability distribution.●  next_char: Multinomial selects one sample from the probs. The higher the probability of a token, the higher the chance of being selected.●  seq: Add the selected character to the sequence.TrainingThe rest of the code is downstream tasks such as training loops, etc. The codes that are provided here are slightly different from the tiny-transformer repository. I trained the model with the following hyperparameters:block_size = 256 learning_rate = 9e-4 eval_interval = 300 # Every n step, we do an evaluation. iterations = 5000 # Like epochs batch_size = 64 embeds_size = 195 num_heads = 5 num_layers = 5 drop_prob = 0.15And here’s the generated text:If you need to improve the quality, increase embeds_size, num_layers, and heads.ConclusionThe article explores transformers' text generation role, detailing token preprocessing through self-attention and neural network heads. Transformers predict tokens using context length as a hyperparameter. Human context comprehension is paralleled, highlighting relevant word emergence and fading of irrelevant words for precise selection. Transformers lack human foresight and backtracking. Key components—self-attention, multihead self-attention, and transformer blocks—are explained, and supported by code snippets. Token and positional embeddings, layer normalization, and residual connections are detailed. The model's text generation is exemplified via the autocomplete method. Training parameters and text quality enhancement are addressed, showcasing transformers' potential.Author BioSaeed Dehqan trains language models from scratch. Currently, his work is centered around Language Models for text generation, and he possesses a strong understanding of the underlying concepts of neural networks. 
He is proficient in using optimizers such as genetic algorithms to fine-tune network hyperparameters and has experience with neural architecture search (NAS) by using reinforcement learning (RL). He implements models starting from data gathering to monitoring, and deployment on mobile, web, cloud, etc. 