
How-To Tutorials

7017 Articles
Sagar Lad
29 Dec 2023
6 min read

Databricks Dolly for Future AI Adoption

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Artificial intelligence is playing an increasingly crucial role in helping businesses and organizations process the huge volumes of data the world is producing. One of the biggest challenges in AI research is developing large language models that can evaluate enormous amounts of text data. Databricks Dolly opens the door to more capable NLP models and advances the field of AI technology.

Databricks Dolly for AI

Before we dive into Databricks Dolly and its impact on the future of AI adoption, let's cover the basics of large language models and their current challenges.

Large Language Models & Databricks Dolly

A large language model is an artificial intelligence system used to produce human-like language and handle natural language processing tasks. These models are created using deep learning methods and trained on large amounts of text with a neural network architecture. Their main objective is to produce meaningful, coherent text from a given prompt or input, with many applications including speech recognition, chatbots, and language translation. They have gained significant popularity because of the following capabilities:

- Text generation
- Language translation
- Classification and categorization
- Conversational AI

Recently, ChatGPT from OpenAI, Google Bard, and Bing have created unified models for training and fine-tuning such models at a large scale. The issue with these LLMs is that they store user data on external servers, exposing the cloud to unauthorized users and increasing the risk of sensitive data being compromised. Additionally, they may return irrelevant information that can harm users and lead to poor judgments, as well as content that is offensive or discriminatory toward certain individuals.

To overcome these challenges, there is a need for open-source alternatives that promote the accuracy and security of large language models. After carefully examining user concerns, the Databricks team built Databricks Dolly, an open-source chatbot that meets these criteria and performs exceptionally well in a variety of use cases.

Databricks Dolly can produce text by responding to questions, summarizing ideas, and following other natural language instructions. It is built on an open-source, 6-billion-parameter model from EleutherAI that has been fine-tuned on databricks-dolly-15k, a dataset of user-generated instructions. Because Dolly is open source with a commercial license, anyone can use it to build interactive applications without paying for API access or divulging their data to outside parties. Dolly can be trained for less than $30, keeping build costs low. When Dolly generates an answer, data can be saved in the DBFS root or another cloud object storage location that we specify. Using Dolly, we can design, build, and customize an LLM without sharing any data.

Image 1 - Databricks Dolly Differentiators

Democratizing the magic of Databricks Dolly

With Databricks Dolly, we can handle the following types of engagements:

1. Open- and closed-ended question answering
2. Information parsing from the web
3. Detailed answers based on the input
4. Creative writing

Now, let's see in detail how to use Databricks Dolly.

Step 1: Install the required libraries

Use the command below in a Databricks notebook, or use the command line, to install the required packages:

%pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"

Image 2 - Databricks Dolly Package Installation

As the image shows, once we execute this command in Databricks, the required packages are installed:

- accelerate: accelerates the training of machine learning models
- transformers: a collection of pre-trained models for NLP tasks
- torch: used to build and train deep learning models

Step 2: Provide input to Databricks Dolly

Once the model is loaded, the next step is to generate text with the generate_text function.

Image 3 - Databricks Dolly - Create a pipeline for remote code execution

Here, the pipeline function from the Transformers library is used to run NLP tasks such as text generation and sentiment analysis.
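Under the hood, a Dolly-style pipeline wraps the user's instruction in a fixed template before handing it to the underlying model. Here is a minimal, standard-library-only sketch of that prompt format; the exact wording mirrors the template published in the databricks-dolly repository and should be treated as an assumption if you use a different checkpoint:

```python
# Sketch of the instruction-prompt format used by Dolly-style models.
# The generate_text pipeline assembles a prompt of this shape internally;
# the exact wording below follows the databricks-dolly training format
# and is an assumption for other checkpoints.
INTRO = ("Below is an instruction that describes a task. "
         "Write a response that appropriately completes the request.")
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the template the model was trained on."""
    return f"{INTRO}\n\n{INSTRUCTION_KEY}\n{instruction}\n\n{RESPONSE_KEY}\n"

prompt = build_prompt("Explain what a vector database is in one sentence.")
print(prompt)
```

In practice you pass the bare instruction straight to generate_text and the pipeline applies this wrapping for you; the helper only shows what the model actually receives.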
The trust_remote_code option allows the model's custom pipeline code to be executed.

Step 3: Use the pipeline reference to parse the output

Image 4 - Databricks Dolly - Create a pipeline for remote code execution

The final step is to provide the textual input to the model via the generate_text function, which uses the language model to generate the response.

Best practices for using Databricks Dolly

- Be specific and clear in your instructions to Dolly.
- Use Databricks Machine Learning models to train and deploy Dolly for scalable, faster execution.
- Use the Hugging Face library and repository, which include multiple tutorials and examples.

Conclusion

This article describes the difficulties organizations face in adopting large language models and how Databricks addresses them with Dolly. Dolly gives businesses the ability to create a customized LLM that meets their unique requirements, with the added benefit of open-source code. The article also highlights the importance of best practices for maximizing LLM performance.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building enterprise-grade intelligent Azure data and analytics solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.

Link - Medium, Amazon, LinkedIn

Clint Bodungen
26 Dec 2023
9 min read

Creating Vulnerability Assessment Plans with ChatGPT

This article is an excerpt from the book ChatGPT for Cybersecurity Cookbook, by Clint Bodungen. Master ChatGPT and the OpenAI API, and harness the power of cutting-edge generative AI and large language models to revolutionize the way you perform penetration testing, threat detection, and risk assessment.

Introduction

In this recipe, you'll learn how to harness the power of ChatGPT and the OpenAI API to create comprehensive vulnerability assessment plans using network, system, and business details as input. This recipe is invaluable for cybersecurity students and beginners looking to familiarize themselves with proper methods and tools for vulnerability assessments, as well as for experienced cybersecurity professionals aiming to save time on planning and documentation.

Building upon the skills acquired in Chapter 1, you will delve deeper into establishing the system role of a cybersecurity professional specializing in vulnerability assessments. You'll learn how to craft effective prompts that generate well-formatted output using markdown language. This recipe also expands on the techniques explored in Enhancing Output with Templates (Application: Threat Report) and Formatting Output as a Table (Application: Security Controls Table) in Chapter 1, enabling you to design prompts that produce the desired output format. Finally, you'll discover how to use the OpenAI API and Python to generate the vulnerability assessment plan and then export it as a Microsoft Word file. This recipe will serve as a practical guide for creating detailed and efficient vulnerability assessment plans using ChatGPT and the OpenAI API.

Getting Ready

Before diving into the recipe, you should already have your OpenAI account set up and have obtained your API key. If not, revisit Chapter 1 for details. You will also need the following Python libraries installed:

1. python-docx: used to generate Microsoft Word files. Install it with pip install python-docx.
2. tqdm: used to display progress bars. Install it with pip install tqdm.

How to do it…

In this section, we will walk you through the process of using ChatGPT to create a comprehensive vulnerability assessment plan tailored to a specific network and organization's needs. By providing the necessary details and using the given system role and prompt, you will be able to generate a well-structured assessment plan.

1. Begin by logging in to your ChatGPT account and navigating to the ChatGPT web UI.
2. Start a new conversation with ChatGPT by clicking the "New chat" button.
3. Enter the following prompt to establish a system role: You are a cybersecurity professional specializing in vulnerability assessment.
4. Enter the following message text, replacing the placeholders in the "{ }" brackets with the appropriate data of your choice. You can either combine this prompt with the system role or enter it separately:

Using cybersecurity industry standards and best practices, create a complete and detailed assessment plan (not a penetration test) that includes: introduction, outline of the process/methodology, tools needed, and a very detailed multi-layered outline of the steps. Provide a thorough and descriptive introduction and as much detail and description as possible throughout the plan. The plan should cover not only assessment of technical vulnerabilities on systems but also policies, procedures, and compliance. It should include the use of scanning tools as well as configuration review, staff interviews, and a site walk-around. All recommendations should follow industry-standard best practices and methods. The plan should be a minimum of 1500 words.

Create the plan so that it is specific to the following details:

Network Size: {Large}
Number of Nodes: {1000}
Type of Devices: {Desktops, Laptops, Printers, Routers}
Specific systems or devices that need to be excluded from the assessment: {None}
Operating Systems: {Windows 10, MacOS, Linux}
Network Topology: {Star}
Access Controls: {Role-based access control}
Previous Security Incidents: {3 incidents in the last year}
Compliance Requirements: {HIPAA}
Business Critical Assets: {Financial data, Personal health information}
Data Classification: {Highly confidential}
Goals and objectives of the vulnerability assessment: {To identify and prioritize potential vulnerabilities in the network and provide recommendations for remediation and risk mitigation.}
Timeline for the vulnerability assessment: {4 weeks}
Team: {3 cybersecurity professionals, including a vulnerability assessment lead and two security analysts}
Expected deliverables of the assessment: {A detailed report outlining the results of the vulnerability assessment, including identified vulnerabilities, their criticality, potential impact on the network, and recommendations for remediation and risk mitigation.}
Audience: {The organization's IT department, senior management, and any external auditors or regulators.}

Provide the plan using the following format and markdown language:

#Vulnerability Assessment Plan
##Introduction
Thorough introduction to the plan, including the scope, reasons for doing it, goals and objectives, and a summary of the plan
##Process/Methodology
Description and outline of the process/methodology
##Tools Required
List of required tools and applications, with their descriptions and reasons needed
##Assessment Steps
Detailed, multi-layered outline of the assessment steps

Hint: If you are performing this in the OpenAI Playground, it is advisable to use Chat mode, entering the role in the System window and the prompt in the User message window.
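If you prefer to script step 4 rather than paste into the web UI, the placeholder substitution is straightforward. The following standard-library sketch fills a (deliberately abbreviated) prompt skeleton from a dictionary; the field names are illustrative, and the book's full recipe additionally sends the result through the OpenAI API and exports a Word file with python-docx, both omitted here:

```python
# Fill the { } placeholders from step 4 programmatically, so the same
# prompt skeleton can be reused across assessments. The skeleton below is
# truncated and the field names/values are illustrative examples.
PROMPT_SKELETON = (
    "Using cybersecurity industry standards and best practices, create a "
    "complete and detailed assessment plan (not a penetration test) "
    "specific to the following details:\n"
    "Network Size: {network_size}\n"
    "Number of Nodes: {number_of_nodes}\n"
    "Compliance Requirements: {compliance}\n"
    "Timeline for the vulnerability assessment: {timeline}\n"
)

details = {
    "network_size": "Large",
    "number_of_nodes": "1000",
    "compliance": "HIPAA",
    "timeline": "4 weeks",
}

# str.format substitutes each {name} placeholder from the dictionary.
user_prompt = PROMPT_SKELETON.format(**details)
print(user_prompt)
```

The resulting string is what you would send as the user message, with the system role sent separately, exactly as in the Playground hint above.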
Figure 2.1 shows the system role and user prompt entered into the OpenAI Playground.

Figure 2.1 – OpenAI Playground Method

5. Review the generated output from ChatGPT. If the output is satisfactory and meets the requirements, you can proceed to the next step. If not, you can either refine your prompt or re-run the conversation to generate a new output.
6. Once you have obtained the desired output, you can use the generated markdown to create a well-structured vulnerability assessment plan in your preferred text editor or markdown viewer. Figure 2.2 shows an example of ChatGPT generating a vulnerability assessment plan using markdown formatting.

Figure 2.2 – Example ChatGPT Assessment Plan Output

How it works…

This GPT-assisted vulnerability assessment plan recipe leverages natural language processing (NLP) and machine learning to generate a comprehensive and detailed vulnerability assessment plan. By adopting a specific system role and an elaborate user request as a prompt, ChatGPT is able to customize its response to meet the requirements of a seasoned cybersecurity professional tasked with assessing an extensive network system. Here's a closer look at how this process works:

1. System role and detailed prompt: The system role designates ChatGPT as a seasoned cybersecurity professional specializing in vulnerability assessment. The prompt, which serves as the user request, is detailed and outlines the specifics of the assessment plan, from the size of the network and types of devices to the required compliance and the expected deliverables. These inputs provide context and guide ChatGPT's response, ensuring it is tailored to the complexities and requirements of the vulnerability assessment task.
2. Natural language processing and machine learning: NLP and machine learning form the bedrock of ChatGPT's capabilities. It applies these technologies to understand the intricacies of the user request, learn from the patterns, and generate a well-structured vulnerability assessment plan that is detailed, specific, and actionable.
3. Knowledge and language understanding capabilities: ChatGPT uses its extensive knowledge base and language understanding capabilities to conform to industry-standard methodologies and best practices. This is particularly important in the rapidly evolving field of cybersecurity, ensuring that the resulting vulnerability assessment plan is up to date and adheres to recognized standards.
4. Markdown output: The use of markdown ensures that the plan is formatted in a consistent and easy-to-read manner. This format can be easily integrated into reports, presentations, and other formal documents, which is crucial when communicating the plan to IT departments, senior management, and external auditors or regulators.
5. Streamlining the assessment planning process: The overall advantage of this GPT-assisted recipe is that it streamlines the creation of a comprehensive vulnerability assessment plan. You save time on planning and documentation and can generate a professional-grade assessment plan that aligns with industry standards and is tailored to the specific needs of your organization.

By applying these detailed inputs, you transform ChatGPT into a powerful tool that can assist in creating a comprehensive, tailored vulnerability assessment plan. This not only bolsters your cybersecurity efforts but also ensures your resources are used effectively in protecting your network systems.

Conclusion

In harnessing ChatGPT and the OpenAI API, this guide unlocks a streamlined approach to crafting detailed vulnerability assessment plans. Whether you are a novice or a seasoned cybersecurity professional, leveraging these tools optimizes planning and documentation. By tailoring assessments to specific network intricacies, it fosters precision in identifying potential threats and fortifying defenses. Embrace this method not only to save time but also to ensure comprehensive security measures aligned with industry standards, safeguarding networks effectively.

Author Bio

Clint Bodungen is a cybersecurity professional with 25+ years of experience and the author of Hacking Exposed: Industrial Control Systems. He began his career in the United States Air Force and has since worked with many of the world's largest energy companies and organizations, including notable cybersecurity companies such as Symantec, Kaspersky Lab, and Booz Allen Hamilton. He has published multiple articles, technical papers, and training courses on cybersecurity and aims to revolutionize cybersecurity education using computer gaming ("gamification") and AI technology. His flagship product, ThreatGEN® Red vs. Blue, is the world's first online multiplayer cybersecurity simulation game, designed to teach real-world cybersecurity.

Ryan Goodman
18 Dec 2023
11 min read

Improving AI Context with RAG Using Azure Machine Learning prompt flow

Introduction

Retrieval Augmented Generation, or RAG, is a method that can expand the breadth or depth of information available to Large Language Models (LLMs). Retrieving and delivering more data to an LLM results in applications with more contextual, relevant information. Unlike traditional web or mobile applications designed to retrieve structured data, RAG requires data to be structured, indexed, and stored differently, most commonly in a vector database. The resulting experience should provide more contextual information, with the added ability to cite the source of information. This narrower scope of information can result in a higher degree of accuracy and utility for your enterprise.

To summarize RAG:

Retrieval: When a question is posed to an LLM-powered chatbot, RAG reviews the index to find relevant facts. This is like searching through an index where all the information is neatly summarized for quick access.
Augmentation: It then takes these facts and feeds them to the language model, essentially giving it a brief on the subject matter at hand.
Generation: With this briefing, the language model is now ready to craft a response that's based not just on what it already knows but also on the latest information it has just pulled in.

In short, RAG keeps language models up to date and relevant, providing answers informed by the latest available data. It's a practical way to ensure AI remains accurate and useful for your organization, especially when dealing with current and evolving topics that may not be public knowledge. In this article, we will explore the differences between RAG and fine-tuning and how you can organize your RAG solution using Azure Machine Learning prompt flow.
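The retrieve-augment-generate loop described above can be sketched with a toy in-memory "vector index". The embeddings here are simple word-count vectors purely for illustration; a real deployment would use an embedding model and a vector store such as Azure AI Search, and the final generation step would be an LLM call on the enriched prompt:

```python
import math
from collections import Counter

# Toy "index": in practice these documents would be embedded by a model
# and stored in a vector database; here we embed by word counts.
DOCUMENTS = [
    "Azure Machine Learning prompt flow builds RAG pipelines.",
    "Fine-tuning retrains a foundational model on new data.",
    "A vector database stores embeddings for fast similarity search.",
]

def embed(text):
    # Illustrative bag-of-words "embedding", not a real embedding model.
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=1):
    # Retrieval: rank indexed documents against the question.
    q = embed(question)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augment(question, facts):
    # Augmentation: prepend retrieved facts to the prompt the LLM sees.
    # Generation is then an ordinary LLM call on this enriched prompt.
    context = "\n".join(facts)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

question = "How do I store embeddings for similarity search?"
facts = retrieve(question)
print(augment(question, facts))
```

Swapping the word-count vectors for model-produced embeddings and the Python list for a managed index is essentially what the Azure services discussed below automate.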
Retrieval Augmented Generation (RAG) vs Fine-Tuning

When working with large language models through chat interfaces like ChatGPT, you will see that their foundational knowledge is point-in-time data. As of this article, OpenAI's recently released GPT-4 Turbo is based on a dataset through April 2023. Extending an LLM's body of knowledge can involve fine-tuning or RAG.

Fine-Tuning Foundational Models

Fine-tuning involves training a foundational model on a dataset specific to your application, effectively customizing the model to perform better for certain tasks or styles; it is akin to adding a new layer of knowledge that reflects your specific data, which becomes part of the model's intelligence. To fine-tune a model, you need a dataset that represents the task or style you are aiming for, and the computational resources to perform the training. Once fine-tuned, the model's knowledge is enhanced, and these changes are permanent unless the model is fine-tuned again with additional data. Fine-tuning is ideal for tasks needing deep customization, where the information may be specialized but requires retraining infrequently. While OpenAI has started to offer fine-tuning for certain models like GPT-3.5, not all foundational models or versions can be fine-tuned, due to access restrictions on their parameters and training regimes.

Retrieval Augmented Generation

RAG is like adding a live feed of information to the foundational model, enabling it to respond with the latest data without modifying the model itself. It involves augmenting a foundational language model's response by dynamically integrating information retrieved from an external database, typically a vector database, at query time.

No model training required: The foundational model's core parameters remain unchanged. Instead, RAG serves as a real-time data layer that the model queries to inform its responses.
Real-time and up to date: Because RAG queries external data sources in real time, it ensures that the language model's responses are enhanced by the most current and relevant information available.

Image Credit: https://medium.com/@minh.hoque/retrieval-augmented-generation-grounding-ai-responses-in-factual-data-b7855c059322

RAG is widely adopted as the best starting point due to the following factors:

Data dynamics: Choose RAG for frequently changing data and fine-tuning for static, specialized domains. As with any data wrangling and model training problem, the results are only as good as your data quality.
Resource availability: With RAG you do not need the expansive computational resources and budget that fine-tuning requires, though you will still need skilled resources to implement and test it.
Flexibility and scalability: RAG offers the adaptability to continuously add current information, and ease of maintenance.

Approaching RAG with Azure Machine Learning Prompt Flow

With a solid foundation in RAG vs fine-tuning, we can dive into the details of an approach to Retrieval Augmented Generation within Azure. Azure provides multiple solutions for creating and accessing vector indexes per Microsoft's latest documentation. Azure currently offers three methods:

1. Azure AI Studio: use a vector index and retrieval augmentation.
2. Azure OpenAI Studio: use a search index with or without vectors.
3. Azure Machine Learning: use a search index as a vector store in a prompt flow.

Azure's approach to RAG lets you tailor the model to your business needs and integrate it into private or public-facing applications. What remains consistent is the ability to prepare and feed your data into the LLM of your choice. Within Azure Machine Learning prompt flow, Microsoft includes a number of practical features, including a fact-checking layer that runs alongside the existing model to ensure accuracy.
Additionally, you can feed supplementary data directly to your large language models as prompts, enriching their responses with up-to-date and relevant information. Azure Machine Learning simplifies the process of augmenting your AI-powered app with the latest data, without the time and financial burdens often associated with comprehensive model retraining. A benefit of using these services is the scalability, security, and compliance functions native to Azure. A standard feature of Azure Machine Learning, for ML models or LLMs, is a point-and-click flow or notebook code interface for building your AI pipelines.

1. Data acquisition and preparation with Azure services for immediate LLM access

Azure Blob Storage is well suited for staging your data. These files can be anything from text files to PDFs.

2. Vectorization and indexing of your data using AI Studio and Azure AI Search

This step can be completed using one of multiple approaches, both open source and Azure-native. Azure AI Studio significantly simplifies the creation and integration of a vector index for Retrieval-Augmented Generation (RAG) applications. The main steps in the process are:

Initialization: Users start by selecting their data sources in Azure AI Studio, choosing from blob storage (easier for testing) or local file uploads.
Index creation: The platform guides users through configuring search settings and choosing an index storage location, with a focus on ease of use and minimal need for manual coding.

This is one of many examples of how Azure AI Studio is democratizing advanced RAG applications by integrating different services in the Azure cloud.

SOURCE: Microsoft

3. Constructing RAG pipelines with Azure Machine Learning

Simplified RAG pipeline creation: With your index created, you can integrate it, along with AI Search, as a plug-and-play component in your prompt flow. With a no/low-code interface, you can drag and drop components to create your RAG pipeline.

Image Source: Microsoft.com

Customization with Jupyter notebooks: For those comfortable coding in Jupyter notebooks, Azure ML offers the flexibility to use them natively. This provides more control over the RAG pipeline to fit your project's unique needs. Alternatively, you can construct flows using libraries like LangChain instead of the Azure services.

4. Managing AI pipeline operations

Azure Machine Learning provides a foundation designed for iterative and continuous updates. The full lifecycle for model deployment includes test data generation and prompt evaluation, and ML and AI operations are needed to understand and make the necessary adjustments.
For organizations already running Azure ML, prompt flow fits nicely into broader machine learning operations. The following code integrates a RAG workflow into an MLOps pipeline:

    from azureml.core import Experiment
    from azureml.pipeline.core import Pipeline
    from azureml.pipeline.steps import PythonScriptStep

    # Create an Azure Machine Learning experiment
    experiment_name = 'rag_experiment'
    experiment = Experiment(ws, experiment_name)

    # Define a PythonScriptStep for RAG workflow integration
    rag_step = PythonScriptStep(name='RAG Step',
                                script_name='rag_workflow.py',
                                compute_target='your_compute_target',
                                source_directory='your_source_directory',
                                inputs=[rag_dataset.as_named_input('rag_data')],
                                outputs=[],
                                arguments=['--input_data', rag_dataset],
                                allow_reuse=True)

    # Create an Azure Machine Learning pipeline with the RAG step
    rag_pipeline = Pipeline(workspace=ws, steps=[rag_step])

    # Run the pipeline as an experiment
    pipeline_run = experiment.submit(rag_pipeline)
    pipeline_run.wait_for_completion(show_output=True)

And here is a code snippet for creating and managing data using Azure:

    from azureml.core import Dataset

    # Assuming you have a dataset named 'rag_dataset' in your
    # Azure Machine Learning workspace
    rag_dataset = Dataset.get_by_name(ws, 'rag_dataset')

    # Split the dataset into training and testing sets
    train_data, test_data = rag_dataset.random_split(percentage=0.8, seed=42)

    # Convert the datasets to pandas DataFrames for easy manipulation
    train_df = train_data.to_pandas_dataframe()
    test_df = test_data.to_pandas_dataframe()

Conclusion

It is important to note that the world of AI and LLMs is evolving at a rapid pace, where months make a difference.
Azure Machine Learning for Retrieval Augmented Generation offers a transformative approach to leveraging Large Language Models and provides a compelling solution for enterprises that already have a competency center. Azure ML's pipelines, with data ingestion, robust training, management, and deployment capabilities for RAG, are lowering the barrier to dynamic data integration with LLMs such as OpenAI's. As adoption continues to grow, we will see many exciting new use cases and success stories from organizations that adopt early and iterate fast. The benefit of Microsoft Azure is a single, managed, and supported suite of services, some of which may already be deployed within your organization, ready to support new AI adoption demands, Retrieval Augmented Generation included!

Author Bio

Ryan Goodman has dedicated 20 years to the business of data and analytics, working as a practitioner, executive, and entrepreneur. He recently founded DataTools Pro after 4 years at Reliant Funding, where he served as the VP of Analytics and BI. There, he implemented a modern data stack, utilized data sciences, integrated cloud analytics, and established a governance structure. Drawing from his experiences as a customer, Ryan is now collaborating with his team to develop rapid deployment industry solutions. These solutions utilize machine learning, LLMs, and modern data platforms to significantly reduce the time to value for data and analytics teams.

Merlyn Shelley
15 Dec 2023
12 min read

AI_Distilled #28: Gen AI - Reshaping Industries, Redefining Possibilities

👋 Hello,

"Once in a while, technology comes along that is so powerful and so broadly applicable that it accelerates the normal march of economic progress. And like a lot of economists, I believe that generative AI belongs in that category." - Andrew McAfee, Principal Research Scientist, MIT Sloan School of Management

This quote vividly showcases the kaleidoscope of possibilities Gen AI unlocks as it emerges from its cocoon, orchestrating a transformative symphony across realms from medical science to office productivity. Take Google's newly released AlphaCode 2, for example, which achieves human-level proficiency in programming, or Meta's Audiobox, which pioneers next-generation audio production.

Welcome to AI_Distilled #28, your ultimate guide to the latest advancements in AI, ML, NLP, and Gen AI. This week's highlights include:

📚 Unlocking the Secrets of Geospatial Data: Dive into Bonny P. McClain's new book, "Geospatial Analysis with SQL," and master the art of manipulating data across diverse geographical landscapes. Learn foundational concepts and explore advanced spatial algorithms for a transformative journey.
🌍 Let's shift our focus to the most recent updates and advancements in the AI industry: Microsoft Forms Historic Alliance with Labor Unions to Address AI Impact on Workers Meta’s Audiobox Advances Unified Audio Generation with Enhanced Controllability Europe Secures Deal on World's First Comprehensive AI Rules Google DeepMind Launches AlphaCode 2: Advancing AI in Competitive Programming Collaboration Stable LM Releases Zephyr 3B: Compact and Powerful Language Model for Edge Devices Meta Announces Purple Llama: Advancing Open Trust and Safety in Generative AI Google Cloud Unveils Cloud TPU v5p and AI Hypercomputer for Next-Gen AI Workloads Elon Musk's xAI Chatbot Launches on X We’ve also got you your fresh dose of GPT and LLM secret knowledge and tutorials: A Primer on Enhancing Output Accuracy Using Multiple LLMs Unlocking the Potential of Prompting: Steering Frontier Models to Record-Breaking Performance Navigating Responsible AI: A Comprehensive Guide to Impact Assessment Enhancing RAG-Based Chatbots: A Guide to RAG Fusion Implementation Evaluating Retrieval-Augmented Generation (RAG) Applications with RAGAs Framework Last but not least, don’t miss out on the hands-on strategies and tips straight from the AI community for you to use on your own projects:Creating a Vision Chatbot: A Guide to LLaVA-1.5, Transformers, and Runhouse Fine-Tuning LLMs: A Comprehensive Guide Building a Web Interface for LLM Interaction with Amazon SageMaker JumpStart Mitigating Hallucinations with Retrieval Augmented Generation What’s more, we’ve also shortlisted the best GitHub repositories you should consider for inspiration: bricks-cloud/BricksLLM kwaikeg/kwaiagents facebookresearch/Pearl andvg3/LSDM Stay curious and gear up for an intellectually enriching experience! 
📥 Feedback on the Weekly Edition

Q: How can we foster effective collaboration between humans and AI systems, ensuring that AI complements human skills and enhances productivity without causing job displacement or widening societal gaps?

Share your valued opinions discreetly! Your insights could shine in our next issue for the 39K-strong AI community. Join the conversation! 🗨️✨ As a big thanks, get our bestselling "Interactive Data Visualization with Python - Second Edition" in PDF. Let's make AI_Distilled even more awesome! 🚀 Jump on in! Share your thoughts and opinions here!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!

A quick heads-up: Our team is taking a well-deserved holiday break to recharge and return with fresh ideas. So, there'll be a pause in our weekly updates for the next two weeks. We're excited to reconnect with you in the new year, brimming with new insights and creativity. Wishing you a fantastic holiday season! See you in 2024!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

SignUp | Advertise | Archives

⚡ TechWave: AI/GPT News & Analysis

⭐ Microsoft Forms Historic Alliance with Labor Unions to Address AI Impact on Workers: Microsoft is partnering with the American Federation of Labor and Congress of Industrial Organizations, a coalition of 60 labor unions representing 12.5 million workers. They plan to discuss AI's impact on jobs, offer AI training to workers, and encourage unionization with "neutrality" terms. The goal is to improve worker collaboration, influence AI development, and shape policies for frontline workers' tech skills.

⭐ Meta’s Audiobox Advances Unified Audio Generation with Enhanced Controllability: Meta researchers have unveiled Audiobox, an advanced audio generation model addressing limitations in existing models. It prioritizes controllability, enabling unique styles via text descriptions and precise management of audio elements.
Audiobox excels in speech and sound generation, achieving impressive benchmarks like 0.745 similarity on Librispeech for text-to-speech and 0.77 FAD on AudioCaps for text-to-sound using description and example-based prompts. ⭐ Europe Secures Deal on World's First Comprehensive AI Rules: EU negotiators have achieved a historic agreement on the first-ever comprehensive AI rules, known as the Artificial Intelligence Act. It addresses key issues, such as generative AI and facial recognition by law enforcement, aiming to establish clear regulations for AI while facing criticism for potential exemptions and loopholes. ⭐ Google DeepMind Launches AlphaCode 2: Advancing AI in Competitive Programming Collaboration: Google DeepMind has unveiled AlphaCode 2, a successor to its groundbreaking AI that writes code at a human level. It outperforms 85% of participants in 12 recent Codeforces contests, aiming to collaborate effectively with human coders and promote AI-human collaboration in programming, aiding problem-solving and suggesting code designs. ⭐ Stable LM Releases Zephyr 3B: Compact and Powerful Language Model for Edge Devices: Stable LM Zephyr 3B is a 3 billion parameter lightweight language model optimized for edge devices. It excels in text generation, especially instruction following and Q&A, surpassing larger models in linguistic accuracy. It's ideal for copywriting, summarization, and content personalization on resource-constrained devices, with a non-commercial license. ⭐ Meta Announces Purple Llama: Advancing Open Trust and Safety in Generative AI: Purple Llama is an initiative promoting trust and safety in generative AI. It provides tools like CyberSec Eval for cybersecurity benchmarking and Llama Guard for input/output filtering. Components are permissively licensed to encourage collaboration and standardization in AI safety tools. 
⭐ Google Cloud Unveils Cloud TPU v5p and AI Hypercomputer for Next-Gen AI Workloads: Google Cloud has launched the powerful Cloud TPU v5p AI accelerator, addressing the needs of large generative AI models with 2X more FLOPS and 3X HBM. It trains models 2.8X faster than TPU v4 and is 4X more scalable. Google also introduced the AI Hypercomputer, an efficient supercomputer architecture for AI workloads, aiming to boost innovation in AI for enterprises and developers. ⭐ Elon Musk's xAI Chatbot Launches on X: Grok, created by xAI, debuts on X (formerly Twitter) for $16/month to Premium Plus subscribers. It offers conversational answers, similar to ChatGPT and Google's Bard. Grok-1 incorporates real-time X data, providing up-to-the-minute information. Elon Musk praises Grok's rebellious personality, though its intelligence remains comparable to other chatbots. Currently text-only, xAI intends to expand Grok's capabilities to include video, audio, and more.  🔮 Expert Insights from Packt Community Geospatial Analysis with SQL - By Bonny P McClain Embark on a captivating journey into geospatial analysis, a field beyond geography enthusiasts! This book reveals how combining geospatial magic with SQL can tackle real-world challenges. Learn to create spatial databases, use SQL queries, and incorporate PostGIS and QGIS into your toolkit. Key Concepts: 🌍 Foundations:    - Understand the importance of geospatial analysis.    - See how location info enhances data exploration. 🗺️ Tobler's Wisdom:    - Embrace Walter Tobler's second law of geography.    - Explore how external factors impact the area of interest. 🔍 SQL Spatial Data Science:    - Master geospatial analysis with SQL.    - Build databases, write queries, and use handy functions. 🛠️ Toolbox Upgrade:    - Boost skills with PostGIS and QGIS.    - Handle data questions and excel in spatial analysis. Decode geospatial secrets—perfect for analysts and devs seeking location-based insights! 
Read through the Chapter 1 unlocked here...  🌟 Secret Knowledge: AI/LLM Resources⭐ A Primer on Enhancing Output Accuracy Using Multiple LLMs: Explore using chain-of-thought prompts with LLMs like GPT-4 and PaLM2 for varied responses. Learn the "majority-vote/quorum" technique to enhance accuracy by combining responses from different LLMs using AIConfig for streamlined coordination, improving output reliability and minimizing errors. ⭐ Unlocking the Potential of Prompting: Steering Frontier Models to Record-Breaking Performance: The authors explore innovative prompting techniques to improve the performance of GPT-4 and similar models, introducing "Medprompt" and related methods. They achieve a 90.10% accuracy on the MMLU challenge with "Medprompt+," sharing code on GitHub for replication and LLM optimization. ⭐ Navigating Responsible AI: A Comprehensive Guide to Impact Assessment: This article introduces the RAI impact assessment, emphasizing aligning AI with responsible principles. It mentions Microsoft's tools like the Responsible AI Standard, v2, RAI Impact Assessment Template, and Guide. The approach involves identifying use cases, stakeholders, harms, and risk mitigation. It suggests adapting RAI to organizational needs and phased alignment with product releases. ⭐ Enhancing RAG-Based Chatbots: A Guide to RAG Fusion Implementation: In the fourth installment of this tutorial series, the focus is on implementing RAG Fusion, a technique to improve Retrieval-Augmented Generation (RAG) applications. It involves converting user queries into multiple questions, searching for content in a knowledge base, and re-ranking results. The tutorial aims to enhance semantic search in RAG applications. ⭐ Evaluating Retrieval-Augmented Generation (RAG) Applications with RAGAs Framework: The article discusses challenges in making a production-ready RAG application, highlighting the need to assess retriever and generator components separately and together. 
It introduces the RAGAs framework for reference-free evaluation using LLMs, offering metrics for component-level assessment. The article provides a guide to using RAGAs for evaluation, including prerequisites, setup, data preparation, and conducting assessments. 🔛 Masterclass: AI/LLM Tutorials⭐ Creating a Vision Chatbot: A Guide to LLaVA-1.5, Transformers, and Runhouse: Discover how to build a multimodal conversational model using LLaVA-1.5, Hugging Face Transformers, and Runhouse. The post introduces the significance of multimodal conversational models, blending language and visual elements. It emphasizes the limitations of closed-source models, showcasing open-source alternatives. The tutorial includes Python code available on GitHub for deploying a vision chat assistant, providing a step-by-step guide. LLaVA-1.5, with its innovative visual embeddings, is explained, highlighting its lightweight training and impressive performance. The tutorial's implementation code, building a vision chatbot, is made accessible through standardized chat templates, and the Runhouse platform simplifies deployment on various infrastructures. ⭐ Fine-Tuning LLMs: A Comprehensive Guide: Explore the potential of fine-tuning OpenAI’s LLMs to revolutionize tasks such as customer support chatbots and financial data analysis. Learn how fine-tuning enhances LLM performance on specific datasets and discover use cases in customer support and finance. The guide walks you through the step-by-step process of fine-tuning, from preparing a training dataset to creating and using a fine-tuned model. Experience how fine-tuned LLMs, exemplified by GPT-3.5 Turbo, can transform natural language processing, opening new possibilities for diverse industries and applications. 
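The majority-vote/quorum technique from the first Secret Knowledge item above reduces to a simple tally once the responses are collected. A minimal sketch, where the per-model answers are hard-coded stand-ins rather than real API calls (the model labels below are illustrative, not from the article):

```python
from collections import Counter

def majority_vote(responses):
    """Return the answer most models agreed on, with its vote count.

    `responses` maps a model label to the answer it produced; ties are
    broken by whichever answer appeared first.
    """
    votes = Counter(responses.values())
    answer, count = votes.most_common(1)[0]
    return answer, count

# Hypothetical responses from three different LLMs to the same prompt.
responses = {
    "model_a": "Paris",
    "model_b": "Paris",
    "model_c": "Lyon",
}

answer, count = majority_vote(responses)
print(answer, count)  # Paris 2
```

In practice each value would come from a separate chat-completion call, with a framework such as AIConfig coordinating the calls; the voting step itself is just this tally.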
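The RAG Fusion recipe described above (rewrite the user query into several questions, search with each, then re-rank) usually merges the per-query result lists with reciprocal rank fusion. A minimal sketch, with hard-coded document ids standing in for real retriever output:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document ids into one list.

    Each document scores sum(1 / (k + rank)) over every list it appears
    in, so documents ranked highly by several query rewrites rise to
    the top. k=60 is the conventional smoothing constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Result lists for three hypothetical rewrites of one user query.
rankings = [
    ["doc1", "doc2", "doc3"],
    ["doc2", "doc1", "doc4"],
    ["doc2", "doc5", "doc1"],
]
print(reciprocal_rank_fusion(rankings))  # doc2 first: it tops two lists
```

The fused list is then passed to the generator as context, exactly as in a plain RAG pipeline.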
⭐ Building a Web Interface for LLM Interaction with Amazon SageMaker JumpStart: Embark on a comprehensive guide to creating a web user interface, named Chat Studio, enabling seamless interaction with LLMs like Llama 2 and Stable Diffusion through Amazon SageMaker JumpStart. Learn how to deploy SageMaker foundation models, set up AWS Lambda, IAM permissions, and run the user interface locally. Explore optional extensions to incorporate additional foundation models and deploy the application using AWS Amplify. This step-by-step tutorial covers prerequisites, deployment, solution architecture, and offers insights into the potential of LLMs, providing a hands-on approach for users to enhance conversational experiences and experiment with diverse pre-trained LLMs on AWS. ⭐ Mitigating Hallucinations with Retrieval Augmented Generation: Delve into a step-by-step guide exploring the deployment of LLMs, specifically Llama-2 from Amazon SageMaker JumpStart. Learn the crucial technique of RAG using the Pinecone vector database to counteract AI hallucinations. The primer introduces source knowledge incorporation through RAG, detailing how to set up Amazon SageMaker Studio for LLM pipelines. Discover two approaches to deploy LLMs using HuggingFaceModel and JumpStartModel. The guide further illustrates querying pre-trained LLMs and enhancing accuracy by providing additional context.   🚀 HackHub: Trending AI Tools⭐ bricks-cloud/BricksLLM: Cloud-native AI gateway written in Go enabling the creation of API keys with fine-grained access controls, rate limits, cost limits, and TTLs for both development and production use. ⭐ kwaikeg/kwaiagents: Comprises KAgentSys-Lite with limited tools, KAgentLMs featuring LLMs with agent capabilities, KAgentInstruct providing finetuning data, and KAgentBench offering over 3,000 human-edited evaluations for testing agent capabilities. 
⭐ facebookresearch/Pearl: Production-ready Reinforcement Learning AI agent library from Meta prioritizing long-term feedback, adaptability to diverse environments, and resilience to limited observability. ⭐ andvg3/LSDM: Official implementation of a NeurIPS 2023 paper on Language-driven Scene Synthesis using a Multi-conditional Diffusion Model. AI_Distilled Talkback: Unmasking the Community Buzz! 💬 Q: “How can we foster effective collaboration between humans and AI systems, ensuring that AI complements human skills and enhances productivity without causing job displacement or widening societal gaps?”  💭 "With providing more information on LLM."  Share your thoughts here! Your opinions matter—let's make this space a reflection of diverse perspectives.
James Bryant, Alok Mukherjee
13 Dec 2023
9 min read

MetaGPT: Cybersecurity's Impact on Investment Choices

This article is an excerpt from the book The Future of Finance with ChatGPT and Power BI, by James Bryant and Alok Mukherjee. Enhance decision-making, transform your market approach, and find investment opportunities by exploring AI, finance, and data visualization with ChatGPT's analytics and Power BI's visuals.

Introduction

The MetaGPT model is a highly advanced and customizable model designed to address specific research and analysis needs within various domains. In this particular context, it is geared towards identifying investment opportunities within the US market that are influenced by cybersecurity regulatory changes or cyber breaches.

Roles and responsibilities

The model has been configured to perform various specialized roles, including these:

Cybersecurity regulatory research: understanding changes in cybersecurity laws and regulations and their impact on the market.
Cyber breach analysis: investigating cyber breaches, understanding their nature, and identifying potential investment risks or opportunities.
Investment analysis: evaluating investment opportunities based on insights derived from cybersecurity changes.
Trading decisions: making informed buy or sell decisions on financial products.
Portfolio management: overseeing and aligning the investment portfolio based on cybersecurity dynamics.

Here’s how it works

Research phase: the model initiates research on the given topics (either cybersecurity regulations or breaches, depending on the role). It breaks the topic down into searchable queries, collects relevant data, ranks URLs based on credibility, and summarizes the gathered information.
Analysis phase: investment analysts then evaluate the summarized information to identify trends, insights, and potential investment opportunities or risks. They correlate cybersecurity data with market behavior, investment potential, and risk factors.
Trading phase: based on the analysis, investment traders execute appropriate trading decisions, buying or selling assets that are influenced by the cybersecurity landscape.
Management phase: the portfolio manager integrates all the insights to make overarching decisions about asset allocation, risk management, and alignment of the investment portfolio.

The following are its purposes and benefits:

Timely insights: by automating the research and analysis process, the model provides quick insights into a dynamic field such as cybersecurity, where changes can have immediate market impacts.
Data-driven decisions: the model ensures that investment decisions are grounded in comprehensive research and objective analysis, minimizing bias.
Customization: the model can be tailored to focus on specific aspects of cybersecurity, such as regulatory changes or particular types of breaches, allowing for targeted investment strategies.
Collaboration: by defining different roles, the model simulates a collaborative approach, where various experts contribute their specialized knowledge to achieve a common investment goal.

In conclusion, the MetaGPT model, with its diverse roles and sophisticated functions, serves as a powerful tool for investors looking to leverage the ever-changing landscape of cybersecurity. By integrating research, analysis, trading, and portfolio management, it provides a comprehensive, data-driven approach to identifying and capitalizing on investment opportunities arising from the complex interplay of cybersecurity and finance.
It not only streamlines the investment process but also enhances the accuracy and relevance of investment decisions in a rapidly evolving field.

Source: GitHub (MIT License): https://github.com/geekan/MetaGPT
Source: MetaGPT: Meta Programming for Multi-Agent Collaborative Framework (paper): https://arxiv.org/abs/2308.00352
By Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu

The following is a Python code snippet:

1. Begin with the installations:

npm --version
sudo npm install -g @mermaid-js/mermaid-cli
git clone https://github.com/geekan/metagpt
cd metagpt
python setup.py install

2. Run the following Python code:

# Configuration: OpenAI API Key
# Open the config/key.yaml file and insert your OpenAI API key in place of the placeholder.
# cp config/config.yaml config/key.yaml
# save and close file

# Import Necessary Libraries
import asyncio
import json
from typing import Callable
from pydantic import parse_obj_as

# Import MetaGPT Specific Modules
from metagpt.actions import Action
from metagpt.config import CONFIG
from metagpt.logs import logger
from metagpt.tools.search_engine import SearchEngine
from metagpt.tools.web_browser_engine import WebBrowserEngine, WebBrowserEngineType
from metagpt.utils.text import generate_prompt_chunk, reduce_message_length

# Define Roles
# NOTE: Replace these role definitions as per your project's needs.
RESEARCHER_ROLES = {
    'cybersecurity_regulatory_researcher': "Cybersecurity Regulatory Researcher",
    'cyber_breach_researcher': "Cyber Breach Researcher",
    'investment_analyst': "Investment Analyst",
    'investment_trader': "Investment Trader",
    'portfolio_manager': "Portfolio Manager"
}

# Define Prompts
# NOTE: Customize these prompts to suit your project's specific requirements.
LANG_PROMPT = "Please respond in {language}."

RESEARCH_BASE_SYSTEM = """You are a {role}. Your primary goal is to understand and analyze \
changes in cybersecurity regulations or breaches, identify investment opportunities, and make informed \
decisions on financial products, aligning with the current cybersecurity landscape."""

RESEARCH_TOPIC_SYSTEM = "You are a {role}, and your research topic is \"{topic}\"."

SEARCH_TOPIC_PROMPT = """Please provide up to 2 necessary keywords related to your \
research topic on cybersecurity regulations or breaches that require Google search. \
Your response must be in JSON format, for example: ["cybersecurity regulations", "cyber breach analysis"]."""

SUMMARIZE_SEARCH_PROMPT = """### Requirements
1. The keywords related to your research topic and the search results are shown in the "Reference Information" section.
2. Provide up to {decomposition_nums} queries related to your research topic based on the search results.
3. Please respond in JSON format as follows: ["query1", "query2", "query3", ...].
### Reference Information
{search}
"""

DECOMPOSITION_PROMPT = """You are a {role}, and before delving into a research topic, you break it down into several \
sub-questions. These sub-questions can be researched through online searches to gather objective opinions about the given \
topic.
---
The topic is: {topic}
---
Now, please break down the provided research topic into {decomposition_nums} search questions. You should respond with an array of \
strings in JSON format like ["question1", "question2", ...].
"""

COLLECT_AND_RANKURLS_PROMPT = """### Reference Information
1. Research Topic: "{topic}"
2. Query: "{query}"
3. The online search results: {results}
---
Please remove irrelevant search results that are not related to the query or research topic. Then, sort the remaining search results \
based on link credibility. If two results have equal credibility, prioritize them based on relevance. Provide the ranked \
results' indices in JSON format, like [0, 1, 3, 4, ...], without including other words.
"""

WEB_BROWSE_AND_SUMMARIZE_PROMPT = '''### Requirements
1. Utilize the text in the "Reference Information" section to respond to the question "{query}".
2. If the question cannot be directly answered using the text, but the text is related to the research topic, please provide \
a comprehensive summary of the text.
3. If the text is entirely unrelated to the research topic, please reply with a simple text "Not relevant."
4. Include all relevant factual information, numbers, statistics, etc., if available.
### Reference Information
{content}
'''

CONDUCT_RESEARCH_PROMPT = '''### Reference Information
{content}
### Requirements
Please provide a detailed research report on the topic: "{topic}", focusing on investment opportunities arising \
from changes in cybersecurity regulations or breaches. The report must:
- Identify and analyze investment opportunities in the US market.
- Detail how and when to invest, the structure for the investment, and the implementation and exit strategies.
- Adhere to APA style guidelines and include a minimum word count of 2,000.
- Include all source URLs in APA format at the end of the report.
'''

# Roles
RESEARCHER_ROLES = {
    'cybersecurity_regulatory_researcher': "Cybersecurity Regulatory Researcher",
    'cyber_breach_researcher': "Cyber Breach Researcher",
    'investment_analyst': "Investment Analyst",
    'investment_trader': "Investment Trader",
    'portfolio_manager': "Portfolio Manager"
}

# The rest of the classes and functions remain unchanged

Important notes

Execute the installation and setup commands in your terminal before running the Python script.
Don’t forget to replace placeholder texts in config files and the Python script with actual data or API keys.
Ensure that MetaGPT is properly installed and configured on your machine.

In this high-stakes exploration, we dissect the exhilarating yet precarious world of LLM-integrated applications. We delve into how they’re transforming finance while posing emergent ethical dilemmas and security risks that simply cannot be ignored. Be prepared to journey through real-world case studies that highlight the good, the bad, and the downright ugly of LLM applications in finance, from market-beating hedge funds to costly security breaches and ethical pitfalls.

Conclusion

"In an era shaped by cyber landscapes, MetaGPT emerges as the guiding light for astute investors. Seamlessly blending cybersecurity insights with finance, it pioneers a data-driven approach, unveiling opportunities and risks often concealed within regulatory shifts and breaches. This model isn't just a tool; it's the compass navigating the ever-changing intersection of cybersecurity and finance, empowering investors to thrive in an intricate, high-stakes market."

Author Bio

James Bryant, a finance and technology expert, excels at identifying untapped opportunities and leveraging cutting-edge tools to optimize financial processes. With expertise in finance automation, risk management, investments, trading, and banking, he's known for staying ahead of trends and driving innovation in the financial industry.
James has built corporate treasuries like Salesforce and transformed companies like Stanford Health Care through digital innovation. He is passionate about sharing his knowledge and empowering others to excel in finance. Outside of work, James enjoys skiing with his family in Lake Tahoe, running half marathons, and exploring new destinations and culinary experiences with his wife and daughter.Aloke Mukherjee is a seasoned technologist with over a decade of experience in business architecture, digital transformation, and solutions architecture. He excels at applying data-driven solutions to real-world problems and has proficiency in data analytics and planning. Aloke worked at EMC Corp and Genentech and currently spearheads the digital transformation of Finance Business Intelligence at Stanford Health Care. In addition to his work, Aloke is a Certified Personal Trainer and is passionate about helping his clients stay fit. Aloke also has a passion for wine and exploring new vineyards.
Merlyn Shelley
11 Dec 2023
13 min read

AI_Distilled #28: Unveiling Innovations Reshaping Our World

👋 Hello,

“Generative AI has the potential to change the world in ways that we can’t even imagine. It has the power to create new ideas, products, and services that will make our lives easier, more productive, and more creative. It also has the potential to solve some of the world’s biggest problems, such as climate change, poverty, and disease.” - Bill Gates, Microsoft Co-Founder

Microsoft Bing’s new Deep Search functionality is a case in point: Bing will now create AI prompts itself to provide detailed insights to user queries in ways traditional search engines can’t even match. Who could have thought LLMs would progress so much they would eventually prompt themselves? Even Runway ML is onto something big with its groundbreaking technology that creates realistic AI-generated videos that will find their way to Hollywood.

Welcome back to a new issue of AI_Distilled - your one-stop destination for all things AI, ML, NLP, and Gen AI.
Let’s get started with the latest news and developments across the AI sector:
Elon Musk's xAI Initiates $1 Billion Funding Drive in AI Race
Bing’s New Deep Search Expands Queries
AI Takes Center Stage in 2023 Word of the Year Lists
OpenAI Announces Delay in GPT Store Launch to Next Year
ChatGPT Celebrates First Anniversary with 110M Installs and $30M Revenue Milestone
Runway ML and Getty Images Collaborate on AI Video Models for Hollywood and Advertising

We’ve also curated the latest GPT and LLM resources, tutorials, and secret knowledge:
Unlocking AI Magic: A Primer on 7 Essential Libraries for Developers
Efficient LLM Fine-Tuning with QLoRA on a Laptop
Rapid Deployment of Large Open Source LLMs with Runpod and vLLM’s OpenAI Endpoint
Understanding Strategies to Enhance Retrieval-Augmented Generation (RAG) Pipeline Performance
Understanding and Mitigating Biases and Toxicity in LLMs

Finally, don’t forget to check out our hands-on tips and strategies from the AI community for you to use on your own projects:
A Step-by-Step Guide to Streamlining LLM Data Processing for Efficient Pipelines
Fine-Tuning Mistral Instruct 7B on the MedMCQA Dataset Using QLoRA
Accelerating Large-Scale Training: A Comprehensive Guide to Amazon SageMaker Data Parallel Library
Enhancing LoRA-Based Inference Speed: A Guide to Efficient LoRA Decomposition

Looking for some inspiration? Here are some GitHub repositories to get your projects going!
tacju/maxtron
Tanuki/tanuki.py
roboflow/multimodal-maestro
03axdov/muskie

Also, don't forget to check our expert insights column, which covers the interesting concepts of NLP from the book 'The Handbook of NLP with Gensim'. It's a must-read!

Stay curious and gear up for an intellectually enriching experience!
📥 Feedback on the Weekly Edition

Quick question: How can we foster effective collaboration between humans and AI systems, ensuring that AI complements human skills and enhances productivity without causing job displacement or widening societal gaps?

Share your valued opinions discreetly! Your insights could shine in our next issue for the 39K-strong AI community. Join the conversation! 🗨️✨ As a big thanks, get our bestselling "Interactive Data Visualization with Python - Second Edition" in PDF. Let's make AI_Distilled even more awesome! 🚀 Jump on in! Share your thoughts and opinions here!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

SignUp | Advertise | Archives

⚡ TechWave: AI/GPT News & Analysis

🏐 Elon Musk's xAI Initiates $1 Billion Funding Drive in AI Race: xAI is on a quest to secure $1 billion in equity, aiming to stay competitive with tech giants like OpenAI, Microsoft, and Google in the dynamic AI landscape. Already amassing $135 million from investors, xAI's total funding goal is disclosed in a filing with the US Securities and Exchange Commission.

🏐 AI Alliance Launched by Tech Giants IBM and Meta: IBM and Meta have formed a new "AI Alliance" with over 50 partners to promote open and responsible AI development. Members include Dell, Intel, CERN, NASA, and Sony. The alliance envisions fostering an open AI community for researchers and developers and can help members make progress whether or not they openly share their models.

🏐 Bing’s New Deep Search Expands Queries: Microsoft is testing a new Bing feature called Deep Search that uses GPT-4 to expand search queries before providing results. Deep Search displays the expanded topics in a panel for users to select the one that best fits what they want to know. It then tailors the search results to that description. Microsoft says the feature can take up to 30 seconds due to the AI generation.
🏐 AI Takes Center Stage in 2023 Word of the Year Lists: In 2023, AI dominates tech, influencing "word of the year" choices. Cambridge picks "hallucinate" for AI's tendency to invent information; Merriam-Webster chooses "authentic" to address AI's impact on reality. Oxford recognizes "prompt" for its evolved role in instructing generative AI, reflecting society's increased integration of AI into everyday language and culture. 🏐 OpenAI Announces Delay in GPT Store Launch to Next Year: OpenAI delays the GPT store release until next year, citing unexpected challenges and postponing the initial December launch plan. Despite recent challenges, including CEO changes and employee unrest, development continues, and updates for ChatGPT are expected. The GPT store aims to be a marketplace for users to sell and share custom GPTs, with creators compensated based on usage. 🏐 ChatGPT Celebrates First Anniversary with 110M Installs and $30M Revenue Milestone: ChatGPT's mobile apps, launched in May 2023 on iOS and later on Android, have exceeded 110 million installs, yielding nearly $30 million in revenue. The success is fueled by the ChatGPT Plus subscription, offering perks. Despite competition, downloads surge, with Android hitting 18 million in a week. The company expects continued growth by year-end 2023. 🏐 Runway ML and Getty Images Collaborate on AI Video Models for Hollywood and Advertising: NYC video AI startup Runway ML, backed by Google and NVIDIA, announces a partnership with Getty Images for the Runway <> Getty Images Model (RGM), a generative AI video model. Targeting Hollywood, advertising, media, and broadcasting, it enables customized content workflows for Runway enterprise customers. 🔮 Expert Insights from Packt Community The Handbook of NLP with Gensim - By Chris Kuo NLU + NLG = NLP NLP is an umbrella term that covers natural language understanding (NLU) and NLG. We’ll go through both in the next sections. 
NLU
Many languages, such as English, German, and Chinese, have been developing for hundreds of years and continue to evolve. Humans can use languages artfully in various social contexts. Now, we are asking a computer to understand human language. What is very basic to us may not be so apparent to a computer. Linguists have contributed much to the development of computers’ understanding in terms of syntax, semantics, phonology, morphology, and pragmatics.
NLU focuses on understanding the meaning of human language. It takes text or speech input and then analyzes the syntax, semantics, phonology, morphology, and pragmatics in the language. Let’s briefly go over each one:
Syntax: This is the study of how words are arranged to form phrases and clauses, as well as the use of punctuation, the order of words, and sentences.
Semantics: This is the study of the possible meanings of a sentence based on the interactions between the words in it. It is concerned with the interpretation of language, rather than its form or structure. For example, the word “table” as a noun can refer to “a piece of furniture having a smooth flat top that is usually supported by one or more vertical legs” or to a data frame in a computer language. NLU can distinguish the two meanings of such a word through a technique called word embedding.
Phonology: This is the study of the sound system of a language, including the sounds of speech (phonemes), how they are combined to form words (morphology), and how they are organized into larger units such as syllables and stress patterns. For example, the sounds represented by the letters “p” and “b” in English are distinct phonemes. A phoneme is the smallest unit of sound in a language that can change the meaning of a word. Consider the words “pat” and “bat”: the only difference between these two words is the initial sound, but their meanings are different.
Morphology: This is the study of the structure of words, including the way in which they are formed from smaller units of meaning called morphemes. The term comes from “morph,” the shape or form, and “ology,” the study of something. Morphology is important because it helps us understand how words are formed, how they relate to each other, and how they change over time. For example, the word “unkindness” consists of three separate morphemes: the prefix “un-,” the root “kind,” and the suffix “-ness.”
Pragmatics: This is the study of how language is used in a social context. Pragmatics is important because it helps us understand how language works in real-world situations, and how language can be used to convey meaning and achieve specific purposes. For example, if you offer to buy your friend a McDonald’s burger, a large fries, and a large drink, your friend may reply “no” because he is concerned about gaining weight: literally, the reply only declines the offer, but in a social context it also implies that the meal is high in calories.
Now, let’s understand NLG.
NLG
While NLU is concerned with reading (a computer comprehending text), NLG is about writing (a computer producing text). The term generation in NLG refers to an NLP model generating meaningful words or even articles. Today, when you compose an email or type a sentence in an app, it presents possible words to complete your sentence or performs automatic correction. These are applications of NLG.
This content is from the book The Handbook of NLP with Gensim - By Chris Kuo (Oct 2023). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below. Read through the Chapter 1 unlocked here...
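The word-embedding idea mentioned under Semantics can be made concrete with a small sketch. The vectors below are hand-written toy numbers (a trained model, such as a gensim Word2Vec model, would learn hundreds of dimensions from a corpus); the point is only that cosine similarity scores the furniture sense of “table” as closer to “chair” than to “spreadsheet”:

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 for identical direction, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy, hand-written 3-dimensional vectors (illustrative only, not trained embeddings).
vectors = {
    "table_furniture": [0.9, 0.1, 0.0],   # placed near "chair"
    "table_dataframe": [0.1, 0.9, 0.2],   # placed near "spreadsheet"
    "chair":           [0.8, 0.2, 0.1],
    "spreadsheet":     [0.0, 0.8, 0.3],
}

print(cosine(vectors["table_furniture"], vectors["chair"]))        # high similarity
print(cosine(vectors["table_furniture"], vectors["spreadsheet"]))  # low similarity
```

In a trained embedding space, exactly these distances are what let NLU separate the two senses of a word from its context.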
🌟 Secret Knowledge: AI/LLM Resources🏀 Unlocking AI Magic: A Primer on 7 Essential Libraries for Developers: Discover seven cutting-edge libraries to enhance development projects with advanced AI features. From CopilotTextarea for AI-driven writing in React apps to PrivateGPT for secure, locally processed document interactions, explore tools that elevate your projects and impress users. 🏀 Efficient LLM Fine-Tuning with QLoRA on a Laptop: Explore QLoRA, an efficient memory-saving method for fine-tuning large language models on ordinary CPUs. The QLoRA API supports NF4, FP4, INT4, and INT8 data types for quantization, utilizing methods like LoRA and gradient checkpointing to significantly reduce memory requirements. Learn to implement QLoRA on CPUs, leveraging Intel Extension for Transformers, with experiments showcasing its efficiency on consumer-level CPUs. 🏀 Rapid Deployment of Large Open Source LLMs with Runpod and vLLM’s OpenAI Endpoint: Learn to swiftly deploy open-source LLMs into applications with a tutorial, featuring the Llama-2 70B model and AutoGen framework. Utilize tools like Runpod and vLLM for computational resources and API endpoint creation, with a step-by-step guide and the option for non-gated models like Falcon-40B. 🏀 Understanding Strategies to Enhance Retrieval-Augmented Generation (RAG) Pipeline Performance: Learn optimization techniques for RAG applications by focusing on hyperparameters, tuning strategies, data ingestion, and pipeline preparation. Explore improvements in inferencing through query transformations, retrieval parameters, advanced strategies, re-ranking models, LLMs, and prompt engineering for enhanced retrieval and generation. 🏀 Understanding and Mitigating Biases and Toxicity in LLMs: Explore the impact of ethical guidelines on Large Language Model (LLM) development, examining measures adopted by companies like OpenAI and Google to address biases and toxicity. 
Research covers content generation, jailbreaking, and biases in diverse domains, revealing complexities and challenges in ensuring ethical LLMs.  🔛 Masterclass: AI/LLM Tutorials🎯 A Step-by-Step Guide to Streamlining LLM Data Processing for Efficient Pipelines: Learn to optimize the development loop for your LLM-powered recommendation system by addressing slow processing times in data pipelines. The solution involves implementing a Pipeline class to save inputs/outputs, enabling efficient error debugging. Enhance developer experience with individual pipeline stages as functions and consider future optimizations like error classes and concurrency. 🎯 Fine-Tuning Mistral Instruct 7B on the MedMCQA Dataset Using QLoRA: Explore fine-tuning Mistral Instruct 7B, an open-source LLM, for medical entrance exam questions using the MedMCQA dataset. Utilize Google Colab, GPTQ version, and LoRA technique for memory efficiency. The tutorial covers data loading, prompt creation, configuration, training setup, code snippets, and performance evaluation, offering a foundation for experimentation and enhancement. 🎯 Accelerating Large-Scale Training: A Comprehensive Guide to Amazon SageMaker Data Parallel Library: This guide details ways to boost Large Language Model (LLM) training speed with Amazon SageMaker's SMDDP. It addresses challenges in distributed training, emphasizing SMDDP's optimized AllGather for GPU communication bottleneck, exploring techniques like EFA network usage, GDRCopy coordination, and reduced GPU streaming multiprocessors for improved efficiency and cost-effectiveness on Amazon SageMaker. 🎯 Enhancing LoRA-Based Inference Speed: A Guide to Efficient LoRA Decomposition: The article highlights achieving three times faster inference for public LoRAs using the Diffusers library. 
It introduces LoRA, a parameter-efficient fine-tuning technique, detailing its decomposition process and benefits, including quick transitions and reduced warm-up and response times in the Inference API.  🚀 HackHub: Trending AI Tools⚽ tacju/maxtron: Unified meta-architecture for video segmentation, enhancing clip-level segmenters with within-clip and cross-clip tracking modules. ⚽ Tanuki/tanuki.py: Simplifies the creation of apps powered by LLMs in Python by seamlessly integrating well-typed, reliable, and stateless LLM-powered functions into applications. ⚽ roboflow/multimodal-maestro: Empowers developers with enhanced control over large multimodal models, enabling the achievement of diverse outputs through effective prompting tactics. ⚽ 03axdov/muskie: Python-based ML library that simplifies the process of dataset creation and model utilization, aiming to reduce code complexity. 
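Several items above (QLoRA fine-tuning, LoRA decomposition) rest on the same trick: instead of updating a full d x k weight matrix W, LoRA trains two small matrices B (d x r) and A (r x k) and applies W + (alpha/r) * B * A, so only (d + k) * r extra parameters are stored. A dependency-free toy sketch, with all dimensions and values made up:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d, k, r = 4, 4, 1          # full weight is d x k; LoRA rank r << min(d, k)
alpha = 2.0                # LoRA scaling hyperparameter

# Frozen base weight (identity here just to keep the arithmetic readable).
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]
B = [[0.1], [0.0], [0.0], [0.0]]   # d x r, trainable
A = [[0.0, 0.2, 0.0, 0.0]]         # r x k, trainable

delta = matmul(B, A)               # full d x k update, from only (d + k) * r parameters
W_eff = [[w + (alpha / r) * dw for w, dw in zip(wr, dr)]
         for wr, dr in zip(W, delta)]

print(W_eff[0])  # first row of the adapted weight
```

At inference time the product B * A can be merged into W once, which is why LoRA adds no per-request latency and why switching adapters is cheap.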
James Bryant, Alok Mukherjee
05 Dec 2023
11 min read

ChatGPT for Financial Analysis: Palo Alto Networks

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!
This article is an excerpt from the book, The Future of Finance with ChatGPT and Power BI, by James Bryant, Alok Mukherjee. Enhance decision-making, transform your market approach, and find investment opportunities by exploring AI, finance, and data visualization with ChatGPT's analytics and Power BI's visuals.
Introduction
In this section, we will explore an interesting example of how ChatGPT can be used to analyze and summarize earnings reports, enabling you to identify key insights and trends quickly. With the vast amount of information available in earnings reports, it can be challenging to sift through data and identify the most critical elements. Let's see how ChatGPT can help.
Here's the scenario – Palo Alto Networks has just released its quarterly earnings report. You want to understand the company's financial performance and identify any trends or potential issues that may impact the stock price or investment potential.
Step 1 – Extract key data points:
To get started, provide ChatGPT with the relevant earnings report data, such as revenue, net income, EPS, and any other important metrics. Be sure to include both current and historical data for comparison purposes. You can either input this data manually or automate the process using an API or web scraper. Let's explore the automated process to add Palo Alto Networks' financial information from September 2021 to March 2023 to ChatGPT.
Step 1.1 – Automating data collection with Python and API/web scraping:
1. Choose a financial API or web scraping library in Python:
If using an API, explore options such as Alpha Vantage (alphavantage.co):
Obtain an API key from the Alpha Vantage website (free and paid versions).
Choose a method – Python requests.
Make a request.
If web scraping, use libraries such as Requests and Beautiful Soup. For web scraping, identify the URLs of the company's financial statements or earnings reports from websites such as Yahoo Finance (finance.yahoo.com), Nasdaq (nasdaq.com), or the company's investor relations page.
2. Set up your Python script for data collection:
For APIs:
a. Import the necessary libraries (e.g., requests or pandas) – for example, import requests and import pandas as pd.
b. Define the API key, endpoint URL, and required parameters.
c. Make a request to the API to fetch data using the requests library.
d. Parse the response data and convert it into a pandas DataFrame.
For web scraping:
a. Import the necessary libraries (e.g., requests, BeautifulSoup, or pandas) – for example, import requests, from bs4 import BeautifulSoup, and import pandas as pd.
b. Define the URL(s) containing the financial data.
c. Use the requests library to fetch the HTML content of the web page.
d. Parse the HTML content using BeautifulSoup to extract the required financial data.
e. Convert the extracted data into a pandas DataFrame.
3. Collect historical data from September 2021 to March 2023 for the relevant financial metrics: adjust the parameters in your API request or web scraping script to target the specified date range.
4. Save the collected data in a structured format, such as a CSV file or a pandas DataFrame, for further processing and analysis: use pandas' DataFrame.to_csv() method to save the collected data as a CSV file, or alternatively keep the data in a pandas DataFrame for further analysis within the Python script.
With these additions, you should have a better understanding of where to obtain financial data and which Python libraries to import for your data collection scripts.
We will now provide a step-by-step guide using Python code for Palo Alto Networks' financial data.
Extract Palo Alto Networks' quarterly financial data (revenue, net income, and EPS) for the period September 2021–March 2023, and save it in a CSV file as text input, using the Alpha Vantage API key:
1. Install the necessary packages in Command Prompt:
pip install requests
pip install pandas
2. Create a new Python script file in Notepad, Notepad++, PyCharm, or Visual Studio Code. It is important that you add your Alpha Vantage API key in the following api_key line.
Copy and paste the following code into your Python script file, and name it PANW.py:

import requests
import pandas as pd

api_key = "YOUR_API_KEY"
symbol = "PANW"
url = f"https://www.alphavantage.co/query?function=EARNINGS&symbol={symbol}&apikey={api_key}"

try:
    response = requests.get(url)
    response.raise_for_status()  # Raise HTTPError for bad responses
    data = response.json()
    if 'quarterlyEarnings' in data:
        quarterly_data = data['quarterlyEarnings']
        df = pd.DataFrame(quarterly_data)
        df_filtered = df[(df['reportedDate'] >= '2021-09-01') &
                         (df['reportedDate'] <= '2023-03-31')]
        df_filtered.to_csv("palo_alto_financial_data.csv", index=False)
        input_text = ("Analyze the earnings data of Palo Alto Networks "
                      "from September 2021 to March 2023.\n\n")
        # enumerate() keeps quarter numbers sequential even after filtering
        for quarter, (_, row) in enumerate(df_filtered.iterrows(), start=1):
            # Fields absent from the API response fall back to 'N/A'
            revenue = row.get('revenue', 'N/A')
            net_income = row.get('netIncome', 'N/A')
            eps = row.get('earningsPerShare', 'N/A')
            input_text += f"Quarter {quarter}:\n"
            input_text += f"Revenue: ${revenue}\n"
            input_text += f"Net Income: ${net_income}\n"
            input_text += f"Earnings Per Share: ${eps}\n\n"
        with open("palo_alto_financial_summary.txt", "w") as f:
            f.write(input_text)
    else:
        print("Data not available.")
except requests.RequestException as e:
    print(f"An error occurred: {e}")

3. Run the Python script file:

python PANW.py

4. A separate text file, palo_alto_financial_summary.txt, and a CSV file, palo_alto_financial_data.csv, will be created once the Python script has been executed.

When the Python script, PANW.py, is executed, it performs several tasks to fetch and analyze the earnings data of Palo Alto Networks (the symbol PANW). First, it imports two essential libraries – requests to make API calls and pandas for data manipulation. The script starts by defining a few key variables – the API key to access financial data, the stock symbol of the company, and the URL to the Alpha Vantage API where the data can be retrieved.
Then, a try block is initiated to safely execute the following operations.
The script uses the requests.get() method to query the Alpha Vantage API. If the request is successful, the response is parsed as JSON and stored in a variable named data. It then checks whether data contains a key called quarterlyEarnings.
If this key exists, the script proceeds to convert the quarterly earnings data into a pandas DataFrame. It filters this DataFrame to include only the entries between September 2021 and March 2023. The filtered data is then saved as a CSV file named palo_alto_financial_data.csv:
The CSV file contains raw financial data in tabular form
The CSV file can be imported into Excel, Google Sheets, or other specialized data analysis tools
The script also constructs a text-based summary of the filtered earnings data, including revenue, net income, and EPS for each quarter within the specified date range. This summary is saved as a text file named palo_alto_financial_summary.txt:
The TXT file provides a human-readable summary of the financial data for Palo Alto Networks for the specified date range
TXT files can be used for quick overviews and presentations
If any errors occur during this process, such as a failed API request, the script will catch these exceptions and print an error message, thanks to the except block. This ensures that the script fails gracefully, providing useful feedback instead of crashing.
You can upload the CSV file (palo_alto_financial_data.csv) to ChatGPT directly if you are a ChatGPT Plus user by following these steps:
Uploading a CSV file directly into ChatGPT is supported through the Advanced Data Analysis option for ChatGPT Plus users. You can access the OpenAI website at https://openai.com/, and then log in using your login credentials. Once logged in, access your Settings and Beta options by clicking on the three dots near your email address in the bottom-left corner of the screen.
Go to Beta features and activate the Advanced data analysis function by moving the slider to the right (the option will turn green). Once this feature is activated, go to GPT-4 at the top center of the screen and then select Advanced Data Analysis from the drop-down list. You can click on the plus sign in the dialog box to upload the CSV file to ChatGPT:
•  CSV file size limitations: 500 MB
•  CSV file retention: Files are retained while a conversation is active and for three hours after the conversation is paused
If you are not a ChatGPT Plus user, follow these instructions using the OpenAI API to upload the CSV file (palo_alto_financial_data.csv) into ChatGPT, and analyze the data using the GPT-3.5-turbo model:
1. Create a new Python script file in Notepad, Notepad++, PyCharm, or Visual Studio Code. It is important that you add your OpenAI API key to the following api_key line. Copy and paste the following code into your Python script file and name it OPENAIAPI.py (note that gpt-3.5-turbo is a chat model, so it is called through the ChatCompletion endpoint rather than Completion):

import openai
import pandas as pd

df = pd.read_csv("palo_alto_financial_data.csv")
csv_string = df.to_string(index=False)

api_key = "your_openai_api_key_here"
openai.api_key = api_key

input_text = (f"Here is the financial data for Palo Alto Networks:\n\n"
              f"{csv_string}\n\nPlease analyze the data and provide insights.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # Specifying the GPT-3.5-turbo chat model
    messages=[{"role": "user", "content": input_text}],
    max_tokens=200  # Limiting the length of the generated text
)

generated_text = response.choices[0].message["content"].strip()
print("GPT-3.5-turbo PANW Analysis:", generated_text)

2. Run the Python script file:

python OPENAIAPI.py

This Python code snippet is responsible for interacting with the OpenAI API to send the formatted text input (the financial data prompt) to ChatGPT and receive the generated response.
Here’s a breakdown of each part:
The Python code snippet starts by importing two essential Python libraries – openai for interacting with the OpenAI API, and pandas for data manipulation.
The script reads financial data from a CSV file named palo_alto_financial_data.csv using pandas, converting this data into a formatted string. It then sets up the OpenAI API by initializing it with a user-provided API key.
Following this, the script prepares a prompt for GPT-3.5-turbo, consisting of the loaded financial data and a request for analysis. This prompt is sent to the GPT-3.5-turbo engine via the OpenAI API, which returns a text-based analysis, limited to 200 tokens.
The generated analysis is then extracted from the API’s response and printed to the console with the label “GPT-3.5-turbo PANW Analysis.” The script essentially automates the process of sending financial data to the GPT-3.5-turbo engine for insightful analysis, making it easy to obtain quick, AI-generated insights on Palo Alto Networks’ financial performance.
Conclusion
In conclusion, harnessing ChatGPT's capabilities, we've navigated Palo Alto Networks' earnings landscape. From automated data extraction to insightful analysis, this journey unveiled crucial financial trends. Whether utilizing APIs or scraping web data, the process demystified complexities, offering a streamlined approach. By generating comprehensive summaries and interacting with ChatGPT for deeper insights, the pathway to understanding financial data has been simplified. Embracing AI-powered analysis enables swift comprehension of earnings reports, empowering informed decisions in the realm of financial scrutiny and investment strategies.
Author Bio
James Bryant, a finance and technology expert, excels at identifying untapped opportunities and leveraging cutting-edge tools to optimize financial processes.
With expertise in finance automation, risk management, investments, trading, and banking, he's known for staying ahead of trends and driving innovation in the financial industry. James has built corporate treasuries like Salesforce and transformed companies like Stanford Health Care through digital innovation. He is passionate about sharing his knowledge and empowering others to excel in finance. Outside of work, James enjoys skiing with his family in Lake Tahoe, running half marathons, and exploring new destinations and culinary experiences with his wife and daughter.Aloke Mukherjee is a seasoned technologist with over a decade of experience in business architecture, digital transformation, and solutions architecture. He excels at applying data-driven solutions to real-world problems and has proficiency in data analytics and planning. Aloke worked at EMC Corp and Genentech and currently spearheads the digital transformation of Finance Business Intelligence at Stanford Health Care. In addition to his work, Aloke is a Certified Personal Trainer and is passionate about helping his clients stay fit. Aloke also has a passion for wine and exploring new vineyards.
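Returning to the code above: the date-window filter in PANW.py relies only on the fact that ISO-8601 dates (YYYY-MM-DD) sort correctly as plain strings, so the same selection can be reproduced without pandas. A standalone sketch with made-up quarterly records (the field names mirror Alpha Vantage's quarterlyEarnings rows; the dates and EPS values are invented for illustration):

```python
# Hypothetical records shaped like Alpha Vantage 'quarterlyEarnings' rows.
quarters = [
    {"reportedDate": "2021-08-23", "reportedEPS": "1.60"},
    {"reportedDate": "2021-11-18", "reportedEPS": "1.64"},
    {"reportedDate": "2023-02-21", "reportedEPS": "1.05"},
    {"reportedDate": "2023-05-23", "reportedEPS": "1.10"},
]

start, end = "2021-09-01", "2023-03-31"
# ISO-8601 dates compare correctly as strings, so no date parsing is needed.
window = [q for q in quarters if start <= q["reportedDate"] <= end]

for q in window:
    print(q["reportedDate"], q["reportedEPS"])
```

This is exactly what the pandas expression df[(df['reportedDate'] >= start) & (df['reportedDate'] <= end)] does, expressed as a list comprehension.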

Merlyn Shelley
04 Dec 2023
13 min read

AI_Distilled #28: Your Gen AI Navigator: Latest News & Insights

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!
👋 Hello,
“We will have for the first time something smarter than the smartest human. It's hard to say exactly what that moment is, but there will come a point where no job is needed.” -Elon Musk, CEO, Tesla.
Musk has released an important FSD v12 update to Tesla employees, a move touted as a breakthrough in the realization of true self-driving capabilities powered by neural nets. In the words of Musk, can self-driving cars be smarter than the smartest drivers? Only time will tell, but the future is exciting.
Welcome to another AI_Distilled, your one-stop hub for all things Gen AI. Let's kick off today's edition with some of the latest news and analysis across the AI domain:
👉 Amazon and Salesforce Fortify Alliance to Boost AI Integration
👉 Tesla Initiates Rollout of FSD v12 to Employees
👉 Anthropic Unveils Claude 2.1 with Enhanced AI Capabilities
👉 Stability AI Unveils 'Stable Video Diffusion' AI Tool for Animated Images
👉 Amazon Unveils Amazon Q: A Generative AI-Powered Business Assistant
👉 Amazon Introduces New AI Chips for Model Training and Inference
👉 Pika Labs Raises $55M, Launches AI Video Platform
👉 Amazon AWS Unveils Ambitious Generative AI Vision at Re:Invent
Next, we'll swiftly explore the secret knowledge column that features some key LLM resources:
💎 How to Enhance LLM Reasoning with System 2 Attention
💎 Unlocking AWS Wisdom with Amazon Q
💎 How to Optimize LLMs on Modest Hardware
💎 How to Assess AI System Risks
💎 Prompting Strategies for Domain-Specific Expertise in GPT-4
Hold on, there's additional news!
Discover the hands-on tips and proven methods straight from the AI community:
📍 Customizing Models in Amazon Bedrock
📍 Building an Infinite Chat Memory GPT Voice Assistant in Python
📍 Generating High-Quality Computer Vision Datasets
📍 Understanding LSTM in NLP
Looking to expand your AI toolkit on GitHub? Check out these repositories!
✅ neurocult/agency
✅ lunyiliu/coachlm
✅ 03axdov/muskie
✅ robocorp/llmstatemachine
Also, don't forget to check our expert insights column, which covers the interesting concepts of hybrid cloud from the book 'Achieving Digital Transformation Using Hybrid Cloud'. It's a must-read! Stay curious and gear up for an intellectually enriching experience!
📥 Feedback on the Weekly Edition
Quick question: How do you handle data quality issues, such as missing or inconsistent data, to ensure accurate visual representations? Share your valued opinions discreetly! Your insights could shine in our next issue for the 38K-strong AI community. Join the conversation! 🗨️✨
As a big thanks, get our bestselling "The Applied Artificial Intelligence Workshop" in PDF. Let's make AI_Distilled even more awesome! 🚀 Jump on in! Share your thoughts and opinions here!
Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!
Cheers,
Merlyn Shelley
Editor-in-Chief, Packt
SignUp | Advertise | Archives
⚡ TechWave: AI/GPT News & Analysis
🔸 Amazon and Salesforce Fortify Alliance to Boost AI Integration: Amazon and Salesforce strengthen their partnership, prioritizing AI integration for efficient data management. This collaboration enhances synergy between Salesforce and AWS, with Salesforce expanding its use of AWS technologies, including Hyperforce, while AWS leverages Salesforce products for unified customer profiles and personalized experiences.
🔸 Tesla Initiates Rollout of FSD v12 to Employees, Signaling Progress in Self-Driving Endeavor: Tesla begins the rollout of Full Self-Driving (FSD) v12 to employees, a key move for CEO Elon Musk's self-driving vision. The update shifts controls to neural nets, advancing autonomy. Musk aims to exit beta with v12, removing constant driver monitoring, but concerns persist about Tesla's responsibility and the timeline for full self-driving. 🔸 Anthropic Unveils Claude 2.1 with Enhanced AI Capabilities: Anthropic launches Claude 2.1 via API, featuring a groundbreaking 200K token context window, halving hallucination rates, and a beta tool use function. The expanded context window facilitates processing extensive content, improving accuracy, honesty, and comprehension, particularly in legal and financial documents. Integration capabilities with existing processes enhance Claude's utility in diverse operations. 🔸 Stability AI Unveils 'Stable Video Diffusion' AI Tool for Animated Images: Stability AI introduces Stable Video Diffusion, a free AI research tool that converts static images into brief videos using SVD and SVD-XT models. Running on NVIDIA GPUs, it generates 2-4 second MP4 clips with 576x1024 resolution, featuring dynamic scenes through panning, zooming, and animated effects. 🔸 Amazon Unveils Amazon Q: A Generative AI-Powered Business Assistant: Amazon Q is a new generative AI assistant for businesses, facilitating streamlined tasks, quick decision-making, and innovation. It engages in conversations, solves problems, and generates content by connecting to company information. Customizable plans prioritize user privacy and data security, enabling deployment in various tasks, from press releases to social media posts. 🔸 Amazon Introduces New AI Chips for Model Training and Inference: Amazon has launched new chips, including AWS Trainium2 and Graviton4, addressing GPU shortages for generative AI. 
Trainium2 boasts 4x performance and 2x energy efficiency, with a cluster of 100,000 chips capable of swift AI LLM training. Graviton4 targets inferencing, aiming to lessen GPU dependence, aligning with Amazon's commitment to meet rising AI demands. 🔸 Pika Labs Raises $55M, Launches AI Video Platform: Pika Labs, a video AI startup, secures $55 million in funding, led by a $35 million series A round from Lightspeed Venture Partners. They unveil Pika 1.0, a web platform enabling easy text prompt-based video creation and editing in diverse styles. Already used by 500,000+, the product aims to rival AI video generation platforms like Runway and Stability AI, as well as Adobe tools. 🔸 Amazon AWS Unveils Ambitious Generative AI Vision at Re:Invent: Amazon aims to lead in generative AI, surpassing rivals Azure and Google Cloud. Emphasizing the Bedrock service's diverse generative AI models and user-friendly data tools, Amazon focuses on enhancing Bedrock and introducing gen AI features to Amazon Quicksight for business intelligence applications.  🔮 Expert Insights from Packt Community Achieving Digital Transformation Using Hybrid Cloud by Vikas Grover, Ishu Verma, Praveen Rajagopalan Organizations of all sizes and industries appreciate the convenience of adjusting their resources based on demand and only paying for what they use. Leading public cloud service providers and SaaS offerings such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Salesforce, respectively, have seen significant growth in recent years, catering to the needs of small start-ups and large enterprises alike. Hybrid cloud use cases Hybrid cloud has emerged as a popular solution for organizations looking to balance the benefits of public and private clouds while addressing the data security requirements, compliance needs for regulated applications, and performance and computing needs for applications running at remote edge locations. 
Here are four use cases that showcase the versatility and flexibility of the hybrid cloud in different industries: Security: A government agency uses a hybrid cloud approach to store sensitive national security data on a private cloud for maximum security while utilizing the public cloud for cost-effective data storage and processing for non-sensitive data. Proprietary Technology: A technology company uses a hybrid cloud approach to store and manage its proprietary software on a private cloud for maximum security and control while utilizing the public cloud for cost-effective development and testing. For example, financial service companies manage trading platforms on the private cloud for maximum control while using the public cloud for running simulations and back-testing algorithms. Competitive Edge: A retail company uses a hybrid cloud solution to store critical sales and customer information on a private cloud for security and compliance while utilizing the public cloud for real-time data analysis to gain a competitive edge by offering personalized customer experiences and insights. Telecom: A telecommunications company uses a hybrid cloud approach to securely store sensitive customer information on a private cloud while utilizing the public cloud for real-time data processing and analysis to improve network performance and customer experience. This approach helps the company maintain a competitive edge in the telecom sector by providing a superior network experience to its customers. Understanding the benefits of hybrid cloud computing A hybrid cloud provides a flexible solution. Many organizations have embraced and adopted the hybrid cloud. If we take an example of a cable company, Comcast (the world’s largest cable company), as per a technical paper published by Comcast for SCTE-ISBE, Comcast serves tens of millions of customers and hosts hundreds of tenants in eight regions and three public clouds. 
This is a great testimony of using a hybrid cloud for mission-critical workloads that need to run at scale. Hybrid cloud is more popular than ever and some of the reasons that organizations are adopting a hybrid cloud are as follows: Time to market: With choices available to your IT teams to leverage appropriate resources as needed by use case, new applications and services can be launched quickly. Manage costs: Hybrid cloud helps you with optimizing and consuming resources efficiently. Make use of your current investments in existing infrastructure and when needed to scale, burst the workloads in the public cloud. Reduced lock-in: Going into the cloud may be appealing, but once in and when costs start to rise and eat the bottom line of the organization, it would be another costly proposition to reverse-migrate some of your applications from the public cloud. A hybrid cloud allows you to run anywhere and reduces your lock-in. Gaining a competitive advantage: In the competitive world of business, relying solely on public cloud technologies can put you at a disadvantage. To stay ahead of the competition, it’s important to maintain control over and ownership of cutting-edge technologies. This way, you can build and grow your business in an increasingly competitive environment. This content is from the book Achieving Digital Transformation Using Hybrid Cloud by Vikas Grover, Ishu Verma, Praveen Rajagopalan (July 2023). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below.   Read through the Chapter 1 unlocked here...  🌟 Secret Knowledge: AI/LLM Resources🔸 How to Enhance LLM Reasoning with System 2 Attention: Meta researchers introduce System 2 Attention (S2A), a revolutionary technique enhancing Large Language Models (LLMs) by refining user prompts through psychological inspiration. 
S2A focuses on task-relevant data, boosting LLMs' accuracy in reasoning tasks by eliminating irrelevant information and instructing them to generate context effectively. 🔸 Unlocking AWS Wisdom with Amazon Q: A Guide for Optimal Decision-Making: Amazon Q, a robust chatbot trained on 17 years of AWS documentation, transforms AWS task execution. Explore its prowess in navigating AWS services intricacies, offering insights on serverless vs. containers and database choices. Enhance accuracy with expert guidance on AWS Well Architected Framework, troubleshooting, workload optimization, and content creation. 🔸 How to Optimize LLMs on Modest Hardware: Quantization, a key technique for running large language models on less powerful hardware, reduces model parameters' precision. PyTorch offers dynamic, static, and quantization-aware training strategies, each balancing model size, computational demand, and accuracy. Choosing hardware involves understanding the critical role of VRAM, challenging the notion that newer GPUs are always superior. 🔸 How to Assess AI System Risks: A Comprehensive Guide: Explore the nuanced realm of AI risk assessment in this guide, covering model and enterprise risks for responsible AI development. Understand the importance of defining inherent and residual risks, utilizing the NIST Risk Management Framework, and involving diverse stakeholders. Learn to evaluate risks using likelihood and severity scales, employing a risk matrix. 🔸 The Effectiveness of Prompting Strategies for Domain-Specific Expertise in GPT-4: This study explores prompting strategies to leverage domain-specific expertise from the versatile GPT-4 model. It reveals GPT-4's exceptional performance as a medical specialist, surpassing finely-tuned medical models. 
Medprompt, a combination of prompting strategies, enables GPT-4 to achieve over 90% accuracy on the challenging MedQA dataset, challenging the conventional need for extensive fine-tuning and showcasing the broad applicability of generalist models across diverse domains.

🔛 Masterclass: AI/LLM Tutorials

🔸 Customizing Models in Amazon Bedrock: A Step-by-Step Guide: Embark on the journey of tailoring foundation models in Amazon Bedrock to align with your specific domain and organizational needs, enriching user experiences. This comprehensive guide introduces two customization options: fine-tuning and continued pre-training. Learn how to enhance model accuracy through fine-tuning using your task-specific labeled dataset, and explore the process of creating fine-tuning jobs via the Amazon Bedrock console or APIs. Additionally, explore continued pre-training, available in public preview for Amazon Titan Text models, and understand its benefits in making models more domain-specific. The guide provides practical demos using the AWS SDK for Python (Boto3) and offers crucial insights on data privacy, network security, billing, and provisioned throughput.

🔸 Building an Infinite Chat Memory GPT Voice Assistant in Python: Learn to build a customizable GPT voice assistant with OpenAI's cloud assistant feature. This guide explores the assistant API, providing auto-vectorization and intelligent context handling for extensive chat recall. Enjoy advantages like enhanced security, limitless memory, local message history retrieval, and flexible interfaces. Gain essential tools and skills for implementation, including an OpenAI API key, an ffmpeg installation, and the required Python packages.

🔸 Generating High-Quality Computer Vision Datasets: This guide outlines the process of building a customized and diverse computer vision dataset. It covers generating realistic image prompts with ChatGPT, utilizing a vision image generation model, automating object detection, and labeling.
Learn to enhance dataset quality for improved computer vision projects through prompt customization and model utilization.

🔸 Understanding LSTM in NLP: A Python Guide: This guide explores employing Long Short-Term Memory (LSTM) layers for natural language processing in Python. It covers theoretical aspects, details coding of the layer's forward pass, and includes a practical implementation with a dataset, enhancing understanding and application of LSTM in NLP through text data preprocessing and sentiment encoding.

🚀 HackHub: Trending AI Tools

🔸 neurocult/agency: Explore the capabilities of LLMs and generative AI with this library designed with a clean, effective, and Go-idiomatic approach.

🔸 lunyiliu/coachlm: Code and data for an automatic instruction revision method tailored for LLM instruction tuning, used to implement CoachLM and enhance the precision of LLM instruction tuning effortlessly.

🔸 https://github.com/03axdov/muskie: Python-based ML library streamlining the creation of custom datasets and model usage with minimal code requirements.

🔸 robocorp/llmstatemachine: Python library to unlock GPT-powered agents with ease, incorporating state machine logic and chat history memory for seamless development.
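One of the items above covers quantization for running LLMs on modest hardware. The idea can be made concrete with a few lines of plain Python. Note that this is only an illustration of the affine int8 scheme that techniques such as PyTorch's dynamic quantization build on, not PyTorch's actual implementation; the function names here are our own:

```python
def quantize_int8(values):
    """Map a list of floats onto the int8 range [-128, 127] (affine scheme)."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0          # float step per integer level
    zero_point = round(-128 - lo / scale)   # integer that represents 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [0.1, -0.53, 0.97, 0.0]
q, scale, zero_point = quantize_int8(weights)
restored = dequantize_int8(q, scale, zero_point)
# restored is close to weights, but each value is stored in 1 byte instead of 4
```

Each value loses at most about one quantization step of precision, which is the size/accuracy trade-off the article describes.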

Deploying LLMs with Amazon SageMaker - Part 2
Joshua Arvin Lat
30 Nov 2023
19 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don’t miss out – sign up today!

Introduction

In the first part of this post, we showed how easy it is to deploy large language models (LLMs) in the cloud using a managed machine learning service called Amazon SageMaker. In just a few steps, we were able to deploy a MistralLite model in a SageMaker Inference Endpoint. If you’ve worked on real ML-powered projects in the past, you probably know that deploying a model is just the first step! There are definitely a few more steps before we can consider that our application is ready for use.

If you’re looking for the link to the first part, here it is: Deploying LLMs with Amazon SageMaker - Part 1

In this post, we’ll build on top of what we already have in Part 1 and prepare a demo user interface for our chatbot application. That said, we will tackle the following sections in this post:

● Section I: Preparing the SageMaker Notebook Instance (discussed in Part 1)
● Section II: Deploying an LLM using the SageMaker Python SDK to a SageMaker Inference Endpoint (discussed in Part 1)
● Section III: Enabling Data Capture with SageMaker Model Monitor
● Section IV: Invoking the SageMaker inference endpoint using the boto3 client
● Section V: Preparing a Demo UI for our chatbot application
● Section VI: Cleaning Up

Without further ado, let’s begin!

Section III: Enabling Data Capture with SageMaker Model Monitor

In order to analyze our deployed LLM, it’s essential that we’re able to collect the requests and responses to a central storage location. Instead of building our own solution that collects the information we need, we can just utilize the built-in Model Monitor capability of SageMaker. Here, all we need to do is prepare the configuration details and run the update_data_capture_config() method of the inference endpoint object, and we’ll have the data capture setup enabled right away!
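Before enabling capture, it helps to know the shape of what we will eventually read back: each capture file is a .jsonl file where every line wraps one request/response pair. Below is a minimal sketch of parsing such a line. The record layout and field names shown are an assumption modeled on typical Model Monitor output, so verify them against your own capture files:

```python
import json

# A hand-built capture line shaped like the records Model Monitor writes to S3.
# Treat this exact schema as an assumption; inspect your own .jsonl files.
sample_line = json.dumps({
    "captureData": {
        "endpointInput": {
            "observedContentType": "application/json",
            "mode": "INPUT",
            "data": json.dumps({"inputs": "<|prompter|>What is the meaning of life?</s><|assistant|>"}),
            "encoding": "JSON",
        },
        "endpointOutput": {
            "observedContentType": "application/json",
            "mode": "OUTPUT",
            "data": json.dumps([{"generated_text": "The meaning of life is..."}]),
            "encoding": "JSON",
        },
    },
    "eventMetadata": {"eventId": "example-event-id"},
    "eventVersion": "0",
})

def parse_capture_line(line):
    """Pull the request and response payloads out of one capture record."""
    record = json.loads(line)
    capture = record["captureData"]
    request = json.loads(capture["endpointInput"]["data"])
    response = json.loads(capture["endpointOutput"]["data"])
    return request, response

request, response = parse_capture_line(sample_line)
print(request["inputs"])
print(response[0]["generated_text"])
```

Keeping a helper like this around makes it easy to turn the raw capture files we download later into request/response pairs for analysis.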
That being said, let’s proceed with the steps required to enable and test data capture for our SageMaker Inference endpoint:

STEP # 01: Continuing where we left off in Part 1 of this post, let’s get the name of the default bucket used by our session:

s3_bucket_name = sagemaker_session.default_bucket()
s3_bucket_name

STEP # 02: In addition to this, let’s prepare and define a few prerequisites as well:

prefix = "llm-deployment"
base = f"s3://{s3_bucket_name}/{prefix}"
s3_capture_upload_path = f"{base}/model-monitor"

STEP # 03: Next, let’s define the data capture config:

from sagemaker.model_monitor import DataCaptureConfig

data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=s3_capture_upload_path,
    kms_key_id=None,
    capture_options=["REQUEST", "RESPONSE"],
    csv_content_types=["text/csv"],
    json_content_types=["application/json"]
)

Here, we specify that we’ll be collecting 100% of the requests and responses that pass through the deployed model.

STEP # 04: Let’s enable data capture so that we’re able to save the request and response data in Amazon S3:

predictor.update_data_capture_config(
    data_capture_config=data_capture_config
)

Note that this step may take about 8-10 minutes to complete. Feel free to grab a cup of coffee or tea while waiting!

STEP # 05: Let’s check if we are able to capture the input request and output response by performing another sample request:

result = predictor.predict(input_data)[0]["generated_text"]
print(result)

This should yield the following output:

"The meaning of life is a philosophical question that has been debated by thinkers and philosophers for centuries. There is no single answer that can be definitively proven, as the meaning of life is subjective and can vary greatly from person to person.\n\nSome people believe that the meaning of life is to find happiness and fulfillment through personal growth, relationships, and experiences.
Others believe that the meaning of life is to serve a greater purpose, such as through a religious or spiritual calling, or by making a positive impact on the world through their work or actions.\n\nUltimately, the meaning of life is a personal journey that each individual must discover for themselves. It may involve exploring different beliefs and perspectives, seeking out new experiences, and reflecting on what brings joy and purpose to one's life."

Note that it may take a minute or two before the .jsonl file(s) containing the request and response data appear in our S3 bucket.

STEP # 06: Let’s prepare a few more examples:

prompt_examples = [
    "What is the meaning of life?",
    "What is the color of love?",
    "How to deploy LLMs using SageMaker",
    "When do we use Bedrock and when do we use SageMaker?"
]

STEP # 07: Let’s also define the perform_request() function which wraps the relevant lines of code for performing a request to our deployed LLM model:

def perform_request(prompt, predictor):
    input_data = {
        "inputs": f"<|prompter|>{prompt}</s><|assistant|>",
        "parameters": {
            "do_sample": False,
            "max_new_tokens": 2000,
            "return_full_text": False,
        }
    }
    response = predictor.predict(input_data)
    return response[0]["generated_text"]

STEP # 08: Let’s quickly test the perform_request() function:

perform_request(prompt_examples[0], predictor=predictor)

STEP # 09: With everything ready, let’s use the perform_request() function to perform requests using the examples we’ve prepared in an earlier step:

from time import sleep

for example in prompt_examples:
    print("Input:", example)
    generated = perform_request(
        prompt=example,
        predictor=predictor
    )
    print("Output:", generated)
    print("-"*20)
    sleep(1)

This should return the following:

Input: What is the meaning of life?
...
--------------------
Input: What is the color of love?
Output: The color of love is often associated with red, which is a vibrant and passionate color that is often used to represent love and romance. Red is a warm and intense color that can evoke strong emotions, making it a popular choice for representing love. However, the color of love is not limited to red. Other colors that are often associated with love include pink, which is a softer and more feminine shade of red, and white, which is often used to represent purity and innocence. Ultimately, the color of love is subjective and can vary depending on personal preferences and cultural associations. Some people may associate love with other colors, such as green, which is often used to represent growth and renewal, or blue, which is often used to represent trust and loyalty.
...

Note that this is just a portion of the overall output and you should get a relatively long response for each input prompt.

Section IV: Invoking the SageMaker inference endpoint using the boto3 client

While it’s convenient to use the SageMaker Python SDK to invoke our inference endpoint, it’s best that we also know how to use boto3 to invoke our deployed model. This will allow us to invoke the inference endpoint from an AWS Lambda function using boto3.

Image 10 — Utilizing API Gateway and AWS Lambda to invoke the deployed LLM

This Lambda function would then be triggered by an event from an API Gateway resource similar to what we have in Image 10.
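To make the Lambda idea concrete, here is a sketch of what such a handler could look like. This is not part of the tutorial's notebook, and the names are illustrative: the SageMaker runtime client is passed in as a parameter so the logic can be exercised without AWS access. In a real Lambda function you would create boto3.client('runtime.sagemaker') once at module level and use the standard (event, context) signature:

```python
import json

def lambda_handler(event, sm_runtime, endpoint_name):
    """Sketch of a Lambda-style handler that forwards a prompt to the endpoint.

    sm_runtime is injected for testability; in an actual Lambda function it
    would be a module-level boto3.client('runtime.sagemaker') and the handler
    would take the usual (event, context) arguments.
    """
    # API Gateway proxy integrations deliver the request body as a string
    prompt = json.loads(event["body"])["prompt"]
    payload = {
        "inputs": f"<|prompter|>{prompt}</s><|assistant|>",
        "parameters": {
            "do_sample": False,
            "max_new_tokens": 2000,
            "return_full_text": False,
        },
    }
    response = sm_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload).encode(),
    )
    result = json.loads(response["Body"].read().decode())
    return {
        "statusCode": 200,
        "body": json.dumps({"generated_text": result[0]["generated_text"]}),
    }
```

Because the client is injected, the same function can be unit-tested locally with a stub client before it ever touches a real endpoint.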
Note that we’re not planning to complete the entire setup in this post, but having a working example of how to use boto3 to invoke the SageMaker inference endpoint should easily allow you to build an entire working serverless application utilizing API Gateway and AWS Lambda.

STEP # 01: Let’s quickly check the endpoint name of the SageMaker inference endpoint:

predictor.endpoint_name

This should return the endpoint name with a format similar to what we have below:

'MistralLite-HKGKFRXURT'

STEP # 02: Let’s prepare our boto3 client using the following lines of code:

import boto3
import json

boto3_client = boto3.client('runtime.sagemaker')

STEP # 03: Now, let’s invoke the endpoint:

body = json.dumps(input_data).encode()
response = boto3_client.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType='application/json',
    Body=body
)
result = json.loads(response['Body'].read().decode())

STEP # 04: Let’s quickly inspect the result:

result

This should give us the following:

[{'generated_text': "The meaning of life is a philosophical question that has been debated by thinkers and philosophers for centuries.
There is no single answer that can be definitively proven, as the meaning of life is subjective and can vary greatly from person to person..."}]

STEP # 05: Let’s try that again and print the output text:

result[0]['generated_text']

This should yield the following output:

"The meaning of life is a philosophical question that has been debated by thinkers and philosophers for centuries..."

STEP # 06: Now, let’s define perform_request_2() which uses the boto3 client to invoke our deployed LLM:

def perform_request_2(prompt, boto3_client, predictor):
    input_data = {
        "inputs": f"<|prompter|>{prompt}</s><|assistant|>",
        "parameters": {
            "do_sample": False,
            "max_new_tokens": 2000,
            "return_full_text": False,
        }
    }
    body = json.dumps(input_data).encode()
    response = boto3_client.invoke_endpoint(
        EndpointName=predictor.endpoint_name,
        ContentType='application/json',
        Body=body
    )
    result = json.loads(response['Body'].read().decode())
    return result[0]["generated_text"]

STEP # 07: Next, let’s run the following block of code to have our deployed LLM answer the same set of questions using the perform_request_2() function:

for example in prompt_examples:
    print("Input:", example)
    generated = perform_request_2(
        prompt=example,
        boto3_client=boto3_client,
        predictor=predictor
    )
    print("Output:", generated)
    print("-"*20)
    sleep(1)

This will give us the following output:

Input: What is the meaning of life?
...
--------------------
Input: What is the color of love?
Output: The color of love is often associated with red, which is a vibrant and passionate color that is often used to represent love and romance. Red is a warm and intense color that can evoke strong emotions, making it a popular choice for representing love. However, the color of love is not limited to red.
Other colors that are often associated with love include pink, which is a softer and more feminine shade of red, and white, which is often used to represent purity and innocence. Ultimately, the color of love is subjective and can vary depending on personal preferences and cultural associations. Some people may associate love with other colors, such as green, which is often used to represent growth and renewal, or blue, which is often used to represent trust and loyalty.
...

Given that it may take a few minutes before the .jsonl files appear in our S3 bucket, let’s wait for about 3-5 minutes before proceeding to the next section. Feel free to grab a cup of coffee or tea while waiting!

STEP # 08: Let’s run the following block of code to list the captured data files stored in our S3 bucket:

results = !aws s3 ls {s3_capture_upload_path} --recursive
results

STEP # 09: In addition to this, let’s store the list inside the processed variable:

processed = []

for result in results:
    partial = result.split()[-1]
    path = f"s3://{s3_bucket_name}/{partial}"
    processed.append(path)

processed

STEP # 10: Let’s create a new directory named captured_data using the mkdir command:

!mkdir -p captured_data

STEP # 11: Now, let’s download the .jsonl files from the S3 bucket to the captured_data directory in our SageMaker Notebook Instance:

for index, path in enumerate(processed):
    print(index, path)
    !aws s3 cp {path} captured_data/{index}.jsonl

STEP # 12: Let’s define the load_json_file() function which will help us load files with JSON content:

import json

def load_json_file(path):
    output = []
    with open(path) as f:
        output = [json.loads(line) for line in f]
    return output

STEP # 13: Using the load_json_file() function we defined in an earlier step, let’s load the .jsonl files and store them inside the all variable for easier viewing:

all = []

for i, _ in enumerate(processed):
    print(f">: {i}")
    new_records = load_json_file(f"captured_data/{i}.jsonl")
    all =
all + new_records

all

Running this will yield the following response:

Image 11 — All captured data points inside the all variable

Feel free to analyze the nested structure stored in the all variable. In case you’re interested in how this captured data can be analyzed and processed further, you may check Chapter 8, Model Monitoring and Management Solutions, of my 2nd book “Machine Learning Engineering on AWS”.

Section V: Preparing a Demo UI for our chatbot application

Years ago, we had to spend a few hours to a few days before we were able to prepare a user interface for a working demo. If you have not used Gradio before, you would be surprised that it only takes a few lines of code to set everything up. In the next set of steps, we’ll do just that and utilize the model we’ve deployed in the previous parts for our demo application:

STEP # 01: Continuing where we left off in the previous part, let’s install a specific version of gradio using the following command:

!pip install gradio==3.49.0

STEP # 02: We’ll be using a specific version of fastapi as well:

!pip uninstall -y fastapi
!pip install fastapi==0.103.1

STEP # 03: Let’s prepare a few examples and store them in a list:

prompt_examples = [
    "What is the meaning of life?",
    "What is the color of love?",
    "How to deploy LLMs using SageMaker",
    "When do we use Bedrock and when do we use SageMaker?",
    "Try again",
    "Provide 10 alternatives",
    "Summarize the previous answer into at most 2 sentences"
]

STEP # 04: In addition to this, let’s define the parameters using the following block of code:

parameters = {
    "do_sample": False,
    "max_new_tokens": 2000,
}

STEP # 05: Next, let’s define the process_and_respond() function which we’ll use to invoke the inference endpoint:

def process_and_respond(message, chat_history):
    processed_chat_history = ""
    if len(chat_history) > 0:
        for chat in chat_history:
            processed_chat_history += f"<|prompter|>{chat[0]}</s><|assistant|>{chat[1]}</s>"
    prompt = f"{processed_chat_history}<|prompter|>{message}</s><|assistant|>"
    response = predictor.predict({"inputs": prompt, "parameters": parameters})
    parsed_response = response[0]["generated_text"][len(prompt):]
    chat_history.append((message, parsed_response))
    return "", chat_history

STEP # 06: Now, let’s set up and prepare the user interface we’ll use to interact with our chatbot:

import gradio as gr

with gr.Blocks(theme=gr.themes.Monochrome(spacing_size="sm")) as demo:
    with gr.Row():
        with gr.Column():
            message = gr.Textbox(label="Chat Message Box",
                                 placeholder="Input message here",
                                 show_label=True,
                                 lines=12)
            submit = gr.Button("Submit")
            examples = gr.Examples(examples=prompt_examples,
                                   inputs=message)
        with gr.Column():
            chatbot = gr.Chatbot(height=900)
    submit.click(process_and_respond,
                 [message, chatbot],
                 [message, chatbot],
                 queue=False)

Here, we can see the power of Gradio, as we only needed a few lines of code to prepare a demo app.

STEP # 07: Now, let’s launch our demo application using the launch() method:

demo.launch(share=True, auth=("admin", "replacethis1234!"))

This will yield the following logs:

Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://123456789012345.gradio.live

STEP # 08: Open the public URL in a new browser tab. This will load a login page which will require us to input the username and password before we are able to access the chatbot.

Image 12 — Login page

Specify admin and replacethis1234! in the login form to proceed.

STEP # 09: After signing in using the credentials, we’ll be able to access a chat interface similar to what we have in Image 13.
Here, we can try out various types of prompts.

Image 13 — The chatbot interface

Here, we have a Chat Message Box where we can input and run our different prompts on the left side of the screen. We would then see the current conversation on the right side.

STEP # 10: Click the first example “What is the meaning of life?”. This will auto-populate the text area similar to what we have in Image 14:

Image 14 — Using one of the examples to populate the Chat Message Box

STEP # 11: Click the Submit button afterward. After a few seconds, we should get the following response in the chat box:

Image 15 — Response of the deployed model

Amazing, right? Here, we just asked the AI what the meaning of life is.

STEP # 12: Click the last example “Summarize the previous answer into at most 2 sentences”. This will auto-populate the text area with the said example. Click the Submit button afterward.

Image 16 — Summarizing the previous answer into at most 2 sentences

Feel free to try other prompts. Note that we are not limited to the prompts available in the list of examples in the interface.

Important Note: Like other similar AI/ML solutions, there's the risk of hallucinations or the generation of misleading information. That said, it's critical that we exercise caution and validate the outputs produced by any Generative AI-powered system to ensure the accuracy of the results.

Section VI: Cleaning Up

We’re not done yet! Cleaning up the resources we’ve created and launched is a very important step, as this will help us ensure that we don’t pay for resources we’re not planning to use.

STEP # 01: Once you’re done trying out various types of prompts, feel free to turn off and clean up the resources launched and created using the following lines of code:

demo.close()
predictor.delete_endpoint()

STEP # 02: Make sure to turn off (or delete) the SageMaker Notebook instance as well. I’ll leave this to you as an exercise!

Wasn’t that easy?!
As you can see, deploying LLMs with Amazon SageMaker is straightforward and easy. Given that Amazon SageMaker handles most of the heavy lifting of managing the infrastructure, we’re able to focus more on the deployment of our machine learning model. We are just scratching the surface, as there is a long list of capabilities and features available in SageMaker. If you want to take things to the next level, feel free to read 2 of my books focusing heavily on SageMaker: “Machine Learning with Amazon SageMaker Cookbook” and “Machine Learning Engineering on AWS”.

Author Bio

Joshua Arvin Lat is the Chief Technology Officer (CTO) of NuWorks Interactive Labs, Inc. He previously served as the CTO of 3 Australian-owned companies and also served as the Director for Software Development and Engineering for multiple e-commerce startups in the past. Years ago, he and his team won 1st place in a global cybersecurity competition with their published research paper. He is also an AWS Machine Learning Hero and he has been sharing his knowledge at several international conferences, discussing practical strategies on machine learning, engineering, security, and management. He is also the author of the books "Machine Learning with Amazon SageMaker Cookbook", "Machine Learning Engineering on AWS", and "Building and Automating Penetration Testing Labs in the Cloud". Due to his proven track record in leading digital transformation within organizations, he has been recognized as one of the prestigious Orange Boomerang: Digital Leader of the Year 2023 award winners.

Deploying LLMs with Amazon SageMaker - Part 1
Joshua Arvin Lat
29 Nov 2023
13 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don’t miss out – sign up today!

Introduction

Have you ever tried asking a Generative AI-powered chatbot the question: “What is the meaning of life?” In case you have not tried that yet, here’s the response I got when I tried that myself using a custom chatbot app I built with a managed machine learning (ML) service called Amazon SageMaker.

Image 01 — Asking a chatbot the meaning of life

You would be surprised that I built this quick demo application myself in just a few hours! In this post, I will teach you how to deploy your own Large Language Models (LLMs) in a SageMaker Inference Endpoint (that is, a machine learning-powered server that responds to inputs) with just a few lines of code.

Image 02 — Deploying an LLM to a SageMaker Inference Endpoint

While most tutorials available teach us how to utilize existing Application Programming Interfaces (APIs) to prepare chatbot applications, it’s best that we also know how to deploy LLMs on our own servers in order to guarantee data privacy and compliance. In addition to this, we’ll be able to manage the long-term costs of our AI-powered systems as well.
One of the most powerful solutions available for these types of requirements is Amazon SageMaker, which helps us focus on the work we need to do instead of worrying about cloud infrastructure management.

We’ll divide the hands-on portion into the following sections:

● Section I: Preparing the SageMaker Notebook Instance
● Section II: Deploying an LLM using the SageMaker Python SDK to a SageMaker Inference Endpoint
● Section III: Enabling Data Capture with SageMaker Model Monitor (discussed in Part 2)
● Section IV: Invoking the SageMaker inference endpoint using the boto3 client (discussed in Part 2)
● Section V: Preparing a Demo UI for our chatbot application (discussed in Part 2)
● Section VI: Cleaning Up (discussed in Part 2)

Without further ado, let’s begin!

Section I: Preparing the SageMaker Notebook Instance

Let’s start by creating a SageMaker Notebook instance. Note that while we can also do this in SageMaker Studio, running the example in a SageMaker Notebook Instance should do the trick.
If this is your first time launching a SageMaker Notebook instance, you can think of it as your local machine with several tools pre-installed where we can run our scripts.

STEP # 01: Sign in to your AWS account and navigate to the SageMaker console by typing sagemaker in the search box, similar to what we have in the following image:

Image 03 — Navigating to the SageMaker console

Choose Amazon SageMaker from the list of options available, as highlighted in Image 03.

STEP # 02: In the sidebar, locate and click Notebook instances under Notebook:

Image 04 — Locating Notebook instances in the sidebar

STEP # 03: Next, locate and click the Create notebook instance button.

STEP # 04: On the Create notebook instance page, you’ll be asked to input a few configuration parameters before we’re able to launch the notebook instance where we’ll be running our code:

Image 05 — Creating a new SageMaker Notebook instance

Specify a Notebook instance name (for example, llm-demo) and select a Notebook instance type. For best results, you may select a relatively powerful instance type (ml.m4.xlarge) where we will run the scripts. However, you may decide to choose a smaller instance type such as ml.t3.medium (slower but less expensive). Note that we will not be deploying our LLM inside this notebook instance, as the model will be deployed in a separate inference endpoint (which will require a more powerful instance type such as an ml.g5.2xlarge).

STEP # 05: Create an IAM role by choosing Create a new role from the list of options available in the IAM role dropdown (under Permissions and encryption).

Image 06 — Creating a new IAM role

This will open the following popup window.
Given that we’re just working on a demo application, the default security configuration should do the trick. Click the Create role button.

Important Note: Make sure to use a more secure configuration when dealing with production (or staging) environments. We won’t dive deep into how cloud security works in this post, so feel free to look for other resources and references to further improve the current security setup. In case you are interested in learning more about cloud security, feel free to check my 3rd book “Building and Automating Penetration Testing Labs in the Cloud”. In the 7th chapter of the book (Setting Up an IAM Privilege Escalation Lab), you’ll learn how misconfigured machine learning environments on AWS can easily be exploited with the right sequence of steps.

STEP # 06: Click the Create notebook instance button. Wait about 5-10 minutes for the SageMaker Notebook instance to be ready.

Important Note: Given that this will launch a resource that will run until you turn it off (or delete it), make sure to complete all the steps in the 2nd part of this post and clean up the created resources accordingly.

STEP # 07: Once the instance is ready, click Open Jupyter, similar to what we have in Image 07:

Image 07 — Opening the Jupyter app

This will open the Jupyter application in a browser tab. If this is your first time using this application, do not worry, as detailed instructions will be provided in the succeeding steps to help you get familiar with this tool.

STEP # 08: Create a new notebook by clicking New and selecting conda_python3 from the list of options available:

Image 08 — Creating a new notebook using the conda_python3 kernel

In case you are wondering what a kernel is, it is simply an “engine” or “environment” with pre-installed libraries and prerequisites that executes the code specified in the notebook cells.
You’ll see this in action in a bit.

STEP # 09: At this point, we should see the following interface, where we can run various types of scripts and blocks of code:

Image 09 — New Jupyter notebook

Feel free to rename the Jupyter Notebook before proceeding to the next step. If you have not used a Jupyter Notebook before, you may run the following line of code by typing it in the text field and pressing SHIFT + ENTER:

print('hello')

This should print the output hello right below the text field where we placed our code.

Section II: Deploying an LLM using the SageMaker Python SDK to a SageMaker Inference Endpoint

STEP # 01: With everything ready, let’s start by installing a specific version of the SageMaker Python SDK:

!pip install sagemaker==2.192.1

Here, we’ll be using v2.192.1. This will help us ensure that you won’t encounter breaking changes even if you work on the hands-on solutions in this post at a later date.

In case you are wondering what the SageMaker Python SDK is, it is simply a software development kit (SDK) with a set of tools and APIs to help developers interact with and utilize the different features and capabilities of Amazon SageMaker.

STEP # 02: Next, let’s import and prepare a few prerequisites by running the following block of code:

import sagemaker
import time

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = sagemaker.get_execution_role()

STEP # 03: Let’s import HuggingFaceModel and get_huggingface_llm_image_uri as well:

from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

STEP # 04: Next, let’s define the generate_random_label() function which we’ll use later when naming our resources:

from string import ascii_uppercase
from random import choice

def generate_random_label():
    letters = ascii_uppercase
    return ''.join(choice(letters) for i in range(10))

This will help us avoid naming conflicts when
creating and configuring our resources.

STEP # 05: Use the get_huggingface_llm_image_uri function we imported in an earlier step to retrieve the container image URI for our LLM. In addition to this, let’s define the model_name we’ll use later when deploying our LLM to a SageMaker endpoint:

image_uri = get_huggingface_llm_image_uri(
    backend="huggingface",
    region=region,
    version="1.1.0"
)

model_name = "MistralLite-" + generate_random_label()

STEP # 06: Before we proceed with the actual deployment, let’s quickly inspect what we have in the image_uri variable:

image_uri

This will output the following variable value:

'763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.0.1-tgi1.1.0-gpu-py39-cu118-ubuntu20.04'

STEP # 07: Similarly, let’s check the value of the model_name variable:

model_name

This will give us the following:

'MistralLite-HKGKFRXURT'

Note that you’ll get a different model_name value since we’re randomly generating a portion of the model name.

STEP # 08: Let’s prepare the hub model configuration as well:

hub_env = {
    'HF_MODEL_ID': 'amazon/MistralLite',
    'HF_TASK': 'text-generation',
    'SM_NUM_GPUS': '1',
    "MAX_INPUT_LENGTH": '16000',
    "MAX_TOTAL_TOKENS": '16384',
    "MAX_BATCH_PREFILL_TOKENS": '16384',
    "MAX_BATCH_TOTAL_TOKENS": '16384',
}

Here, we specify that we’ll be using the MistralLite model. If this is your first time hearing about MistralLite, it is a fine-tuned Mistral-7B-v0.1 language model that performs significantly better on several long-context retrieval and answering tasks.
For more information, feel free to check: https://huggingface.co/amazon/MistralLite.

STEP # 09: Let’s initialize the HuggingFaceModel object using some of the prerequisites and variables we’ve prepared in the earlier steps:

model = HuggingFaceModel(
    name=model_name,
    env=hub_env,
    role=role,
    image_uri=image_uri
)

STEP # 10: Now, let’s proceed with the deployment of the model using the deploy() method:

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name=model_name,
)

Here, we’re using an ml.g5.2xlarge instance for our inference endpoint. Given that this step may take about 10-15 minutes to complete, feel free to grab a cup of coffee or tea while waiting!

Important Note: Given that this will launch a resource that will run until you turn it off (or delete it), make sure to complete all the steps in the 2nd part of this post and clean up the created resources accordingly.

STEP # 11: Now, let’s prepare our first input data:

question = "What is the meaning of life?"
input_data = {
    "inputs": f"<|prompter|>{question}</s><|assistant|>",
    "parameters": {
        "do_sample": False,
        "max_new_tokens": 2000,
        "return_full_text": False,
    }
}

STEP # 12: With the prerequisites ready, let’s have our deployed LLM process the input data we prepared in the previous step:

result = predictor.predict(input_data)[0]["generated_text"]
print(result)

This should yield the following output:

The meaning of life is a philosophical question that has been debated by thinkers and philosophers for centuries. There is no single answer that can be definitively proven, as the meaning of life is subjective and can vary greatly from person to person. ...

Looks like our SageMaker Inference endpoint (where the LLM is deployed) is working just fine!

Conclusion

That wraps up the first part of this post. At this point, you should have a good idea of how to deploy LLMs using Amazon SageMaker.
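As an aside, the input prepared in STEP # 11 follows MistralLite’s `<|prompter|>…</s><|assistant|>` prompt template. If you plan to send several questions to the endpoint, it may help to wrap the payload construction in a small helper like the sketch below (note that build_mistrallite_payload is a hypothetical helper name, not part of the SageMaker SDK):

```python
def build_mistrallite_payload(question, max_new_tokens=2000):
    # Wraps a question in the MistralLite prompt template and the
    # generation parameters used in STEP # 11 of this post.
    return {
        "inputs": f"<|prompter|>{question}</s><|assistant|>",
        "parameters": {
            "do_sample": False,
            "max_new_tokens": max_new_tokens,
            "return_full_text": False,
        },
    }

payload = build_mistrallite_payload("What is the meaning of life?")
print(payload["inputs"])
```

You could then call `predictor.predict(build_mistrallite_payload(question))` for each new question.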
However, there’s more in store for us in the second part, as we’ll build on top of what we already have and enable data capture to help us collect and analyze the data (that is, the input requests and output responses) that passes through the inference endpoint. In addition to this, we’ll prepare a demo user interface utilizing the ML model we deployed in this post.

If you’re looking for the link to the second part, here it is: Deploying LLMs with Amazon SageMaker - Part 2

We are just scratching the surface, as there is a long list of capabilities and features available in SageMaker. If you want to take things to the next level, feel free to read 2 of my books focusing heavily on SageMaker: “Machine Learning with Amazon SageMaker Cookbook” and “Machine Learning Engineering on AWS”.

Author Bio

Joshua Arvin Lat is the Chief Technology Officer (CTO) of NuWorks Interactive Labs, Inc. He previously served as the CTO of 3 Australian-owned companies and also served as the Director for Software Development and Engineering for multiple e-commerce startups in the past. Years ago, he and his team won 1st place in a global cybersecurity competition with their published research paper. He is also an AWS Machine Learning Hero, and he has been sharing his knowledge at several international conferences, discussing practical strategies on machine learning, engineering, security, and management. He is also the author of the books "Machine Learning with Amazon SageMaker Cookbook", "Machine Learning Engineering on AWS", and "Building and Automating Penetration Testing Labs in the Cloud". Due to his proven track record in leading digital transformation within organizations, he has been recognized as one of the prestigious Orange Boomerang: Digital Leader of the Year 2023 award winners.
Merlyn Shelley
24 Nov 2023
13 min read

AI_Distilled #27: AI Breakthroughs & Open-Source Pioneers

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

👋 Hello,

Welcome to another AI_Distilled! This edition brings you key stories on AI, ML, NLP, Gen AI, and more. Our mission is to keep you informed, empowering your skill advancement. Before we embark on the need-to-know updates, let’s take a moment to observe an important perspective from an industry leader.

“We’re now seeing a major second wave…let’s acknowledge that without open source, how would AI have made the tremendous progress it has over the last decade” -Jensen Huang, NVIDIA CEO

Amidst the uncertainty surrounding Sam Altman's removal and reinstatement at OpenAI, the open-source community emerges as a potential beneficiary. Also, as OpenAI pauses new signups for ChatGPT Plus, enterprises are anticipated to seek stability and long-term impact by turning to open-source AI models such as Llama, Mistral, Falcon, and MPT for their AI application development needs. Both proprietary and open-source models will play roles, but the latter's contributions are crucial for advancing AI technology's impact on work and life.

In this week’s edition, we’ll talk about Google DeepMind unveiling an advanced AI music generation model and experiments, Meta releasing Emu Video and Emu Edit, major breakthroughs in generative AI research, Microsoft Ignite 2023 bringing new AI expansions and product announcements, and Galileo's Hallucination Index identifying GPT-4 as the best LLM for different use cases.

We’ve also got your fresh dose of AI secret knowledge and tutorials, including how to implement emerging practices for society-centered AI, how to speed up and improve LLM output with skeleton-of-thought, getting started with Llama 2 in 5 steps, and how to build an AI assistant with real-time web access in 100 lines of code using Python and GPT-4.
Also, don't forget to check our expert insights column, which covers the interesting concepts of data architecture from the book 'Modern Data Architecture on AWS'. It's a must-read!

Stay curious and gear up for an intellectually enriching experience!

📥 Feedback on the Weekly Edition

Hey folks! After the stunning OpenAI DevDay, many of us were eager to embark on creating our custom GPT magic. But let's chat about the recent hiccups: the pause on ChatGPT-4 new sign-ups and the shift or reformation in OpenAI's leadership. It's got us all wondering about the future of our handy tools.

Quick question: Ever tried ChatGPT's Advanced Data Analysis? Now that it's temporarily on hold for new users, it's got us thinking, right? Share your take on these changes in the comments. Your thoughts count! We're turning the spotlight on you – some of the best insights will be featured in our next issue for our 38K-strong AI-focused community. Don't miss out on the chance to share your views! 🗨️✨ As a big thanks, get our bestselling "The Applied Artificial Intelligence Workshop" in PDF.

Let's make AI_Distilled even more awesome! 🚀 Jump on in! Share your thoughts and opinions here!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

SignUp | Advertise | Archives

⚡ TechWave: AI/GPT News & Analysis

🔳 Sam Altman Is Reinstated as OpenAI’s Chief Executive: OpenAI reinstated CEO Sam Altman, reversing his ouster amid a board shake-up. The revamped board, led by Bret Taylor, includes Lawrence Summers and Adam D'Angelo, with Microsoft's support. Negotiations involved concessions, including an independent investigation into Altman's leadership. Some outgoing members sought to curb Altman's power. Altman's removal sparked a campaign by allies and employees for his return. The board initially stood by its decision but ultimately reinstated Altman for a fresh start.
🔳 Google DeepMind Unveils Advanced AI Music Generation Model and Experiments: Google DeepMind introduces Lyria, an advanced AI music generation model, and collaborates with YouTube on two experiments, "Dream Track" and "Music AI tools," revolutionizing music creation. Lyria excels in maintaining musical continuity, while the experiments support artists and producers in crafting unique soundtracks and enhancing the creative process.

🔳 Meta Unveils Emu Video and Emu Edit: Advancements in Generative AI Research: Meta has unveiled two major advancements in generative AI: Emu Video, a text-to-video platform using diffusion models for high-quality content generation, and Emu Edit, an image editing tool for precise control. Human evaluations favor Emu Video over previous models, showcasing substantial progress in creative and effective generative AI tools.

🔳 Google's AI Search Feature Expands to 120+ Countries: Google's Search Generative Experience (SGE) has expanded to 120+ countries, offering generative AI summaries and language support for Spanish, Portuguese, Korean, and Indonesian. Users can ask follow-up questions and get interactive definitions. The update will initially roll out in the US before expanding globally, enhancing natural language interactions in search results.

🔳 Microsoft Ignite 2023 Brings New AI Expansions and Product Announcements: Microsoft's Ignite 2023 highlighted the company's deepened AI commitment, featuring Bing Chat's rebranding to Copilot, custom AI chips, and new AI tools like Copilot for Azure. Microsoft Teams will offer AI-driven home decoration and voice isolation. The company consolidated planning tools, introduced generative AI copyright protection, Windows AI Studio for local AI deployment, and Azure AI Speech for text-to-speech avatars. The event underscored Microsoft's emphasis on AI integration across its products and services.
🔳 Microsoft Emerges as Ultimate Winner in OpenAI Power Struggle: Microsoft emerged victorious in the OpenAI power struggle by hiring ousted CEO Sam Altman and key staff, including Greg Brockman, to lead a new advanced AI team. This strategic move solidifies Microsoft's dominance in the industry, positioning it as a major player in AI without acquiring OpenAI, valued at $86 billion. The recent turmoil at OpenAI has led to employee threats of quitting and joining Altman at Microsoft, potentially granting Microsoft access to significant AI talent.

🔳 Galileo's Hallucination Index Identifies GPT-4 As the Best LLM for Different Use Cases: San Francisco-based Galileo has introduced a Hallucination Index to aid users in selecting the most reliable Large Language Models (LLMs) for specific tasks. Evaluating various LLMs, including Meta's Llama series, the index found GPT-4 excelled, and OpenAI's models consistently performed well, supporting trustworthy GenAI applications.

🔳 Microsoft Releases Orca 2: Small Language Models That Outperform Larger Ones: Orca 2, comprising 7 billion and 13 billion parameter models, excels in intricate reasoning tasks, surpassing larger counterparts. Developed by fine-tuning LLAMA 2 base models on tailored synthetic data, Orca 2 showcases advancements in smaller language model research, demonstrating adaptability across tasks like reasoning, grounding, and safety through post-training with carefully filtered synthetic data.

🔳 NVIDIA CEO Predicts Major Second Wave of AI: Jensen Huang predicts a significant AI surge, citing breakthroughs in language replicated in biology, manufacturing, and robotics, offering substantial opportunities for Europe. Praising France's AI leadership, he emphasizes the importance of region-specific AI systems reflecting cultural nuances and highlights the crucial role of data in regional AI growth.
🔮 Expert Insights from Packt Community

Modern Data Architecture on AWS - By Behram Irani

Challenges with on-premises data systems

As data grew exponentially, so did the on-premises systems. However, visible cracks started to appear in the legacy way of architecting data and analytics use cases. The hardware that was used to process, store, and consume data had to be procured up-front, and then installed and configured before it was ready for use. So, there was operational overhead and risks associated with procuring the hardware, provisioning it, installing software, and maintaining the system all the time. Also, to accommodate for future data growth, people had to estimate additional capacity way in advance. The concept of hardware elasticity didn’t exist.

The lack of elasticity in hardware meant that there were scalability risks associated with the systems in place, and these risks would surface whenever there was a sudden growth in the volume of data or when there was a market expansion for the business. Buying all this extra hardware up-front also meant that a huge capital expenditure investment had to be made for the hardware, with all the extra capacity lying unused from time to time. Also, software licenses had to be paid for and those were expensive, adding to the overall IT costs. Even after buying all the hardware upfront, it was difficult to maintain the data platform’s high performance all the time. As data volumes grew, latency started creeping in, which adversely affected the performance of certain critical systems.

As data grew into big data, the type of data produced was not just structured data; a lot of business use cases required semi-structured data, such as JSON files, and even unstructured data, such as images and PDF files. In subsequent chapters, we will go through some use cases that specify different types of data. As the sources of data grew, so did the number of ETL pipelines. Managing these pipelines became cumbersome.
And on top of that, with so much data movement, data started to duplicate at multiple places, which made it difficult to create a single source of truth for the data. On the flip side, with so many data sources and data owners within an organization, data became siloed, which made it difficult to share across different LOBs in the organization.

This content is from the book “Modern Data Architecture on AWS” written by Behram Irani (Aug 2023). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below. Read through Chapter 1 unlocked here...

🌟 Secret Knowledge: AI/LLM Resources

🤖 How to Use Amazon CodeWhisperer for Command Line: Amazon introduces Amazon CodeWhisperer for the command line, enhancing developer productivity with contextual CLI completions and AI-driven natural language-to-bash translation. The tool provides CLI completions and translates natural language instructions into executable shell code snippets, modernizing the command line experience for over thirty million engineers.

🤖 How to Implement Emerging Practices for Society-Centered AI: The post underscores the importance of AI professionals addressing societal implications, advocating for multidisciplinary collaboration. It stresses the significance of measuring AI's impact on society to enhance effectiveness and identify areas for improvement in developing systems that benefit the broader community.

🤖 How to Speed Up and Improve LLM Output with Skeleton-of-Thought: The article introduces the Skeleton-of-Thought (SoT) approach, aiming to enhance the efficiency of Language Models (LLMs) by reducing generation latency and improving answer quality. SoT guides LLMs to generate answer skeletons first, then completes them in parallel, potentially accelerating open-source and API-based models for various question categories.
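To make the skeleton-then-expand idea from the SoT piece above concrete, here is a minimal Python sketch of the flow. Note that fake_llm is a stand-in stub for a real LLM call, and the real SoT prompts and batching are more elaborate than this:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_llm(prompt):
    # Stub standing in for a real LLM completion call.
    if "skeleton" in prompt.lower():
        # Stage-1 answer: a short outline of bullet points.
        return ["1. Define the term", "2. Give an example", "3. Summarize"]
    # Stage-2 answer: an expansion of one bullet point.
    return f"Expanded answer for: {prompt}"

def skeleton_of_thought(question):
    # Stage 1: ask the model for a skeleton (outline) of the answer.
    skeleton = fake_llm(f"Write a skeleton for: {question}")
    # Stage 2: expand every skeleton point in parallel to cut latency.
    with ThreadPoolExecutor() as pool:
        expansions = list(pool.map(fake_llm, skeleton))
    return "\n".join(expansions)

print(skeleton_of_thought("What is machine learning?"))
```

Because the stage-2 calls are independent, they can run concurrently, which is where the latency reduction comes from.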
🤖 Understanding SuperNIC to Enhance AI Efficiency: The BlueField-3 SuperNIC is pivotal in AI-driven innovation, boosting workload efficiency and networking speed in AI cloud computing. With a 1:1 GPU to SuperNIC ratio, it enhances productivity. Integrated with NVIDIA Spectrum-4, it provides adaptive routing, out-of-order packet handling, and optimized congestion control for superior outcomes in enterprise data centers.

🤖 Step-by-step guide to the Evolution of LLMs: The post explores the 12-month evolution of Large Language Models (LLMs), from text completion to dynamic chatbots with code execution and knowledge access. It emphasizes the frequent release of new features, models, and techniques, notably the November 2022 launch of ChatGPT, accelerating user adoption and triggering an AI arms race, while questioning if such rapid advancements are bringing us closer to practical AI agents.

🔛 Masterclass: AI/LLM Tutorials

👉 How to Get Started with Llama 2 in 5 Steps: Llama 2, an open-source large language model, is now free for research and commercial use. This blog outlines a five-step guide, covering prerequisites, model setup, fine-tuning, inference, and additional resources for users interested in utilizing Llama 2.

👉 How to Integrate GPT-4 with Python and Java: A Developer's Guide: The article explores integrating GPT-4 with Python and Java, emphasizing Python's compatibility and flexibility. It provides examples, discusses challenges like rate limits, and encourages collaboration for harnessing GPT-4's transformative potential, highlighting the importance of patience and debugging skills.

👉 How to Build an AI Assistant with Real-Time Web Access in 100 Lines of Code Using Python and GPT-4: This article guides readers in creating a Python-based AI assistant with real-time web access using GPT-4 in just 100 lines of code.
The process involves initializing clients with API keys, creating the assistant using the OpenAI and Tavily libraries, and implementing a function for retrieving real-time information from the web. The author offers a detailed step-by-step guide with code snippets.

👉 Step-by-step guide to building a real-time recommendation engine with Amazon MSK and Rockset: This tutorial demonstrates building a real-time product recommendation engine using Amazon Managed Streaming for Apache Kafka (Amazon MSK) and Rockset. The architecture allows instant, personalized recommendations critical for e-commerce, utilizing Amazon MSK for capturing high-velocity user data and AWS managed services for scalability in handling customer requests, API invocations, and data ingestion.

🚀 HackHub: Trending AI Tools

💮 protectai/ai-exploits: A collection of real-world AI/ML exploits for responsibly disclosed vulnerabilities, aiming to raise awareness of the number of vulnerable components in the AI/ML ecosystem.

💮 nlmatics/llmsherpa: Provides strategic APIs to accelerate LLM use cases, includes a LayoutPDFReader that provides layout information for PDF-to-text parsers, and is tested on a wide variety of PDFs.

💮 QwenLM/Qwen-Audio: A large audio language model proposed by Alibaba Cloud developers that can be used for speech editing, sound understanding and reasoning, music appreciation, and multi-turn dialogues in diverse audio-oriented scenarios.

💮 langchain-ai/opengpts: An open-source effort to create an experience similar to OpenAI's GPTs and Assistants API. It builds upon LangChain, LangServe, and LangSmith.

Readers’ Feedback! 💬

💭 Anish says, "The growing number of subscribers is really exciting. I particularly appreciate the transformation of 2D images into 3D models from Adobe and going through 'Tackling Hallucinations in LLMs' by Bijit Ghosh. These kinds of practical contexts are truly my preference for the upcoming newsletters."

💭 Tony says, "Very informative, far-reaching, and extremely timely. On point.
Just keep it up, keep your eye on news and knowledge, and keep cluing us all once a week please, Merlyn. You're doing a fine job."  Share your thoughts here! Your opinions matter—let's make this space a reflection of diverse perspectives. 

Dan MacLean
23 Nov 2023
8 min read

Debugging and Improving Code with ChatGPT

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book R Bioinformatics Cookbook - Second Edition, by Dan MacLean. Discover over 80 recipes for modeling and handling real-life biological data using modern libraries from the R ecosystem.

Introduction

Embrace the power of streamlined debugging and code refinement with ChatGPT's expertise. Unravel the possibilities of effortlessly troubleshooting errors, optimizing performance, and refining code structures. This article explores how ChatGPT, armed with its extensive programming knowledge, assists in identifying and rectifying coding errors, offering tailored solutions, and fostering a deeper understanding of code logic. Dive into a journey of code enhancement, where ChatGPT becomes your indispensable companion in navigating complexities and elevating your coding experiences.

Debugging and improving code with ChatGPT

ChatGPT is capable of debugging code due to its extensive training on programming-related text. It has acquired knowledge of common programming patterns and errors, allowing it to understand code snippets, identify potential issues, and provide suggestions or improvements. It can identify minor errors by highlighting syntax issues such as missing brackets, incorrect function calls, and invalid variable assignments. ChatGPT helps troubleshooting by asking clarifying questions to better understand the problem and guide developers toward the root cause, and can aid in code optimization by suggesting efficiency improvements, identifying performance bottlenecks, and proposing more optimized implementations. Lastly, the debugging process with ChatGPT can foster learning and exploration, enabling developers to discover new concepts, programming techniques, or functionalities of specific libraries or packages.
In this recipe, we’ll walk through an example debugging session with ChatGPT and look at how it helps us to reflect better on our own code.

Getting ready

We just need an account at ChatGPT: https://chat.openai.com/.

How to do it…

We can debug some code as follows:

1. Set up some code that we expect to work and try it:

data <- data.frame(
  predictor = c(1, 2, 3, 4, NA),
  response = c("A", "B", "C", "D", "E")
)
model <- lm(response ~ predictor, data = data)

2. Send the broken code to ChatGPT:

Help me fix the following code:
data <- data.frame(
  predictor = c(1, 2, 3, 4, NA),
  response = c("A", "B", "C", "D", "E")
)
model <- lm(response ~ predictor, data = data)

3. Read and run the resulting response in R:

To fix the code, you need to handle the missing value (NA) in the predictor column. Here's the corrected code:

# Fixing the code
data <- data.frame(
  predictor = c(1, 2, 3, 4, NA),
  response = c("A", "B", "C", "D", "E")
)
# Replace NA in the predictor column with the mean value
data$predictor[is.na(data$predictor)] <- mean(data$predictor, na.rm = TRUE)
# Convert the response variable to a factor
data$response <- as.factor(data$response)
# Perform linear regression
model <- lm(response ~ predictor, data = data)
summary(model)

In the fixed code, I replaced the NA value in the predictor column with the mean value of the remaining numeric values using the mean() function with the na.rm = TRUE argument. Then, I converted the response variable to a factor using the as.factor() function. Finally, the linear regression model is computed with the corrected data, and summary(model) displays the model summary.

4. Tell ChatGPT the code still doesn’t work:

It still doesn't work - I'm getting this error:
Error in as.character.factor(x) : malformed factor
In addition: Warning message:
In Ops.factor(r, 2) : '^' not meaningful for factors

5. Read the response and run the code:

Apologies for the oversight.
The error is occurring because the ^ operator is not applicable to factors in the formula used in the linear regression. To fix this, we can convert the factor to a numeric representation using numeric codes or dummy variables. Here's the updated code:

# Fixing the code
data <- data.frame(
  predictor = c(1, 2, 3, 4, NA),
  response = c("A", "B", "C", "D", "E")
)
# Replace NA in the predictor column with the mean value
data$predictor[is.na(data$predictor)] <- mean(data$predictor, na.rm = TRUE)
# Convert the response variable to a factor
data$response <- as.factor(data$response)
# Convert the factor to numeric codes
data$response <- as.numeric(data$response)
# Perform linear regression
model <- lm(response ~ predictor, data = data)
summary(model)

In the updated code, I've used as.numeric() to convert the response factor variable to numeric codes. This allows the regression model to treat it as a numerical response. Now, the lm() function should run without the error, and summary(model) will display the model summary.

6. Ask it to check whether the original code was appropriately analyzed:

Were the original data suitable for a linear model?

7. Read the response:

No, the original data provided in the code were not suitable for a linear model. Here's why:

1. The `predictor` variable was numeric, which is suitable for a linear model.
2. The `response` variable, however, was categorical with values "A", "B", "C", "D", and "E". Categorical variables cannot be directly used as the response variable in a linear regression model because they do not represent continuous numeric values.

To use a categorical variable as the response variable in a linear model, you typically need to encode it into dummy variables or use techniques like multinomial logistic regression. Dummy variable encoding represents each category as a separate binary variable (0 or 1).
Once the categorical variable is properly encoded, it can be used as the response variable in a linear regression model.

Again, that’s how the process goes, though the output can be different from user to user; the meaning should be the same.

How it works…

In step 1, we set up a data frame of some data as a test set to use in a linear model; we wish to examine the relationship between the two variables, so we assume that a linear model will work. When we run the code in R, we get this:

Error in lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) :
  NA/NaN/Inf in 'y'
In addition: Warning message:
In storage.mode(v) <- "double" : NAs introduced by coercion

This is a typically confounding R error message. We want help, so in step 2, we ask ChatGPT to fix the code.

Step 3 shows us ChatGPT’s response, which suggests fixing the NA values that are in the predictor column. That seems reasonable, and, as it explains, ChatGPT gives us some code that imputes a new value from the mean of all the other values – again, a reasonable value to impute. When we run the code, it still doesn’t work and we get a new error, so in step 4, we tell ChatGPT about it and ask it to fix the new errors.

In step 5, we see an apologetic language model attempt to correct the error. It gives us a confusing reason for doing some strange text/number conversion and fixed code. When we run this new code in the console, we get output like this:

## Call:
## lm(formula = response ~ predictor, data = data)
##
## Residuals:
##    1    2    3    4    5
## -0.5 -0.5 -0.5 -0.5  2.0
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   0.5000     1.5546   0.322    0.769
## predictor     1.0000     0.5774   1.732    0.182
##
## Residual standard error: 1.291 on 3 degrees of freedom
## Multiple R-squared: 0.5, Adjusted R-squared: 0.3333
## F-statistic: 3 on 1 and 3 DF, p-value: 0.1817

This looks a bit strange – the residuals are weird and the rest of the values look poor.
We start to question whether this was the right thing to do in the first place.

In step 6, we ask ChatGPT whether the linear model was the right sort of analysis. It responds as in step 7, telling us quite clearly that it was not appropriate.

This recipe highlights that we can use ChatGPT to fix code that doesn’t work, but shows also that ChatGPT will not reason without prompting. Here, it let us pursue a piece of code that wasn’t right for the task. As a language model, it can’t know that, even though we believe it would be evident from the question setup. It didn’t try to correct our flawed assumptions or logic. We still need to be responsible for the logic and applicability of our code.

Conclusion

In conclusion, ChatGPT emerges as a valuable ally in code debugging, offering insightful solutions and guiding developers toward efficient, error-free code. While it excels in identifying issues and suggesting fixes, it's crucial to recognize that ChatGPT operates within the boundaries of provided information. This journey underscores the importance of critical thinking in code analysis, reminding us that while ChatGPT empowers us with solutions, the responsibility for logical correctness and code applicability ultimately rests on the developer. Leveraging ChatGPT's expertise alongside mindful coding practices paves the way for seamless debugging and continual improvement in software development endeavors.

Author Bio

Professor Dan MacLean has a Ph.D. in molecular biology from the University of Cambridge and gained postdoctoral experience in genomics and bioinformatics at Stanford University in California. Dan is now Head of Bioinformatics at the world-leading Sainsbury Laboratory in Norwich, UK, where he works on bioinformatics, genomics, and machine learning. He teaches undergraduates, post-graduates, and post-doctoral students in data science and computational biology.
His research group has developed numerous new methods and software in R, Python, and other languages with over 100,000 downloads combined.

Jyoti Pathak
22 Nov 2023
9 min read

Generative AI in Natural Language Processing

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction

Generative AI is a pinnacle achievement, particularly in the intricate domain of Natural Language Processing (NLP). As businesses and researchers delve deeper into machine intelligence, Generative AI in NLP emerges as a revolutionary force, transforming mere data into coherent, human-like language. This exploration into Generative AI's role in NLP unveils the intricate algorithms and neural networks that power this innovation, shedding light on its profound impact and real-world applications. Let us dissect the complexities of Generative AI in NLP and its pivotal role in shaping the future of intelligent communication.

What is generative AI in NLP?

Generative AI in Natural Language Processing (NLP) is the technology that enables machines to generate human-like text or speech. Unlike traditional AI models that analyze and process existing data, generative models can create new content based on the patterns they learn from vast datasets. These models utilize advanced algorithms and neural networks, often employing architectures like Recurrent Neural Networks (RNNs) or Transformers, to understand the intricate structures of language.

Generative AI models can produce coherent and contextually relevant text by comprehending context, grammar, and semantics. They are invaluable tools in various applications, from chatbots and content creation to language translation and code generation.

Technical Marvel Behind Generative AI

At the heart of Generative AI in NLP lie advanced neural networks, such as Transformer architectures and Recurrent Neural Networks (RNNs).
These networks are trained on massive text corpora, learning intricate language structures, grammar rules, and contextual relationships. Through techniques like attention mechanisms, Generative AI models can capture dependencies within words and generate text that flows naturally, mirroring the nuances of human communication.

Practical Examples and Code Snippets:

1. Text Generation with GPT-3

OpenAI's GPT-3 (Generative Pre-trained Transformer 3) is a state-of-the-art generative language model. Using the OpenAI API, you can generate text with just a few lines of code.

import openai

openai.api_key = 'YOUR_API_KEY'
prompt = "Once upon a time,"
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=150
)
generated_text = response.choices[0].text.strip()
print("Generated Text: ", generated_text)

2. Chatbot Development with Rasa

Rasa is an open-source framework used for building conversational AI applications. It leverages generative models to create intelligent chatbots capable of engaging in dynamic conversations.

from rasa.core.agent import Agent

agent = Agent.load("models/dialogue", interpreter="models/nlu")
user_input = input("User: ")
responses = agent.handle_text(user_input)
for response in responses:
    print("Bot: ", response["text"])

3. Language Translation with MarianMT

MarianMT is a multilingual translation model provided by the Hugging Face Transformers library.

from transformers import MarianTokenizer, MarianMTModel

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
input_text = "Hello, how are you?"
translated_text = model.generate(**tokenizer(input_text, return_tensors="pt", padding=True))
translated_text = tokenizer.batch_decode(translated_text, skip_special_tokens=True)
print("Translated Text: ", translated_text)

Generative AI in NLP enables these practical applications.
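The attention mechanism mentioned above can be illustrated in a few lines of NumPy. The sketch below is a toy scaled dot-product attention (the core operation inside Transformer layers), not any production model's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Computes softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy input: 3 "tokens" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one attended vector per token
```

Each row of the attention weights sums to 1, so every output vector is a weighted average of the value vectors — this is how the model lets each token "look at" every other token.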
It is a cornerstone for numerous other use cases, from content creation and language tutoring to sentiment analysis and personalized recommendations, making it a transformative force in artificial intelligence.

How do Generative AI models help in NLP?

Generative AI models, especially those powered by deep learning techniques like Recurrent Neural Networks (RNNs) and Transformer architectures, play a pivotal role in enhancing various aspects of NLP:

1. Language Translation

Generative AI models, such as OpenAI's GPT-3, have significantly improved machine translation. Training on multilingual datasets allows these models to translate text with remarkable accuracy from one language to another, enabling seamless communication across linguistic boundaries.

Example Code Snippet:

from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

input_text = "Hello, how are you?"
translated = model.generate(**tokenizer(input_text, return_tensors="pt", padding=True))
translated_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print("Translated Text: ", translated_text)

2. Content Creation

Generative AI models assist in content creation by generating engaging articles, product descriptions, and creative writing pieces.
Businesses leverage these models to automate content generation, saving time and resources while ensuring high-quality output.

Example Code Snippet:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

input_prompt = "In a galaxy far, far away"
input_ids = tokenizer.encode(input_prompt, return_tensors="pt")
# max_length caps the generated sequence, so it belongs on generate(), not encode()
generated = model.generate(input_ids, max_length=150)
generated_text = tokenizer.decode(generated[0], skip_special_tokens=True)
print("Generated Text: ", generated_text)

Usage of Generative AI

Generative AI, with its remarkable ability to generate human-like text, finds diverse applications in the technical landscape. Let's delve into the technical nuances of how Generative AI can be harnessed across various domains, backed by practical examples and code snippets.

●  Content Creation

Generative AI plays a pivotal role in automating content creation. By training models on vast datasets, businesses can generate high-quality articles, product descriptions, and creative pieces tailored to specific audiences. This is particularly useful for marketing campaigns and online platforms where engaging content is crucial.

Example Code Snippet (Using OpenAI's GPT-3 API):

import openai

openai.api_key = 'YOUR_API_KEY'

prompt = "Write a compelling introduction for a new tech product."
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=150
)
generated_content = response.choices[0].text.strip()
print("Generated Content: ", generated_content)

Practical Usage: A marketing company employs Generative AI to draft product descriptions for an e-commerce website, tailoring each description based on product features and customer preferences.

●  Chatbots and Virtual Assistants

Generative AI empowers intelligent chatbots and virtual assistants, enabling natural and dynamic user conversations.
These systems understand user queries and generate contextually relevant responses, enhancing customer support experiences and user engagement.

Example Code Snippet (Using Rasa Framework):

from rasa.core.agent import Agent

agent = Agent.load("models/dialogue", interpreter="models/nlu")

user_input = input("User: ")
responses = agent.handle_text(user_input)
for response in responses:
    print("Bot: ", response["text"])

Practical Usage: An online customer service platform integrates Generative AI-powered chatbots to handle customer inquiries, providing instant and accurate responses, improving user satisfaction, and reducing response time.

●  Code Generation

Generative AI assists developers by generating code snippets and completing lines of code. This accelerates the software development process, aiding programmers in writing efficient and error-free code.

Example Code Snippet (Using a CodeT5 Model):

from transformers import AutoTokenizer, T5ForConditionalGeneration

# CodeBERT itself is an encoder-only model; for text-to-code generation a
# seq2seq checkpoint such as CodeT5 is used here instead.
model_name = "Salesforce/codet5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

code_prompt = "Sort a list in Python."
input_ids = tokenizer.encode(code_prompt, return_tensors="pt")
generated = model.generate(input_ids, max_length=150)
generated_code = tokenizer.decode(generated[0], skip_special_tokens=True)
print("Generated Code: ", generated_code)

Practical Usage: A software development team integrates Generative AI into their IDE, allowing developers to receive instant code suggestions and completions, leading to faster coding, reduced errors, and streamlined collaboration.

●  Creative Writing and Storytelling

Generative AI fuels creativity by generating imaginative stories, poetry, and scripts.
Authors and artists use these models to brainstorm ideas or overcome creative blocks, producing unique and inspiring content.

Example Code Snippet (Using GPT-3 API for Creative Writing):

import openai

openai.api_key = 'YOUR_API_KEY'

prompt = "Write a sci-fi story about space exploration."
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=prompt,
    max_tokens=500
)
creative_writing = response.choices[0].text.strip()
print("Creative Writing: ", creative_writing)

Practical Usage: A novelist utilizes Generative AI to generate plot outlines and character dialogues, providing creative prompts that serve as a foundation for developing engaging and unique stories.

Also, Generative AI models excel in language translation tasks, enabling seamless communication across diverse languages. These models accurately translate text, breaking down language barriers in global interactions.

Practical Usage: A language translation service utilizes Generative AI to translate legal documents, business contracts, or creative content from one language to another, ensuring accuracy and preserving the nuances of the original text.

Generative AI's technical prowess is reshaping how we interact with technology. Its applications are vast and transformative, from enhancing customer experiences to aiding creative endeavors and optimizing development workflows. Stay tuned as this technology evolves, promising even more sophisticated and innovative use cases.

Future of Generative AI in NLP

As Generative AI continues to evolve, the future holds limitless possibilities. Enhanced models, coupled with ethical considerations, will pave the way for applications in sentiment analysis, content summarization, and personalized user experiences. Integrating Generative AI with other emerging technologies like augmented reality and voice assistants will redefine the boundaries of human-machine interaction.

Conclusion

Generative AI is a testament to the remarkable strides made in artificial intelligence.
Its sophisticated algorithms and neural networks have paved the way for unprecedented advancements in language generation, enabling machines to comprehend context, nuance, and intricacies akin to human cognition. As industries embrace the transformative power of Generative AI, the boundaries of what machines can achieve in language processing continue to expand. This relentless pursuit of excellence in Generative AI enriches our understanding of human-machine interactions. It propels us toward a future where language, creativity, and technology converge seamlessly, defining a new era of unparalleled innovation and intelligent communication. As the fascinating journey of Generative AI in NLP unfolds, it promises a future where the limitless capabilities of artificial intelligence redefine the boundaries of human ingenuity.

Author Bio

Jyoti Pathak is a distinguished data analytics leader with a 15-year track record of driving digital innovation and substantial business growth. Her expertise lies in modernizing data systems, launching data platforms, and enhancing digital commerce through analytics. Celebrated with the "Data and Analytics Professional of the Year" award and named a Snowflake Data Superhero, she excels in creating data-driven organizational cultures.

Her leadership extends to developing strong, diverse teams and strategically managing vendor relationships to boost profitability and expansion. Jyoti's work is characterized by a commitment to inclusivity and the strategic use of data to inform business decisions and drive progress.
AI_Distilled #26: Uncover the latest in AI from industry leaders

Merlyn Shelley
21 Nov 2023
13 min read
👋 Hello,

Welcome back to a new issue of AI_Distilled - your guide to the key advancements in AI, ML, NLP, and GenAI. Let's dive right into an industry expert's perspective to sharpen our understanding of the field's rapid evolution.

"In the near future, anyone who's online will be able to have a personal assistant powered by artificial intelligence that's far beyond today's technology."
- Bill Gates, Co-Founder, Microsoft.

In a recent interview, Gates minced no words when he said software is still "pretty dumb" even in today's day and age. The next 5 years will be crucial, he believes, as everything we know about computing in our personal and professional lives is on the brink of a massive disruption. Even everyday things as simple as phone calls are due for transformation, as evident from Samsung unveiling the new 'Galaxy AI' and real-time translate call feature.

In this issue, we'll talk about Google exploring a massive investment in AI startup Character.AI, Microsoft's GitHub Copilot user base surging to over a million, OpenAI launching data partnerships to enhance AI understanding, and Adobe researchers' breakthrough AI that transforms 2D images into 3D models in 5 seconds.

We've also got you your fresh dose of AI secret knowledge and tutorials. Explore how to scale multimodal understanding to long videos, navigate the landscape of hallucinations in LLMs, read a practical guide to enhancing RAG system responses, learn how to generate synthetic data for machine learning, and unlock the power of low-code GPT AI apps.

📥 Feedback on the Weekly Edition

We've hit 6 months and 38K subscribers in our AI_Distilled newsletter journey - thanks to you! The best part? Our emails are opened by 60% of recipients each week. We're dedicated to tailoring them to enhance your Data & AI practice.
Let's work together to ensure they fully support your AI efforts and make a positive impact on your daily work. Share your thoughts in a quick 5-minute survey to shape our content. As a big thanks, get our bestselling "The Applied Artificial Intelligence Workshop" in PDF. Let's make AI_Distilled even more awesome! 🚀 Jump on in! Complete the Survey. Get a Packt eBook for Free!

Writer's Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week's newsletter content!

Cheers,
Merlyn Shelley
Editor-in-Chief, Packt

SignUp | Advertise | Archives

⚡ TechWave: AI/GPT News & Analysis

🔳 Google Explores Massive Investment in AI Startup Character.AI: Google is reportedly in discussions to invest 'hundreds of millions' in Character.AI, an AI chatbot startup founded by ex-Google Brain employees. The investment is expected to deepen the collaboration between the two entities, leveraging Google's cloud services and Tensor Processing Units (TPUs) for model training. Character.AI, offering virtual interactions with celebrities and customizable chatbots, targets a youthful audience, particularly those aged 18 to 24, constituting 60% of its web traffic.

🔳 AI Actions Empowers AI Platforms with Zapier Integration: AI Actions introduces a tool enabling AI platforms to seamlessly run any Zapier action, leveraging Zapier's extensive repository of 20,000+ searches and actions. The integration allows natural language commands to trigger Zapier actions, eliminating obstacles like third-party app authentication and API integrations. Supported on platforms like ChatGPT, GPTs, Zapier, and customizable solutions, AI Actions provides flexibility for diverse applications.

🔳 Samsung Unveils 'Galaxy AI' and Real-Time Translate Call Feature: Samsung declares its commitment to AI with a preview of "Galaxy AI," a comprehensive mobile AI experience that combines on-device AI with cloud-based AI collaborations.
The company introduced an upcoming feature, "AI Live Translate Call," embedded in its native phone app, offering real-time audio and text translations on the device during calls. Set to launch early next year, Galaxy AI is anticipated to debut with the Galaxy S24 lineup.

🔳 Google Expands Collaboration with Anthropic, Prioritizing AI Security and Cloud TPU v5e Accelerators: In an intensified partnership, Google announces its extended collaboration with Anthropic, focusing on elevated AI security and leveraging Cloud TPU v5e chips for AI inference. The collaboration, dating back to Anthropic's inception in 2021, highlights their joint efforts in AI safety and research. Anthropic, utilizing Google's Cloud services like GKE clusters, AlloyDB, and BigQuery, commits to Google Cloud's security services for model deployment.

🔳 Microsoft's GitHub Copilot User Base Surges to Over a Million, CEO Nadella Reports: Satya Nadella announced a substantial 40% growth in paying customers for GitHub Copilot in the September quarter, surpassing one million users across 37,000 organizations. Nadella highlights the rapid adoption of Copilot Chat, utilized by companies like Shopify, Maersk, and PWC, enhancing developers' productivity. The Bing search engine, integrated with OpenAI's ChatGPT, has facilitated over 1.9 billion chats, demonstrating a growing interest in AI-driven interactions. Microsoft's Azure revenue, including a significant contribution from AI services, exceeded expectations, reaching $24.3 billion, with the Azure business rising by 29%.

🔳 Dell and Hugging Face Join Forces to Streamline LLM Deployment: Dell and Hugging Face unveil a strategic partnership aimed at simplifying the deployment of LLMs for enterprises. With the burgeoning interest in generative AI, the collaboration seeks to address common concerns such as complexity, security, and privacy.
The companies plan to establish a Dell portal on the Hugging Face platform, offering custom containers, scripts, and technical documentation for deploying open-source models on Dell servers.

🔳 OpenAI Launches Data Partnerships to Enhance AI Understanding: OpenAI introduces Data Partnerships, inviting collaborations with organizations to develop both public and private datasets for training AI models. The initiative aims to create comprehensive datasets reflecting diverse subject matters, industries, cultures, and languages, enhancing AI's understanding of the world. Two partnership options are available: Open-Source Archive for public datasets and Private Datasets for proprietary AI models, ensuring sensitivity and access controls based on partners' preferences.

🔳 Iterate Unveils AppCoder LLM for Effortless AI App Development: California-based Iterate introduces AppCoder LLM, a groundbreaking model embedded in the Interplay application development platform. This innovation allows enterprises to generate functional code for AI applications effortlessly by issuing natural language prompts. Unlike existing AI-driven coding solutions, AppCoder LLM, integrated into Iterate's platform, outperforms competitors, producing better outputs in terms of functional correctness and usefulness.

🔳 Adobe Researchers Unveil Breakthrough AI: Transform 2D Images into 3D Models in 5 Seconds: A collaborative effort between Adobe Research and Australian National University has resulted in a groundbreaking AI model capable of converting a single 2D image into a high-quality 3D model within a mere 5 seconds. The Large Reconstruction Model for Single Image to 3D (LRM) utilizes a transformer-based neural network architecture with over 500 million parameters, trained on approximately 1 million 3D objects. This innovation holds vast potential for industries like gaming, animation, industrial design, AR, and VR.
🔮 Expert Insights from Packt Community

Synthetic Data for Machine Learning - By Abdulrahman Kerim

Training ML models

Developing an ML model usually requires performing the following essential steps:

1. Collecting data.
2. Annotating data.
3. Designing an ML model.
4. Training the model.
5. Testing the model.

These steps are depicted in the following diagram:

Fig - Developing an ML model process.

Now, let's look at each of the steps in more detail to better understand how we can develop an ML model.

Collecting and annotating data

The first step in the process of developing an ML model is collecting the needed training data. You need to decide what training data is needed:

• Train using an existing dataset: In this case, there's no need to collect training data. Thus, you can skip collecting and annotating data. However, you should make sure that your target task or domain is quite similar to the available dataset(s) you are planning to deploy. Otherwise, your model may train well on this dataset, but it will not perform well when tested on the new task or domain.

• Train on an existing dataset and fine-tune on a new dataset: This is the most popular case in today's ML. You can pre-train your model on a large existing dataset and then fine-tune it on the new dataset. Regarding the new dataset, it does not need to be very large as you are already leveraging other existing dataset(s). For the dataset to be collected, you need to identify what the model needs to learn and how you are planning to implement this. After collecting the training data, you will begin the annotation process.

• Train from scratch on new data: In some contexts, your task or domain may be far from any available datasets. Thus, you will need to collect large-scale data. Collecting large-scale datasets is not simple. To do this, you need to identify what the model will learn and how you want it to do that.
Making any modifications to the plan later may require you to recollect more data or even start the data collection process again from scratch. Following this, you need to decide what ground truth to extract, the budget, and the quality you want.

This content is from the book "Synthetic Data for Machine Learning" written by Abdulrahman Kerim (Oct 2023). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below.

Read through the Chapter 1 unlocked here...

🌟 Secret Knowledge: AI/LLM Resources

🤖 Scaling Multimodal Understanding to Long Videos: A Comprehensive Guide: This guide provides a step-by-step explanation of the challenges associated with modeling diverse modalities like video, audio, and text. Learn about the Mirasol3B architecture, which efficiently handles longer videos, and understand the coordination between time-aligned and contextual modalities. The guide also introduces the Combiner, a learning module to effectively combine signals from video and audio information.

🤖 Mastering AI and ML Workloads: A Guide with Cloud HPC Toolkit: This post, authored by Google Cloud experts, delves into the convergence of HPC systems with AI and ML, highlighting their mutual benefits. They provide instructions on deploying clusters, utilizing preconfigured partitions, and utilizing powerful tools such as enroot and Pyxis for container integration. Discover the simplicity of deploying AI models on Google Cloud with the Cloud HPC Toolkit, fostering innovation and collaboration between HPC and AI communities.

🤖 Mastering the GPT Workflow: A Comprehensive Guide to AI Language Models: From understanding the basics of GPT's architecture and pre-training concept to unraveling the stages of the GPT workflow, including pre-training, fine-tuning, evaluation, and deployment, this guide provides a step-by-step walkthrough.
Gain insights into ethical considerations, bias mitigation, and challenges associated with GPT models. Delve into future developments, including model scaling, multimodal capabilities, explainable AI enhancements, and improved context handling.

🤖 Navigating the Landscape of Hallucinations in LLMs: A Comprehensive Exploration: Delve into the intricate world of LLMs and the challenges posed by hallucinations in this in-depth blog post. Gain an understanding of the various types of hallucinations, ranging from harmless inaccuracies to potentially harmful fabrications, and their implications in real-world applications. Explore the root factors leading to hallucinations, such as overconfidence and lack of grounded reasoning, during LLM training.

🤖 Unveiling the Core Challenge in GenAI: Cornell University's Insightful Revelation: Cornell University researchers unveil a pivotal threat in GenAI, emphasizing the crucial role of "long-term memory" and the need for a vector database for contextual retrieval. Privacy issues emerge in seemingly secure solutions, shedding light on the complex challenges of handling non-numerical data in advanced AI models.

🔛 Masterclass: AI/LLM Tutorials

👉 Unlocking the Power of Low-Code GPT AI Apps: A Comprehensive Guide. Explore how AINIRO.IO introduces the concept of "AI Apps" by seamlessly integrating ChatGPT with CRUD operations, enabling natural language interfaces to databases. Dive into the intricacies of creating a dynamic AI-based application without extensive coding, leveraging the Magic cloudlet to generate CRUD APIs effortlessly. Explore the significant implications of using ChatGPT for business logic in apps, offering endless possibilities for user interactions.

👉 Deploying LLMs Made Easy with ezsmdeploy 2.0 SDK: This post provides an in-depth understanding of the new capabilities, allowing users to effortlessly deploy foundation models like Llama 2, Falcon, and Stable Diffusion with just a few lines of code.
The SDK automates instance selection, configuration of autoscaling, and other deployment details, streamlining the process of launching production-ready APIs. Whether deploying models from Hugging Face Hub or SageMaker Jumpstart, ezsmdeploy 2.0 reduces the coding effort required to integrate state-of-the-art models into production, making it a valuable tool for data scientists and developers.

👉 Enhancing RAG System Responses: A Practical Guide: Discover how to enhance the performance of your Retrieval-Augmented Generation (RAG) systems in generative AI applications by incorporating an interactive clarification component. This post offers a step-by-step guide on improving the quality of answers in RAG use cases where users present vague or ambiguous queries. Learn how to implement a solution using LangChain to engage in a conversational dialogue with users, prompting them for additional details to refine the context and provide accurate responses.

👉 Building Personalized ChatGPT: A Step-by-Step Guide. In this post, you'll learn how to explore OpenAI's GPT Builder, offering a beginner-friendly approach to customize ChatGPT for various applications. With the latest GPT update, users can now create personalized ChatGPT versions, even without technical expertise. The tutorial focuses on creating a customized GPT named 'EduBuddy,' designed to enhance the educational journey with tailored learning strategies and interactive features.

🚀 HackHub: Trending AI Tools

💮 reworkd/tarsier: Open-source utility library for multimodal web agents, facilitating interaction with GPT-4(V) by visually tagging interactable elements on a page.

💮 recursal/ai-town-rwkv-proxy: Allows developers to locally run a large AI town using the RWKV model, a linear transformer with low inference costs.

💮 shiyoung77/ovir-3d: Enables open-vocabulary 3D instance retrieval without training on 3D data, addressing the challenge of obtaining diverse annotated 3D categories.
💮 langroid/langroid: User-friendly Python framework for building LLM-powered applications through a Multi-Agent paradigm.

💮 punica-ai/punica: Framework for Low Rank Adaptation (LoRA) to incorporate new knowledge into a pretrained LLM with minimal storage and memory impact.
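For readers curious about the technique punica packages, here is a dependency-free sketch of the low-rank adaptation (LoRA) parameter arithmetic: instead of updating a full d x d weight matrix, LoRA learns two small matrices A (d x r) and B (r x d) and adds their product as the update. The dimensions below are illustrative assumptions, not punica defaults.

```python
# Compare parameter counts for a dense weight update vs. a LoRA-style
# low-rank update W' = W + A @ B, with A: d x r and B: r x d.
d, r = 512, 8

full_update_params = d * d        # dense update: one parameter per weight
lora_params = d * r + r * d       # low-rank update: only A and B are learned

print("full:", full_update_params)                  # 262144
print("lora:", lora_params)                         # 8192
print("ratio:", full_update_params / lora_params)   # 32.0x fewer parameters
```

The same arithmetic is why LoRA adapters are cheap to store and swap at serving time.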

LLMs For Extractive Summarization in NLP

Mostafa Ibrahim
20 Nov 2023
7 min read
Introduction

In today's era, filtering out vital information from the overwhelming volume of data has become crucial. As we navigate vast amounts of information, the significance of adept text summarization becomes clear. This process not only conserves our time but also optimizes the use of resources, ensuring we focus on what truly matters.

In this article, we will delve into the intricacies of text summarization, particularly focusing on the role of Large Language Models (LLMs) in the process. We'll explore their foundational principles, their capabilities in extractive summarization, and the advanced techniques they deploy. Moreover, we'll shed light on the challenges they face and the innovative solutions proposed to overcome them. Without further ado, let's dive in!

What are LLMs?

LLMs, standing for Large Language Models, are intricate computational structures designed for the detailed analysis and understanding of text. They fall under the realm of Natural Language Processing, a domain dedicated to enabling machines to interpret human language. One of the distinguishing features of LLMs is their vast scale: they are equipped with an abundance of parameters that facilitate the storage of extensive linguistic data.

In the context of summarization, two primary techniques emerge: extractive and abstractive. Extractive summarization involves selecting pertinent sentences or phrases directly from the source material, whereas abstractive summarization synthesizes new sentences that encapsulate the core message in a more condensed manner.
With their advanced linguistic comprehension, LLMs are instrumental in both methods, but their proficiency in extractive summarization is notably prominent.

Why Utilize LLMs for Extractive Summarization?

Extractive summarization entails selecting crucial sentences or phrases from a source document to compose a concise summary. Achieving this demands an intricate and thorough grasp of the document's content, especially when it pertains to extensive and multifaceted texts.

The expansive architecture of LLMs, including state-of-the-art models like ChatGPT, grants them the capability to process and analyze substantial volumes of text, surpassing the limitations of smaller models like BERT, which can handle only 512 tokens. This considerable size and intricate design allow LLMs to produce richer and more detailed representations of content.

LLMs excel not only in recognizing the overt details but also in discerning the implicit or subtle nuances embedded within a text. Given their profound understanding, LLMs are uniquely positioned to identify and highlight the sentences or phrases that truly encapsulate the essence of any content, making them indispensable tools for high-quality extractive summarization.

Techniques and Approaches with LLMs

Within the realm of Natural Language Processing (NLP), the deployment of specific techniques to distill vast texts into concise summaries is of paramount importance. One such technique is sentence scoring. In this method, each sentence in a document is assigned a quantitative value representing its relevance and importance. LLMs, owing to their extensive architectures, can be meticulously fine-tuned to carry out this scoring with high precision, ensuring that only the most pertinent content is selected for summarization.

Next, we turn our attention to attention visualization in LLMs. This technique provides a graphical representation of the segments of text to which the model allocates the most significance during processing.
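To make the sentence-scoring technique concrete, here is a minimal, model-free sketch that scores each sentence by the document-wide frequency of its words — a crude stand-in for the learned relevance an LLM-based scorer would assign (the three-sentence document below is an illustrative assumption):

```python
from collections import Counter

def extractive_summary(sentences, num_sentences=2):
    # Score each sentence by how frequent its words are in the whole document,
    # then keep the top-scoring sentences in their original order.
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w.lower().strip(".,")] for w in sentence.split())

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return [s for s in sentences if s in ranked]

doc = [
    "Climate change is a global challenge.",
    "Rising temperatures affect ecosystems worldwide.",
    "Climate change requires a global response.",
]
print(extractive_summary(doc, num_sentences=2))
```

An LLM-based pipeline keeps the same select-top-k structure and only swaps this frequency signal for a model-derived relevance score.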
For extractive summarization, this visualization serves as a crucial tool, as it offers insights into which sections of the text the model deems most relevant.

Lastly, the integration of hierarchical models enhances the capabilities of LLMs further. These models approach texts in a structured manner, segmenting them into defined chunks before processing each segment for summarization. The inherent capability of LLMs to process lengthy sequences means they can operate efficiently at both the segmentation and the summarization stages, ensuring a comprehensive analysis of extended documents.

Practical Implementation of Extractive Summarization Using LLMs

In this section, we offer a hands-on experience by providing a sample code snippet that utilizes a pre-trained large language model, BERT, for text summarization. In order to perform extractive summarization, we will be using the bert-extractive-summarizer package, which is an extension of the Hugging Face Transformers library. This package provides a simple way to use BERT for extractive summarization.

Step 1: Install and Import Necessary Libraries

!pip install bert-extractive-summarizer

from summarizer import Summarizer

Step 2: Load the Extractive BERT Summarization Model

The Summarizer() constructor loads a pre-trained BERT model by default.

model = Summarizer()

Step 3: Create a Sample Text to Summarize

text = """Climate change represents one of the most significant challenges facing the world today. It is characterized by changes in weather patterns, rising global temperatures, and increasing levels of greenhouse gases in the atmosphere. The impact of climate change is far-reaching, affecting ecosystems, biodiversity, and human societies across the globe. Scientists warn that immediate action is necessary to mitigate the most severe consequences of this global phenomenon. Strategies to address climate change include reducing carbon emissions, transitioning to renewable energy sources, and conserving natural habitats. International cooperation is crucial, as the effects of climate change transcend national borders, requiring a unified global response. The Paris Agreement, signed by 196 parties at the COP 21 in Paris on 12 December 2015, is one of the most comprehensive international efforts to combat climate change, aiming to limit global warming to well below 2 degrees Celsius."""

Step 4: Performing Extractive Summarization

In this step, we'll perform extractive summarization, explicitly instructing the model to generate a summary consisting of the two sentences deemed most significant.

summary = model(text, num_sentences=2)  # You can specify the number of sentences in the summary
print("Extractive Summary:")
print(summary)

Output for Extractive Summary:

Climate change represents one of the most significant challenges facing the world today. The impact of climate change is far-reaching, affecting ecosystems, biodiversity, and human societies across the globe.

Challenges and Overcoming Them

The journey of extractive summarization using LLMs is not without its bumps. A significant challenge is redundancy. Extractive models, in their quest to capture important sentences, might pick multiple sentences conveying similar information, leading to repetitive summaries.

Then there's the issue of coherency. Unlike abstractive summarization, where models generate summaries, extractive methods merely extract. The outcome might not always flow logically, hindering a reader's understanding and detracting from the quality.

To combat these challenges, refined training methods can be employed. Training data can be curated to include diverse sentence structures and content, pushing the model to discern nuances and reduce redundancy. Additionally, reinforcement learning techniques can be integrated, where the model is rewarded for producing non-redundant, coherent summaries and penalized for the opposite.
Over time, through continuous feedback and iterative training, LLMs can be fine-tuned to generate crisp, non-redundant, and coherent extractive summaries.

Conclusion

In conclusion, the realm of text summarization, enhanced by the capabilities of Large Language Models (LLMs), presents a dynamic and evolving landscape. Throughout this article, we've journeyed through the foundational aspects of LLMs, their prowess in extractive summarization, and the methodologies and techniques they adopt.

While challenges persist, the continuous advancements in the field promise innovative solutions on the horizon. As we move forward, the relationship between LLMs and text summarization will undoubtedly shape the future of how we process and understand vast data volumes efficiently.

Author Bio

Mostafa Ibrahim is a dedicated software engineer based in London, where he works in the dynamic field of Fintech. His professional journey is driven by a passion for cutting-edge technologies, particularly in the realms of machine learning and bioinformatics. When he's not immersed in coding or data analysis, Mostafa loves to travel.

Medium