How-To Tutorials

7019 Articles

Exploring Text to Image with Adobe Firefly

Joseph Labrecque
28 Jun 2023
9 min read
About Adobe Firefly

We first introduced Adobe Firefly in the article Animating Adobe Firefly Content with Adobe Animate, published on June 14th, 2023. Let's have a quick recap before moving ahead with a deeper look at Firefly itself.

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID that isn't restricted by age or other factors. While Firefly is in beta, all generated images are watermarked with a Firefly badge in the lower left corner, and Adobe recommends that it only be used for personal image generation and not for commercial use. These restrictions will change, of course, once Firefly procedures become integrated within Adobe software such as Photoshop and Illustrator. We plan to explore how to use Firefly-driven workflows within creative desktop software in future articles.

Image 1: The Adobe Firefly website

A couple of things make Firefly unique as a prompt-based image-generation service:

The generative models were all trained on Adobe Stock content which Adobe already has rights to. This differs from many other such AIs whose models are trained on sets of content scraped from the web or otherwise acquired in a less-than-ideal way in terms of artists' rights.

Firefly is accessed through a web-based interface and not through a Discord bot or installed software. The design of the user experience is pleasant to interact with and provides a low barrier to entry.

Firefly goes way beyond prompt-based image generation. A number of additional generative workflows currently exist for use, with a number of additional procedures in exploration. As mentioned, Firefly as a web-based service may be a temporary channel as more of the procedures are tested and integrated within existing desktop software.

In the remainder of this article, we will focus on the text-to-image basics available in Firefly.

Using Text to Image within Firefly

We will begin our explorations of Adobe Firefly with the ability to generate an image from a detailed text prompt. This is the form of generative AI that most people have some experience with, and it will likely come easily to those who have used similar services such as Midjourney, Stable Diffusion, or others.

When you first enter the Firefly web experience, you will be presented with the various workflows available. We want to locate the Text to Image module and click Generate to enter the experience.

Image 2: The Text to Image module in Firefly

From there, you'll be taken to a view that showcases images generated through this process, along with a text input that invites you to enter a prompt to "describe the image you want to generate".

Image 3: The text-to-image prompt requests your input to begin

Enter the following simple prompt: "Castle in a field of flowers". Click the Generate button when complete. You'll then be taken into the heart of the Firefly experience.

Image 4: The initial set of images is generated from your prompt

When you enter the text-to-image module, you are presented with a set of four images generated from the given prompt. The prompt itself appears beneath the set of images, and along the right-hand side is a column of options that can be adjusted.

Exploring the Text-to-Image UI

While the initial set of images that Firefly has generated do match our simple prompt pretty closely, we can manipulate a lot of additional controls which can have great influence upon the generated images.

The first set of parameters you will see along the right-hand side of the screen is the Aspect ratio.

Image 5: The image set aspect ratio can be adjusted

There is a rather large set of options in a dropdown selection that determines the aspect ratio of the generated images. As we see above, the default is Square (1:1). Let's change that to Landscape (4:3) by choosing that option from the dropdown.

Below that set of options, you will find Content type.

Image 6: Content type defines a stylistic bias for your images

The default is set to Art, but you also have Photo, Graphic, and None as alternative choices. Each one of these will apply a bias to how the image is generated, making it more photographic, more like a graphic, or more like traditional artwork. Choosing None will remove all such bias and allow your prompt to carry the full weight of intention. Choose None before moving on, as we will change our prompt to be much more descriptive to better direct Firefly.

Beneath this, you will find the Styles section of options.

Image 7: Styles are predefined terms that can be activated as needed

Styles are basically keywords that are appended to your prompt in order to influence the results in very specific ways. These style prompts function just as if you'd written the term as part of your written prompt. They exist as a sort of predefined list of stylistic options that are easily added and removed from your prompt, and they are categorized by concepts such as Movements, Techniques, Materials, and more. As styles are added to your prompt, they appear beneath it and can be easily removed, allowing easy exploration of ideas.

At the very bottom of this area of the interface is a set of dropdown selections which include options for Color and tone, Lighting, and Composition.

Image 8: You can also influence color and tone, lighting, and composition

Just as with the sections above, as you apply certain choices in these categories, they appear as keywords below your prompt. Choose a Muted color from the Color and tone list. Additionally, apply the Golden hour option from the Lighting dropdown.
Remember, you can always add any of these descriptors into the text prompt itself, so don't feel limited by only the choices presented through the UI.

Using a More Detailed Text Prompt

Okay, now that we've adjusted the aspect ratio and have either cleared or set a number of additional options, let's make our text prompt more descriptive in order to generate a more interesting image.

Change the current text prompt, which reads "castle in a field of flowers", to now read the much more detailed "vampiric castle in a field of flowers with a forest in the distance and mountains against the red sky".

Click the Generate button to have Firefly re-interpret our intent using the new prompt, presenting a much more detailed set of images derived from the prompt along with any keyword options we've included.

Image 9: The more detail you put into your prompt, the more control you have over the generative visuals

If you find one of the four new images to be acceptable, it can be easily downloaded to your computer.

Image 10: There are many options when hovering over an image, including download

Simply hover your mouse over the chosen image and a number of additional options appear. We will explore these additional options in much greater detail in a future article. Click the download icon to begin the download process for that image.

As Firefly begins preparing the image for download, a small overlay dialog appears.

Image 11: Content credentials are applied to the image as it is downloaded

Firefly applies metadata to any generated image in the form of content credentials, and the image download process begins. What are content credentials? They are part of the Content Authenticity Initiative to help promote transparency in AI. This is how Adobe describes content credentials in their Firefly FAQ:

"Content Credentials are sets of editing, history, and attribution details associated with content that can be included with that content at export or download. By providing extra context around how a piece of content was produced, they can help content producers get credit and help people viewing the content make more informed trust decisions about it. Content Credentials can be viewed by anyone when their respective content is published to a supporting website or inspected with dedicated tools." -- Adobe

Once the image is downloaded, it can be viewed and shared just like any other image file.

Image 12: The chosen image is downloaded and ready for use

Along with content credentials, a small badge is placed upon the lower right of the image which visually identifies the image as having been produced with Adobe Firefly (beta).

There is a lot more Firefly can do, and we will explore these additional options and procedures in the coming articles.

Author Bio

Joseph Labrecque is a Teaching Assistant Professor and Instructor of Technology at the University of Colorado Boulder, an Adobe Education Leader, and a member of Adobe Partners by Design. He is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023


Building and deploying Web App using LangChain

Avratanu Biswas
26 Jun 2023
12 min read
So far, we've explored the LangChain modules and how to use them (refer to the earlier blog post on LangChain Modules here). In this section, we'll focus on the LangChain Indexes and Agent modules and also walk through the process of creating and launching a web application that everyone can access. To make things easier, we'll be using Databutton, an all-in-one online workspace to build and deploy web apps, integrated with Streamlit, a Python web development framework known for its support in building interactive web applications.

What are LangChain Agents?

In simpler terms, LangChain Agents are tools that enable Large Language Models (LLMs) to perform various actions, such as accessing Google search, executing Python calculations, or making SQL queries, thereby empowering LLMs to make informed decisions and interact with users by using tools and observing their outputs. The official documentation of LangChain describes Agents as: "…there is an agent which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call…"

In building agents, there are several abstractions involved. The Agent abstraction contains the application logic, receiving user input and previous steps to return either an AgentAction (tool and input) or AgentFinish (completion information). Tools represent the actions agents can take, while Toolkits group tools for specific use cases (e.g., SQL querying). Lastly, the Agent Executor manages the iterative execution of the agent with the available tools. In this section, we will briefly explore these abstractions while using the Agent functionality to integrate tools, focusing primarily on building a real-world, easily deployable web application.

Indexes

This module provides utility functions for structuring documents using indexes and allowing LLMs to interact with them effectively. We will focus on one of the most commonly used retrieval systems, where indexes are used to find the most relevant documents based on a user's query. Additionally, LangChain supports various index and retrieval types, with a focus on vector databases for unstructured data. We will explore this component in detail as it can be leveraged in a wide number of real-world applications.

Image 1: LangChain workflow by Author

The figure shows the workflow of a question-and-answer generation interface using a retrieval index, leveraging the index types which LangChain provides. Indexes are primarily of four types: Document Loaders, Text Splitters, VectorStores, and Retrievers. Briefly, (a) the documents fetched from any data source are split into chunks using text splitter modules, (b) embeddings are created, (c) the embeddings are stored in a vector store index (vector databases such as Chroma, Pinecone, Weaviate, etc.), and (d) queries from the user are answered via a retrieval QA chain.

We will use the WikipediaLoader to load Wikipedia documents related to our query "LangChain" and retrieve the metadata and a portion of the page content of the first document.

from langchain.document_loaders import WikipediaLoader
docs = WikipediaLoader(query='LangChain', load_max_docs=2).load()
docs[0].metadata
docs[0].page_content[:400]

CharacterTextSplitter is used to split the loaded documents into smaller chunks for further processing.

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=4000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)

The OpenAIEmbeddings module is then employed to generate embeddings for the text chunks.

from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)

We will use the Chroma vector store, which is created from the generated text chunks and embeddings, allowing for efficient storage and retrieval of vectorized data. Next, the RetrievalQA module is instantiated with an OpenAI LLM and the created retriever, setting up a question-answering system.

from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
qa = RetrievalQA.from_chain_type(llm=OpenAI(openai_api_key=OPENAI_API_KEY), chain_type="stuff", retriever=retriever)

At this stage, we can easily seek answers from the stored indexed data. For instance:

query = "What is LangChain?"
qa.run(query)

Output: LangChain is a framework designed to simplify the creation of applications using large language models (LLMs).

query = "When was LangChain founded?"
qa.run(query)

Output: LangChain was founded in October 2022.

query = "Who is the founder?"
qa.run(query)

Output: The founder of LangChain is Harrison Chase.

The Q&A functionality implemented using the retrieval chain provides reasonable answers to most of our queries. The different types of indexes provided by LangChain can be leveraged for various real-world use cases involving data structuring and retrieval. Moving forward, we will focus on the final component, the Agent. We will not only gain a hands-on understanding of its usage but also build and deploy a web app using an online workspace called Databutton.
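For reference, the snippets above can be assembled into a single runnable script. This is a minimal sketch only, assuming the same langchain and openai library versions used in this article, your own API key, and the wikipedia and chromadb packages installed.

# Minimal end-to-end sketch of the retrieval QA flow described above.
# Assumptions: langchain/openai versions as used in this article, plus the
# wikipedia and chromadb packages; replace the placeholder key with your own.
from langchain.document_loaders import WikipediaLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

OPENAI_API_KEY = "sk-..."  # placeholder

# 1. Load source documents
docs = WikipediaLoader(query="LangChain", load_max_docs=2).load()

# 2. Split them into chunks
texts = CharacterTextSplitter(chunk_size=4000, chunk_overlap=0).split_documents(docs)

# 3. Embed the chunks and store them in a vector store
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
db = Chroma.from_documents(texts, embeddings)

# 4. Build a retrieval QA chain and query it
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(openai_api_key=OPENAI_API_KEY),
    chain_type="stuff",
    retriever=db.as_retriever(),
)
print(qa.run("What is LangChain?"))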
Building a Web App using Databutton

Prerequisites

To begin using Databutton, all that is required is to sign up through their official website. Once logged in, we can either create a blank template app from scratch or choose from the pre-existing templates provided by Databutton.

Image by Author: Screen grab showing how to start working with a new blank app

Once the blank app is created, we get our online workspace, consisting of several features for building and deploying the app. We can immediately begin writing our code within the online editor. The only requirement at this stage is to include the necessary packages or dependencies that our app requires.

Image by Author: Screen grab showing the different components available within the Databutton app's online workspace

Databutton's workspace initialization includes some essential packages by default. However, for our specific app, we need to add two additional packages, openai and langchain. This can be easily accomplished within the "configuration" workspace of Databutton.

Image by Author: Screen grab of the configuration options within Databutton's online workspace. Here we can add the additional packages we need; the workspace is generated with a few pre-installed dependencies.

Writing the code

Now that we have a basic understanding of Agents and their abstraction methods, let's put them to use, alongside incorporating some basic Streamlit syntax for the front end.

Importing the required modules: For building the web app, we will require the Streamlit library and several LangChain modules. Additionally, we will utilise a helper function that relies on the sys and io libraries for capturing and displaying function outputs. We will discuss the significance of this helper function towards the end to better understand its purpose.

# Modules to import
import streamlit as st
import sys
import io
import re
from typing import Callable, Any
from langchain.agents.tools import Tool
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain import LLMMathChain
from langchain import PromptTemplate

Using the LangChain modules and building the main user interface: We set the title of the app using the st.title() syntax and also enable the user to enter their OpenAI API key using the st.text_input() widget.

# Set the title of the app
st.title("LangChain `Agent` Module Usage Demo App")

# Get the OpenAI API key from the user
OPENAI_API_KEY = st.text_input(
    "Enter your OpenAI API Key to get started", type="password"
)

As we discussed in the previous sections, we need to define a template for the prompt that incorporates a placeholder for the user's query.

# Define a template for the prompt
template = """You are a friendly and polite AI Chat Assistant.
You must try to provide accurate but concise answers.
If you don't know the answer, just say "I don't know."

Question: {query}

Answer: """

# Create a prompt template object with the template
prompt = PromptTemplate(template=template, input_variables=["query"])

Next, we implement a conditional flow. If the user has provided an OpenAI API key, we proceed with the app. The user is asked to enter their query using the st.text_input() widget.

# Check if the user has entered an OpenAI API key
if OPENAI_API_KEY:
    # Get the user's query
    query = st.text_input("Ask me anything")

Once the user has inserted the correct API key, we proceed with the implementation of the LangChain modules. Some of these modules may be new to us, while others may have already been covered in previous sections. Next, we create instances of the OpenAI language model, the LLMMathChain for maths-related queries, and the LLMChain for general-purpose queries.

# Check if the user has entered a query
if query:
    # Create an instance of the OpenAI language model
    llm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
    # Create an instance of the LLMMathChain for math-related queries
    llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
    # Create an instance of the LLMChain for general-purpose queries
    llm_chain = LLMChain(llm=llm, prompt=prompt)

Following that, we create a list of tools that the agent will utilize. Each tool comprises a name, a corresponding function to handle the query, and a brief description.

# Create a list of tools for the agent
tools = [
    Tool(
        name="Search",
        func=llm_chain.run,
        description="Useful for when you need to answer general purpose questions",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="Useful for when you need to answer questions about math",
    ),
]

Further, we need to initialize a zero-shot agent with the tools and other parameters. This agent employs the ReAct framework to decide which tool to utilize based solely on the description associated with each tool, so it is essential to provide a description for each tool.

# Initialize the zero-shot agent with the tools and parameters
zero_shot_agent = initialize_agent(
    agent="zero-shot-react-description",
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
)

And now, finally, we can call the zero-shot agent with the user's query using the run(query) method.

# st.write(zero_shot_agent.run(query))

However, this would only yield the final outcome of the result within our Streamlit UI, without providing access to the underlying LangChain thought process (i.e. the verbose output) that we typically observe in a notebook environment. This information is crucial to understand which tools our agent is opting for based on the user query. To address this, a helper function called capture_and_display_output was created.

# Helper function to dump the LangChain verbose output / thought process
# Function to capture and display the output of a function
def capture_and_display_output(func: Callable[..., Any], *args, **kwargs) -> Any:
    # Redirect stdout to a string buffer
    original_stdout = sys.stdout
    sys.stdout = output_catcher = io.StringIO()

    # Call the function and capture the response
    response = func(*args, **kwargs)

    # Restore the original stdout and get the captured output
    sys.stdout = original_stdout
    output_text = output_catcher.getvalue()

    # Clean the output text by removing escape sequences
    cleaned_text = re.sub(r"\x1b\[([0-9]{1,2}(;[0-9]{1,2})?)?[m|K]", "", output_text)

    # Split the cleaned text into lines and concatenate them with line breaks
    lines = cleaned_text.split("\n")
    concatenated_string = "\n".join([s if s else "\n" for s in lines])

    # Display the captured output in an expander
    with st.expander("Thoughts", expanded=True):
        st.write(concatenated_string)

    return response

This function allows users to monitor the actions undertaken by the agent, and the response from the agent is displayed within the UI.

# Call the zero-shot agent with the user's query and capture the output
response = capture_and_display_output(zero_shot_agent.run, query)

Image by Author: Screen grab of the app in local deployment, displaying the entire verbose output, or rather the thought process
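Before moving on to deployment, it can help to sanity-check the agent pieces outside Streamlit. The following is a minimal command-line sketch under the same assumptions as the app code above (the legacy langchain and openai packages used in this article, plus your own API key). It is an illustration, not the deployed application code, and the template wording here is my own.

# Minimal command-line sketch of the same zero-shot agent, without Streamlit.
# Assumes the langchain/openai versions used in this article and a valid key.
from langchain.agents.tools import Tool
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain import LLMMathChain, PromptTemplate

OPENAI_API_KEY = "sk-..."  # placeholder

llm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)

# A simple general-purpose chain used as the "Search" tool
template = """Answer concisely. If you don't know the answer, just say "I don't know."

Question: {query}
Answer: """
llm_chain = LLMChain(llm=llm, prompt=PromptTemplate(template=template, input_variables=["query"]))

tools = [
    Tool(name="Search", func=llm_chain.run,
         description="Useful for general purpose questions"),
    Tool(name="Calculator", func=LLMMathChain.from_llm(llm=llm, verbose=True).run,
         description="Useful for questions about math"),
]

agent = initialize_agent(
    agent="zero-shot-react-description", tools=tools, llm=llm,
    verbose=True, max_iterations=3,
)

if __name__ == "__main__":
    # The verbose output prints the agent's tool choices and reasoning steps
    print(agent.run("What is 3.14 raised to the power of 2?"))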
Deploy and Testing of the App

The app can now be easily deployed by clicking the "Deploy" button on the top left-hand side. The deployed app will provide us with a unique URL that can be shared with everyone!

Image by Author: Screen grab of the Databutton online workspace showing the Deploy options

Yay! We have successfully built and deployed a LangChain-based web app from scratch. Here's the link to the app! The app also includes a view code page, which can be accessed via this link.

To test the web app, we will employ two different types of prompts: a general question that can be answered by any LLM, and a maths-related question. Our hypothesis is that the LangChain agent will intelligently determine which tool to execute and provide the most appropriate response. Let's proceed with the testing to validate our assumption.

Image by Author: Screen grabs from the deployed web app

Two different prompts were used to validate our assumptions. Based on the thought process (displayed in the UI under the Thoughts expander), we can easily interpret which Tool has been chosen by the Agent. (Left) Usage of the LLMMathChain Tool. (Right) Usage of a simple LLMChain Tool.

Conclusion

To summarise, we have not only explored various aspects of working with LangChain and LLMs but have also successfully built and deployed a web app powered by LangChain. This demonstrates the versatility and capabilities of LangChain in enabling the development of powerful applications.

References

LangChain Agents official documentation: https://python.langchain.com/en/latest/modules/agents.html
Databutton: https://www.databutton.io/
Streamlit: https://streamlit.io/
Build a Personal Search Engine Web App using Open AI Text Embeddings: https://medium.com/@avra42/build-a-personal-search-engine-web-app-using-open-ai-text-embeddings-d6541f32892d
Part 1: Using LangChain for Large Language Model-powered Applications: https://www.packtpub.com/article-hub/using-langchain-for-large-language-model-powered-applications
Deployed web app: https://databutton.com/v/23ks6sem
Source code for the app: https://databutton.com/v/23ks6sem/View_Code

Author Bio

Avratanu Biswas, Ph.D. Student (Biophysics), Educator, and Content Creator (Data Science, ML & AI).

Twitter    YouTube    Medium    GitHub


Emotionally Intelligent AI: Transforming Healthcare with Wysa

Julian Melanson
22 Jun 2023
6 min read
Artificial Intelligence is reshaping the boundaries of industries worldwide, and healthcare is no exception. An exciting facet of this technological revolution is the emergence of empathetic AI: sophisticated algorithms designed to understand and respond to human emotions.

The advent of AI in healthcare has promised efficiency, accuracy, and predictability. However, a crucial element remained largely unexplored: the human component of empathetic understanding. Empathy, defined as the capacity to understand and share the feelings of others, is fundamental to human interactions. Especially in healthcare, a practitioner's empathy can bolster patients' immune responses and overall well-being. Unfortunately, complex emotions associated with medical errors, such as hurt, frustration, and depression, often go unaddressed. Furthermore, emotional support is strongly correlated with the prognosis of chronic diseases such as cardiovascular disorders, cancer, and diabetes, underscoring the need for empathetic care. So how can we integrate empathy into AI? To find the answer, let's examine Wysa, an AI-backed mental health service platform.

Wysa, leveraging AI's power, simulates a conversation with a chatbot to provide emotional support to users. It offers an interactive platform for people experiencing mood swings, stress, and anxiety, delivering personalized suggestions and tools to manage their mental health. This AI application extends beyond mere data processing and ventures into the realm of human psychology, demonstrating a unique fusion of technology and empathy.

In 2022, the U.S. Food and Drug Administration (FDA) awarded Wysa the Breakthrough Device Designation. This designation followed an independent, peer-reviewed clinical trial published in the Journal of Medical Internet Research (JMIR). The study demonstrated Wysa's efficacy in managing chronic musculoskeletal pain and associated depression and anxiety, positioning it as a potential game-changer in mental health care.

Wysa's toolset is primarily based on cognitive behavioral therapy (CBT), a type of psychotherapy that helps individuals change unhelpful thought patterns. It deploys a smartphone-based conversational agent to deliver CBT, effectively reducing symptoms of depression and anxiety, improving physical function, and minimizing pain interference.

The FDA Breakthrough Device program is designed to expedite the development and approval of innovative medical devices and products. By granting this designation, the FDA acknowledged Wysa's potential to transform the treatment landscape for life-threatening or irreversibly debilitating diseases. This prestigious endorsement facilitates efficient communication between Wysa and the FDA's experts, accelerating the product's development during the premarket review phase.

Wysa's success encapsulates the potential of empathetic AI to revolutionize healthcare. However, to fully capitalize on this opportunity, healthcare organizations need to revise and refine their strategies. An effective emotional support mechanism, powered by empathetic AI, can significantly enhance patient safety, satisfaction scores, and ultimately, quality of life. For this to happen, continued development of technologies that cater to patients' emotional needs is paramount.

While AI's emergence in healthcare has often been viewed through the lens of improved efficiency and decision-making, the human touch should not be underestimated. As Wysa demonstrates, AI has the potential to extend beyond its traditional boundaries and bring a much-needed sense of empathy into the equation. An emotionally intelligent AI could be instrumental in providing round-the-clock emotional support, thereby revolutionizing mental health care.

As we advance further into the AI era, the integration of empathy into AI systems signifies an exciting development. AI platforms like Wysa, which blend technological prowess with human-like understanding, could be a pivotal force in transforming the healthcare landscape. As empathetic AI continues to evolve, it holds the promise of bridging the gap between artificial and human intelligence, ultimately enhancing patient care in the healthcare sector.

A Step-By-Step Guide To Using Wysa

Download the App: Android users can download Wysa from the Google Play Store. If you're an Apple user, you can find Wysa in the Apple App Store.

Explore the App: Once installed, you can explore Wysa's in-app activities, which feature various educational modules, or "Packs". These packs cover a range of topics, from stress management and managing anger to coping with school stress and improving sleep.

Engage with the Wysa Bot: Each module features different "exercises" guided by the Wysa AI bot, a friendly penguin character. These exercises may involve question-and-answer prompts, mindfulness activities, or short exercise videos. While all the modules can be viewed in the free app, only one exercise per module is accessible. To unlock the entire library, you'll need to upgrade to the premium app.

Consider the Therapy Option: Wysa also offers a "therapy" option, which gives you access to a mental health coach and all the content in the premium version. Do note that this service is not formal therapy as provided by licensed therapists. The coaches are based in the US or India, and while they can offer support and encouragement, they are not able to provide diagnoses or treatment.

Attend Live Sessions: Live sessions are carried out through instant messaging in the app, lasting 30 minutes each week. In between these live sessions, you can message your coach at any time and usually expect at least a daily response.

Complete Assigned Tasks: After each live session, your coach will assign you specific tasks to complete before your next session. You will complete these tasks guided by the Wysa AI bot.

Maintain Anonymity: An important feature of Wysa is its respect for user privacy. The app doesn't require you to create an account, enter your real name, or provide an email address. To get started, all you need is a nickname.

Remember, Wysa is a tool designed to help manage stress and anxiety, improve sleep, and promote overall mental wellbeing. However, it does not replace professional psychological or medical advice. Always consult with a healthcare professional if you are in need of immediate assistance or dealing with severe mental health issues.

Summary

Artificial intelligence (AI) is transforming healthcare in many ways, including by providing new tools for mental health management. One example of an AI-powered mental health app is Wysa, which uses conversational AI to help users cope with stress, anxiety, and depression. Wysa has been clinically proven to be effective in reducing symptoms of mental illness, and it can be used as a supplement to traditional therapy or as a standalone intervention.

As AI continues to develop, it is likely that we will see even more innovative ways to use this technology to improve mental health care. AI-powered apps like Wysa have the potential to make mental health care more accessible and affordable, and they can also help to break down the stigma around mental illness.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


Making the Most of ChatGPT Code Interpreter Plugin

Julian Melanson
22 Jun 2023
5 min read
As we stand on the threshold of a new age in Artificial Intelligence, OpenAI recently announced the rollout of its ChatGPT Code Interpreter plugin for ChatGPT Plus subscribers, marking a significant evolution in the capabilities of AI. This groundbreaking technology is not merely another link in the AI chain, but rather an exemplar of the transformative capabilities AI can bring to the table, particularly in programming and data analysis.

The ChatGPT Code Interpreter plugin positions itself as an invaluable tool for developers, promising to significantly augment and streamline their workflows. Among its multiple functionalities, three stand out due to their potential impact on programming and data analysis, and they work in tandem to extract valuable insights.

Data Visualization

At its core, the ChatGPT Code Interpreter excels in the domain of data visualization. In a world increasingly reliant on data, the ability to transform complex datasets into visually comprehensible and comprehensive formats is priceless. The plugin simplifies the arduous task of crunching through complex numbers and data sets, producing insightful visualizations without the need for prompt engineering. This proficiency in creatively rendering data echoes the power of platforms like Wolfram, introducing a new era of ease and efficiency in data comprehension.

Here's an example from Justin Fineberg's TikTok: Click here

File Conversion

The ChatGPT Code Interpreter extends its versatility into the realm of file conversion. This feature provides a simple solution to the often cumbersome task of converting files from one format to another. Its impressive functionality ranges from changing audio file formats, like MP3 to WAV, to converting an image into a text file. This capability paves the way for more accessible content transformation, such as easily converting PDF documents into editable text files.

Here's an example from Twitter user Riley Goodside: Click here

Python Code Execution

What sets the ChatGPT Code Interpreter plugin apart is its prowess in executing Python code within a sandboxed, firewalled execution environment. This essentially means that all the data visualizations are generated using Python, thereby lending the plugin an additional layer of power and versatility.

As the plugin is still in its alpha stage, gaining access currently involves joining a waitlist, and OpenAI has not publicly stated when a large-scale rollout will take place. However, those eager to explore its features have an alternative route via Discord's GPT Assistant bot, which already incorporates the Code Interpreter plugin to enhance its features and functionalities.

This revolutionary plugin is not merely an advanced code interpreter; it's a complete tool that uses Python to generate code from natural language input and run it. The results are then presented within the dialogue box. The chatbot's functionality extends to solving mathematical problems, data analysis and visualization, and file conversion, with an adeptness in these domains that rivals experienced coders.

Beyond its immediate capabilities, the ChatGPT Code Interpreter plugin has broader implications for the programming and data analysis industry. It is reminiscent of GitHub Copilot X in its design, aimed at making workflows more creative and efficient. For instance, when asked to plot a function, the plugin not only generates the graph but also offers the option to 'show work', revealing the exact code it created and executed to generate the graph.
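To give a sense of what that 'show work' output looks like, here is a hypothetical sketch of the kind of plotting code the plugin might generate and execute when asked to plot a simple function. The exact code varies with the prompt, so treat this only as an illustration, not actual plugin output.

# Hypothetical illustration of the sort of Python the Code Interpreter
# might produce for a prompt like "plot y = sin(x) between 0 and 2*pi".
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)  # 200 evenly spaced sample points
y = np.sin(x)

plt.plot(x, y)
plt.title("y = sin(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True)
plt.show()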
The accessibility and user-friendliness of the plugin are expected to democratize the coding landscape, opening up the world of programming to a wider audience. This feature holds tremendous potential to accelerate collaboration, allowing technical and non-technical team members to work together more effectively on data analysis projects.

Practical use cases for the ChatGPT Code Interpreter extend beyond the realm of programming, spanning various industries. Marketing teams, for instance, can leverage its capabilities to analyze customer data, segment audiences, and create targeted campaigns. Finance teams can utilize the plugin for tasks like financial modeling, forecasting, and risk analysis. Similarly, human resource teams can use it to analyze employee data, identify performance trends, and make data-driven hiring decisions. Even the healthcare sector stands to benefit, as the tool can analyze patient data and identify patterns in health outcomes, thus enhancing patient care.

Accessing ChatGPT Code Interpreter

If you're selected from the waitlist, here's a step-by-step guide on how to install the plugin:

Ensure you're a ChatGPT Plus subscriber, paying the $20 monthly fee.
Log into ChatGPT on the OpenAI website.
Click on 'Settings', then the three-dot menu next to your login name.
In the 'Beta features' menu, enable 'Plug-ins'. For web browsing access, enable that too.
Close the menu, find the language model selector, and choose 'Plugin Store' from the drop-down.
Click 'All plug-ins', find 'Code Interpreter' in the list, and install it.
Now you can interact with ChatGPT using the Code Interpreter plug-in.

Summary

The ChatGPT Code Interpreter plugin presents a transformative approach to programming and data analysis, automating code generation, facilitating data exploration, and improving code quality. This plugin empowers users to derive more value from their data, aiding in the formulation of strategic insights. As AI continues to evolve, tools like the ChatGPT Code Interpreter will undoubtedly play an instrumental role in shaping the future of data interaction and understanding, ultimately revolutionizing the landscape of data analysis.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


Creating CI/CD Pipelines Effortlessly using ChatGPT

Sagar Lad
22 Jun 2023
5 min read
In the fast-paced world of software development, continuous integration and continuous delivery (CI/CD) pipelines have become crucial for efficient and reliable software deployment. However, building and maintaining these pipelines can be a daunting task, requiring extensive knowledge and expertise. But what if there was a way to simplify this process and make it more accessible to developers of all skill levels? Enter ChatGPT, the groundbreaking language model developed by OpenAI. In this article, we explore how ChatGPT can effortlessly guide developers in building robust CI/CD pipelines, revolutionizing the way software is delivered.

What is DevOps?

A DevOps team is a combination of the development team and the operations team working together in order to accelerate time to market and the quality of software development. The DevOps way of working is more a shift in mindset, which has a major effect on the team's and the organization's way of working. With the DevOps mindset, there are no silos between the development and operations teams. The DevOps team mainly focuses on automation to increase reliability by enabling continuous integration and continuous deployment pipelines.

Image 1: DevOps Lifecycle

The DevOps lifecycle mainly consists of setting up an automated and collaborative environment to discover, plan, build, and test the artefacts. Once the artefacts are downloaded, they can be deployed to their respective environments. Throughout the DevOps lifecycle, the whole team has to work closely to maintain the alignment, velocity, and quality of the deliverables.

DevOps implementation mainly involves the activities below:

Discover the product
An iterative approach for agile planning
Build pipelines for branching, merging, and workflows for the development process
Automated testing using CI pipelines
Deployment using release pipelines
Operationalize and manage end-to-end IT delivery
Continuous monitoring of the deployed software
Continuous feedback and improvements for future releases

ChatGPT for DevOps

Automation: DevOps tools like Jenkins, Ansible, Terraform, etc. provide workflow automation. On top of these, we can use ChatGPT to automate DevOps activities that otherwise require manual intervention.

Automation testing: Testing scenarios and automated testing can be enabled using ChatGPT as part of the continuous build pipeline.

Release documentation: Maintaining documentation for each feature is a manual and tedious task. With the help of ChatGPT, we can write code in any language to create automated documentation for each release. With ChatGPT, we can generate code for Bicep templates and YAML for Azure DevOps, Terraform, Jenkins, and Lambda code for DevOps activities.

DevSecOps implementation: Security is a critical factor in developing any software. ChatGPT can be used to monitor cloud resources, manage and detect security vulnerabilities, and scan for networking issues, open ports, and database and storage configurations as per industry standards and requirements.

Continuous monitoring: A customised monitoring dashboard is key for proactive monitoring and taking data-driven decisions. We can use ChatGPT to generate dashboard code including a variety of components such as charts, graphs, and tables.

Let's now ask ChatGPT, for each of these steps, to create a DevOps process flow for a software project.
The first step is to set up the Azure DevOps repository structure, including the branching policies and pre- and post-deployment approvals:

Image 2: Azure DevOps repo structure and branching policies

As you can see, the recommendation from ChatGPT is to create a new Azure DevOps repository with proper naming conventions. In order to set up the branching policies, we need to configure the build validations, set up the reviewers, status checks, and work item linking in Azure Boards.

Image 3: Azure DevOps Continuous Integration Pipeline

Here, we have requested ChatGPT to create a YAML continuous integration build pipeline in Azure DevOps, including code quality checks and testing. ChatGPT provides us with a YAML pipeline that has multiple stages: one for SonarQube, one for Fortify code quality checks, one for automation testing, and one to download the artefacts. Once the CI pipeline is ready, let's ask ChatGPT to build an IaC (Infrastructure as Code) pipeline to deploy Azure services like Azure Data Factory and Azure Databricks.

Image 4: Azure DevOps Continuous Deployment Pipelines

Here, we can see the step-by-step process to build the continuous deployment pipelines, which use a shell script to deploy Azure Data Factory and the Azure CLI to deploy Azure Databricks. This pipeline also integrates with the branches to include, and it uses variable groups to create a generic pipeline. Let's see how we can build monitoring dashboards using ChatGPT.

Conclusion

ChatGPT is not a threat to DevOps engineers; rather, it will boost productivity by embracing the technology to set up and implement the DevOps way of working. In order to get the desired results, detailed prompt input should be provided so that the content generated by ChatGPT meets expectations.

Author Bio

Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building enterprise-grade, intelligent Azure Data and Analytics solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.

Link: Medium, Amazon, LinkedIn


Creating Essay Generation Methods with ChatGPT API

Martin Yanev
22 Jun 2023
11 min read
This article is an excerpt from the book Building AI Applications with ChatGPT API by Martin Yanev. The book will help you master the ChatGPT, Whisper, and DALL-E APIs by building nine innovative AI projects.

In this section, we will dive into the implementation of the key functions within the essay generator application. These functions are responsible for generating the essay based on user input and saving the generated essay to a file. By understanding the code, you will be able to grasp the inner workings of the application and gain insight into how the essay generation and saving processes are accomplished.

We will begin by exploring the generate_essay() function. This function will retrieve the topic entered by the user from the input field. It will then set the engine type for the OpenAI API, create a prompt using the topic, and make a request to the OpenAI API for essay generation. The response received from the API will contain the generated essay, which will be extracted and displayed in the essay output area of the application. To add that functionality, simply remove the pass placeholder and follow the code below.

def generate_essay(self):
    topic = self.topic_input.text()
    length = 500
    engine = "text-davinci-003"
    prompt = f"Write an {length/1.5} words essay on the following topic: {topic} \n\n"
    response = openai.Completion.create(engine=engine, prompt=prompt, max_tokens=length)
    essay = response.choices[0].text
    self.essay_output.setText(essay)

Here, we retrieve the topic entered by the user from the topic_input QLineEdit widget and assign it to the topic variable using the text() method. This captures the user's chosen topic for the essay. For now, we define the length variable and set it to 500, indicating the desired length of the generated essay in tokens. We will modify this value later by adding a dropdown menu with different token sizes to generate essays of different lengths.

We also specify text-davinci-003 as the engine used for the OpenAI API, which will generate the essay. You can adjust this value to utilize different language models or versions based on your requirements. We then create the prompt variable, a string containing the prompt for the essay generation. It is constructed by concatenating the text "Write a {length/1.5} words essay on the following topic:", where the length/1.5 expression specifies how many words our essay should be; we divide the token count by 1.5 because one word in English represents about 1.5 tokens. After specifying the instructions, we pass the topic variable into the prompt. This prompt serves as the initial input for the essay generation process and provides context for the generated essay.

Once all variables are defined, we make a request to the ChatGPT API with the specified engine, prompt, and maximum number of tokens (in this case, 500). The API processes the prompt and generates a response, which is stored in the response variable. From the response, we extract the generated essay by accessing the text attribute of the first choice. This represents the generated text of the essay. Finally, we pass the AI response to essay_output, displaying it in the user interface for the user to read and interact with.

Moving on, we will examine the save_essay() function. This function will retrieve the topic and the generated essay. It will utilize the docx library to create a new Word document and add the final essay to the document. The document will then be saved with a filename based on the provided topic, resulting in a Word document that contains the generated essay. After removing the pass keyword, you can implement the described functionality using the code snippet below.

def save_essay(self):
    topic = self.topic_input.text()
    final_text = self.essay_output.toPlainText()
    document = docx.Document()
    document.add_paragraph(final_text)
    document.save(topic + ".docx")

Here we retrieve the text entered in the topic_input widget and assign it to the topic variable using the text() method. This captures the topic entered by the user, which will be used as the filename for the saved essay. Next, we use the toPlainText() method on the essay_output widget to retrieve the generated essay text and assign it to the final_text variable. This ensures that the user can edit the ChatGPT-generated essay before saving it. By capturing the topic and the final text, we are now equipped to proceed with the necessary steps to save the essay to a file.

We can now use the docx library to create a new Word document by calling docx.Document(), which initializes an empty document. We then add a paragraph to the document by using the add_paragraph() method and passing in the final_text variable, which contains the generated essay text. This adds the generated essay as a paragraph to the document. We can now save the document by calling document.save() and providing a filename constructed from the topic variable, which represents the topic entered by the user. This saves the document as a Word file with the specified filename.

You can now test your Essay Generator by running the code in PyCharm and generating an essay following the steps below (see Figure 8.3):

Enter a topic: Write an essay topic of your choice in the Topic Input field. For this example, I have chosen the topic "Ancient Egypt".
Generate Essay: Perform a single click on the Generate Essay button. The app will reach the ChatGPT API, and within a few seconds you will have your essay displayed inside the Essay Output field.
Edit the Essay: You can edit the essay generated by the Artificial Intelligence before saving it.
Save: Perform a single click on the Save button. This action will save the edited essay to a Word document utilizing the save_essay() method. The Word document will be saved in the root directory of your project.

Figure 8.3: Essay Generator creating an "Ancient Egypt" essay

Once the essay has been saved to a Word document, you can reshare it with your peers, submit it as a school assignment, or use any Word styling options on it.
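For readers who want to try the two calls outside the PyQt interface, here is a minimal command-line sketch of the same generate-and-save flow. It assumes the legacy openai 0.x client and the python-docx package used in this excerpt, plus an API key of your own; it is an illustration, not the book's application code.

# Minimal sketch of the generate-and-save flow without the GUI.
# Assumptions: legacy openai 0.x client, python-docx installed, your own key.
import openai
import docx

openai.api_key = "sk-..."  # placeholder

topic = "Ancient Egypt"
length = 500  # maximum tokens for the completion

prompt = f"Write an {length/1.5} words essay on the following topic: {topic} \n\n"
response = openai.Completion.create(
    engine="text-davinci-003", prompt=prompt, max_tokens=length
)
essay = response.choices[0].text

# Save the generated essay to a Word document named after the topic
document = docx.Document()
document.add_paragraph(essay)
document.save(topic + ".docx")
print(f"Saved {topic}.docx")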
This section discussed the implementation of key functions in our essay generator application using the ChatGPT API. We built the generate_essay() method, which retrieved the user's topic input and sent a request to the ChatGPT API to generate an AI essay. We also developed the save_essay() method, which saved the generated essay in a Word document. In the next section, we will introduce additional functionality to the essay generator application. Specifically, we will allow the user to change the number of AI tokens used for generating the essay.

Controlling the ChatGPT API Tokens

In this section, we will explore how to enhance the functionality of the essay generator application by allowing users to control the number of tokens used when communicating with ChatGPT. By enabling this feature, users will be able to generate essays of different lengths, tailored to their specific needs or preferences. Currently, our application has a fixed value of 500 tokens, but we will modify it to include a dropdown menu that provides different options for token sizes.

To implement this functionality, we will make use of a dropdown menu that presents users with a selection of token length options. By selecting a specific value from the dropdown, users can indicate their desired length for the generated essay. We will integrate this feature seamlessly into the existing application, empowering users to customize their essay-generation experience.

Let's delve into the code snippet that will enable users to control the token length. You can add this code inside the initUI() method, just under the essay_output resizing:

self.essay_output.resize(1100, 500)
length_label = QLabel('Select Essay Length:', self)
length_label.move(327, 40)
self.length_dropdown = QComboBox(self)
self.length_dropdown.move(320, 60)
self.length_dropdown.addItems(["500", "1000", "2000", "3000", "4000"])

The code above introduces a QLabel, length_label, which serves as a visual indication of the purpose of the dropdown menu. It displays the text Select Essay Length to inform users about the functionality.

Next, we create a QComboBox, length_dropdown, which provides users with a dropdown menu to choose the desired token length. It is positioned below length_label using the move() method. The addItems() function is then used to populate the dropdown menu with a list of token length options, ranging from 500 to 4000 tokens. Users can select their preferred length from this list.

The final step is to implement the functionality that allows users to control the number of tokens used when generating the essay; for this, we need to modify the generate_essay() function. The modified code should be the following:

def generate_essay(self):
    topic = self.topic_input.text()
    length = int(self.length_dropdown.currentText())
    engine = "text-davinci-003"
    prompt = f"Write an {length/1.5} words essay on the following topic: {topic} \n\n"
    response = openai.Completion.create(engine=engine, prompt=prompt, max_tokens=length)
    essay = response.choices[0].text
    self.essay_output.setText(essay)

In the modified code, the length variable is updated to retrieve the selected token length from the length_dropdown dropdown menu. The currentText() method is used to obtain the currently selected option as a string, which is then converted to an integer using the int() function. This allows the chosen token length to be assigned to the length variable dynamically.

By making this modification, the generate_essay() function will utilize the user-selected token length when making the request to the ChatGPT API for essay generation. This ensures that the generated essay will have the desired length specified by the user through the dropdown menu.

We can now click on the Run button in PyCharm and verify that the dropdown menu works properly. As shown in Figure 8.4, a click on the dropdown menu will show users all the options specified by the addItems() function.

Figure 8.4: Controlling essay length

The user will be able to choose a token amount between 500 and 4000. Now you can select the 4000 tokens option, resulting in a longer generated essay. We can follow the steps from our previous example and verify that the ChatGPT API generates a longer essay for "Ancient Egypt" when the number of tokens is increased from 500 to 4000.

This is how you can enhance the functionality of an essay generator application by allowing users to control the number of tokens used when communicating with ChatGPT. By selecting a specific value from the dropdown menu, users can now indicate their desired length for the generated essay. We achieved that by using QComboBox to create the dropdown menu. The modified generate_essay() method retrieves the selected token length from the dropdown menu and dynamically assigns it to the length variable.

Summary

In conclusion, leveraging the capabilities of the ChatGPT API to enhance essay generation opens up a world of interactive creativity. By incorporating practical examples and step-by-step instructions, we have explored how to build essay-generating elements and make them interact seamlessly with ChatGPT. This powerful combination allows for the production of compelling, coherent, and engaging essays effortlessly. With the ever-evolving potential of AI, the future of essay generation holds immense possibilities. By embracing these techniques, writers and researchers can unlock their full creative potential and revolutionize the way we generate written content.

Author Bio

Martin Yanev is an experienced Software Engineer who has worked in the aerospace and medical industries for over 8 years. He specializes in developing and integrating software solutions for air traffic control and chromatography systems. Martin is a well-respected instructor with over 280,000 students worldwide, and he is skilled in using frameworks like Flask, Django, Pytest, and TensorFlow. He is an expert in building, training, and fine-tuning AI systems with the full range of OpenAI APIs. Martin has dual master's degrees in Aerospace Systems and Software Engineering, which demonstrates his commitment to both the practical and theoretical aspects of the industry.

LinkedIn
Udemy

How to work with LangChain Python modules

Avratanu Biswas
22 Jun 2023
13 min read
This article is the second part of a series; please refer to Part 1 to learn how to get to grips with the LangChain framework and how to utilize it for building LLM-powered apps.

Introduction

In this section, we dive into the practical usage of LangChain modules. Building upon the previous overview of LangChain components, we will work within a Python environment to gain hands-on coding experience. However, it is important to note that this overview is not a substitute for the official documentation, and it is recommended to refer to the documentation for a more comprehensive understanding.

Choosing the Right Python Environment

When working with Python, Jupyter Notebook and Google Colab are popular choices for quickly getting started in the Python environment. Additionally, Visual Studio Code (VSCode), Atom, PyCharm, or Sublime Text integrated with a conda environment are also excellent options. While many of these can be used, Google Colab is used here for its convenience in quick testing and code sharing. Find the code link here.

Prerequisites

Before we begin, make sure to install the necessary Python libraries. Use the pip command within a notebook cell to install them.

Installing LangChain: In order to install the "LangChain" library, which is essential for this section, you can conveniently use the following command:

!pip install langchain

Regular Updates: Personally, I would recommend taking advantage of LangChain's frequent releases by regularly upgrading the package. Use the following command for this purpose:

!pip install langchain --upgrade

Integrating LangChain with LLMs: Previously, we discussed how the LangChain library facilitates interaction with Large Language Models (LLMs) provided by platforms such as OpenAI, Cohere, or HuggingFace. To integrate LangChain with these models, we need to follow these steps:

Obtain API Keys: In this tutorial, we will use OpenAI. We need to sign up to easily access the API keys for the various endpoints that OpenAI provides. The key must be kept confidential. You can obtain the API key via this link.

Install Python Package: Install the required Python package associated with your chosen LLM provider. For OpenAI language models, execute the command:

!pip install openai

Configuring the API Key for OpenAI: To initialize the API key for the OpenAI library, we will use the getpass Python library. Alternatively, you can set the API key as an environment variable.

# Importing the library
import getpass

OPENAI_API_KEY = getpass.getpass()

# In order to double-check
# print(OPENAI_API_KEY)  # not recommended

Running the above lines of code will create a secure text input widget where we can enter the API key obtained for accessing the OpenAI LLM endpoints. After hitting Enter, the inputted value is stored in the variable OPENAI_API_KEY, allowing it to be used for subsequent operations throughout our notebook.

We will explore the different LangChain modules in the sections below.

Prompt Template

We need to import the necessary module, PromptTemplate, from the langchain library. A multi-line string variable named template is created, representing the structure of the prompt and containing placeholders for the context, question, and answer, which are the crucial aspects of any prompt template.

Image by Author | Key components of a prompt template are shown in the figure.

A PromptTemplate object is instantiated using the template variable.
The input_variables parameter is provided with a list containing the variable names used in the template, in this case only the query:

from langchain import PromptTemplate

template = """ You are a Scientific Chat Assistant.
Your job is to answer scientific facts and evidence, in a bullet point wise.

Context: Scientific evidence is necessary to validate claims, establish credibility,
and make informed decisions based on objective and rigorous investigation.

Question: {query}

Answer:
"""

prompt = PromptTemplate(template=template, input_variables=["query"])

The generated prompt structure can be further utilized to dynamically fill in the question placeholder and obtain responses within the specified template format. Let's print our entire prompt!

print(prompt)

lc_kwargs={'template': ' You are an Scientific Chat Assistant.\nYour job is to reply scientific facts and evidence in a bullet point wise.\n\nContext: Scientific evidence is necessary to validate claims, establish credibility, \nand make informed decisions based on objective and rigorous investigation.\n\nQuestion: {query}\n\nAnswer: \n', 'input_variables': ['query']}
input_variables=['query'] output_parser=None partial_variables={}
template=' You are an Scientific Chat Assistant.\nYour job is to reply scientific facts and evidence in a bullet point wise.\n\nContext: Scientific evidence is necessary to validate claims, establish credibility, \nand make informed decisions based on objective and rigorous investigation.\n\nQuestion: {query}\n\nAnswer: \n'
template_format='f-string' validate_template=True

Chains

The LangChain documentation covers various types of LLM chains, which can be effectively categorized into two main groups: Generic chains and Utility chains.

Image 2: Chains

Chains can be broadly classified into Generic Chains and Utility Chains. (a) Generic chains are designed to provide general-purpose language capabilities, such as generating text, answering questions, and engaging in natural language conversations by leveraging LLMs. In contrast, (b) Utility chains are specialized to perform specific tasks or provide targeted functionalities. These chains are fine-tuned and optimized for specific use cases. Note that although Index-related chains can be classified into a sub-group, here we keep such chains under the banner of utility chains. They are often considered to be very useful while working with vector databases.

Since this is the very first time we are running an LLM chain, we will walk through the code in detail.

We need to import the OpenAI LLM module from langchain.llms and the LLMChain module from the langchain Python package.

Then, an instance of the OpenAI LLM is created, using arguments such as temperature (affects the randomness of the generated responses), openai_api_key (the API key for OpenAI which we just assigned before), model (the specific OpenAI language model to be used - other models are available here), and streaming.
Note that the verbose argument is pretty useful for understanding the abstraction that LangChain provides under the hood while executing our query.

Next, an instance of LLMChain is created, providing the prompt (the previously defined prompt template) and the LLM (the OpenAI LLM instance).

The query or question is defined as the variable query.

Finally, the llm_chain.run(query) line executes the LLMChain with the specified query, generating the response based on the defined prompt and the OpenAI LLM:

# Importing the OpenAI LLM module
from langchain.llms import OpenAI
# Importing the LLMChain module
from langchain import LLMChain

# Creating an instance of the OpenAI LLM
llm = OpenAI(temperature=0.9, openai_api_key=OPENAI_API_KEY, model="text-davinci-003", streaming=True)

# Creating an instance of the LLMChain with the provided prompt and OpenAI LLM
llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)

# Defining the query or question to be asked
query = "What is photosynthesis?"

# Running the LLMChain with the specified query
print(llm_chain.run(query))

Let's have a look at the response that is generated after running the chain with and without verbose.

a) With verbose = True:

Prompt after formatting:
You are an Scientific Chat Assistant. Your job is to reply scientific facts and evidence in a bullet point wise.
Context: Scientific evidence is necessary to validate claims, establish credibility, and make informed decisions based on objective and rigorous investigation.
Question: What is photosynthesis?
Answer:
> Finished chain.
• Photosynthesis is the process used by plants, algae and certain bacteria to convert light energy from the sun into chemical energy in the form of sugars.
• Photosynthesis occurs in two stages: the light reactions and the Calvin cycle.
• During the light reactions, light energy is converted into ATP and NADPH molecules.
• During the Calvin cycle, ATP and NADPH molecules are used to convert carbon dioxide into sugar molecules.

b) With verbose = False:

• Photosynthesis is a process used by plants and other organisms to convert light energy, normally from the sun, into chemical energy which can later be released to fuel the organisms' activities.
• During photosynthesis, light energy is converted into chemical energy and stored in sugars.
• Photosynthesis occurs in two stages: light reactions and the Calvin cycle. The light reactions trap light energy and convert it into chemical energy in the form of the energy-storage molecule ATP. The Calvin cycle uses ATP and other molecules to create glucose.

It seems like our general-purpose LLMChain has done a pretty decent job and given a reasonable output by leveraging the LLM.

Now let's move on to the utility chain and understand it, using a simple code snippet:

from langchain import OpenAI
from langchain import LLMMathChain

llm = OpenAI(temperature=0.9, openai_api_key=OPENAI_API_KEY)

# Using the LLMMathChain with the LLM defined in the Prompt Template section
llm_math = LLMMathChain.from_llm(llm=llm, verbose=True)

question = "What is 4 times 5"
llm_math.run(question)
# You know what the response would be 🎈

Here the utility chain serves a specific function, i.e. to solve a fundamental maths question using the LLMMathChain. It's crucial to look at the prompt used under the hood for such chains.
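As a quick illustration of that point, you can print the prompt a utility chain uses before running it. This is a minimal sketch; the exact attribute that exposes the prompt can differ between LangChain releases, so treat the attribute names below as assumptions to adapt to your installed version:

# Peek at the prompt the math chain sends to the LLM.
# Older releases expose it directly on the chain; newer ones nest it in llm_chain.
try:
    print(llm_math.prompt.template)
except AttributeError:
    print(llm_math.llm_chain.prompt.template)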
In addition, a few more notable utility chains are worth mentioning:

BashChain: A utility chain designed to execute Bash commands and scripts.
SQLDatabaseChain: This utility chain enables interaction with SQL databases.
SummarizationChain: The SummarizationChain is designed specifically for text summarization tasks.

Such utility chains, along with the other chains available in the LangChain framework, provide specialized functionalities and ready-to-use tools that can be utilized to expedite and enhance various aspects of the language processing pipeline.

Memory

Until now, we have seen that each incoming query or input to the LLM or to its subsequent chain is treated as an independent interaction, meaning it is "stateless" (in simpler terms, information IN, information OUT). This can be considered one of the major drawbacks, as it hinders the ability to provide a seamless and natural conversational experience for users who are seeking reasonable responses further on. To overcome this limitation and enable better context retention, LangChain offers a broad spectrum of memory components that are extremely helpful.

Image by Author | The various types of Memory modules that LangChain provides.

By utilizing the supported memory components, it becomes possible to remember the context of the conversation, making it more coherent and intuitive. These memory components allow for the storage and retrieval of information, enabling the LLMs to have a sense of continuity. This means they can refer back to previous relevant contexts, which greatly enhances the conversational experience for users. A typical example of such memory-based interaction is the very popular chatbot ChatGPT, which remembers the context of our conversations.

Let's have a look at how we can leverage such a possibility using LangChain:

from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

In the above code, we have initialized an instance of the ConversationChain class, configuring it with the OpenAI language model, enabling verbose mode for detailed output, and utilizing a ConversationBufferMemory for memory management during conversations. Now, let's begin our conversation:

conversation.predict(input="Hi there! I'm Avra")

Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there! I'm Avra
AI:
> Finished chain.
' Hi, Avra! It's nice to meet you. My name is AI. What can I do for you today?'

Let's add a few more contexts to the chain, so that later we can test the context memory of the chain.

conversation.predict(input="I'm interested in soccer and building AI web apps.")

Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there! I'm Avra
AI:  Hi Avra! It's nice to meet you. My name is AI. What can I do for you today?
Human: I'm interested in soccer and building AI web apps.
AI:
> Finished chain.
' That's great!
Soccer is a great sport and AI web apps are a great way to explore the possibilities of artificial intelligence. Do you have any specific questions about either of those topics?'

Now, we make a query which requires the chain to trace back to its memory storage and provide a reasonable response based on it.

conversation.predict(input="Who am I and what's my interest ?")

Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there! I'm Avra
AI:  Hi Avra! It's nice to meet you. My name is AI. What can I do for you today?
Human: I'm interested in soccer and building AI web apps.
AI:  That's great! Soccer is a great sport and AI web apps are a great way to explore the possibilities of artificial intelligence. Do you have any specific questions about either of those topics?
Human: Who am I and what's my interest ?
AI:
> Finished chain.
' That's a difficult question to answer. I don't have enough information to answer that question. However, based on what you've told me, it seems like you are Avra and your interests are soccer and building AI web apps.'

The above response highlights the significance of the ConversationBufferMemory chain in retaining the context of the conversation. It would be worthwhile to try out the above example without a buffer memory to get a clear perspective on the importance of the memory module. Additionally, LangChain provides several memory modules that can enhance our understanding of memory management in different ways to handle conversational contexts.

Moving forward, we will delve into the next section, where we will focus on the final two components, "Indexes" and "Agents." In that section, we will not only gain a hands-on understanding of their usage but also build and deploy a web app using an online workspace called Databutton.

References

LangChain Official Docs - https://python.langchain.com/en/latest/index.html
Code available for this section (Google Colab) - https://colab.research.google.com/drive/1_SpAvehzfbYYdDRnhU6v9-KHwIHMC1yj?usp=sharing
Part 1: Using LangChain for Large Language Model-powered Applications - https://www.packtpub.com/article-hub/using-langchain-for-large-language-model-powered-applications
Part 3: Building and deploying Web App using LangChain <Insert Link>
How to build a Chatbot with ChatGPT API and a Conversational Memory in Python - https://medium.com/@avra42/how-to-build-a-chatbot-with-chatgpt-api-and-a-conversational-memory-in-python-8d856cda4542
Databutton - https://www.databutton.io/

Author Bio

Avratanu Biswas, Ph.D. Student (Biophysics), Educator, and Content Creator (Data Science, ML & AI).

Twitter | YouTube | Medium | GitHub

Exploring OpenAI ChatGPT as a Researcher, a Writer and a Software User

Urszula Witherell
22 Jun 2023
6 min read
Artificial Intelligence is now a hot topic, though the concept is not that new. In translation from one language to another, it has been used since the 1960s to prepare rough drafts, with the idea that the text would be refined by human editors. We interviewed Urszula Witherell, the author of the book Adobe Acrobat Ninja, for her perspectives on using AI, specifically the freely available ChatGPT, as a Reader, a Writer, and a Software User.

What is your experience using ChatGPT as a Reader/Researcher?

I am a curious person, and my mind constantly questions where and why. I love exploring new ideas, learning about people living in different cultures, and exploring different perspectives on topics that matter to me. AI as a research tool is wonderful, although very quickly I learned about its limitations. A simple test was to search for my own name. It knew nothing. My ego was not bruised, as I am not a celebrity nor want to be one. On the other hand, a simple search on Google very quickly found my name, my address, my family members, and everything I did online going back at least a decade. ChatGPT insisted that it knew nothing about me since its training ended in September 2021, two years ago. Similarly, when asked about a company website that has been active for at least 20 years, it gave this reply: "I apologize for the confusion, but as an AI language model, I don't have the capability to browse the internet or visit specific websites. My responses are based on pre-existing knowledge up until September 2021. Therefore, I'm unable to provide a real-time description or analysis of the website" or "Without specific information, I can't provide details about the website's content, purpose, or any other features it may have. It's always a good idea to visit the website directly to gather accurate and up-to-date information." This made me ask: how is ChatGPT trained? Where does the data come from?

Once, I manually copied information from the website and asked ChatGPT to generate a new response. It expanded on the dry data and formulated a nice description of the website content.

Image 1: ChatGPT cannot figure out the content of the website

Figure 2: ChatGPT expanded the description

When I asked for descriptions of cities, landmarks, or public figures, ChatGPT did provide sufficient information in full sentences with rich vocabulary. When asked to regenerate responses, it provided fresh and even better-formulated descriptions. Again, not knowing where the data for the AI comes from, I depended on what I already knew about the locations to evaluate the accuracy of the response. It made me think about the famous quote: "Trust but verify". So, compare results from multiple other sources to get results that you can trust. Asked about liking a city, this is what I got: "As an AI language model, I don't have personal opinions or emotions, so I don't have the ability to like or dislike anything. However, I can provide information." Not quite a chatty, opinionated friend, at least not yet.

What is your experience using ChatGPT as a Writer?

I think this is where the AI can be very helpful, as it can be a thesaurus, spell-checker, and grammar checker all at once. It can rewrite sentences; from just a rough draft it can expand descriptions and literally be your personal assistant during the writing process. You can see this in the screenshot example. As a bilingual person, I often wonder how to express ideas in Polish for my friends in Poland. I discovered that translations of English text to Polish are very good.
Grammar is correct, vocabulary is rich enough, and sentences are full and mostly sound natural. Much better than what I had experienced in the past using Google Translate, when translated text needed a real human translator to be understood by a Polish-only speaking person (my parents). Of course, there are some flaws, such as the assumption of the male gender as an author. In Polish, all nouns are assigned one of three genders: feminine (yes, ladies first), masculine, and neutral. This is where corrections were needed. Additionally, since the example to be translated was a chapter of a software guide, it used many English terms in Polish grammatical form. My English teacher would faint at such a transgression, as we were always taught to use native expressions in place of foreign adoptions. But since technology vocabulary originated in English, the rest of the world must just deal with it. I can live with that. I will depend on ChatGPT as a writer from now on. Definitely thumbs up!

What about the perspective of a software user?

Well, this is where the results vary, and it simply depends on what you are looking for. I tested a question about authoring Adobe InDesign documents for PDF accessibility and compliance with Section 508 requirements in the USA. ChatGPT provided a very detailed, step-by-step description of how to proceed, but in responses regenerated multiple times it missed the most advanced method, where automated alt-text tags can be generated from image metadata using Object Styles. So again, AI helped, but I already needed to know what I could possibly do. Great help from a writer's perspective, but not so good for someone who is exploring more advanced software functions.

It is exciting to explore new platforms and technology. Like any new frontier, it brings excitement but also fear. And again, I am reminded that we as humans have not changed at all. Our toys changed, and we became more sophisticated and powerful, but we continue to need guidance on how to use yet another newly found power and knowledge. History is not a strong enough deterrent from misuse.

Summary

In conclusion, the interview with Urszula sheds light on the versatile applications of ChatGPT in the roles of a researcher, writer, and software user. Urszula highlights the efficiency and creativity enabled by ChatGPT, allowing her to tackle complex research problems, generate high-quality content, and even assist in software-related tasks. Through her experiences, we gain insights into the immense potential of ChatGPT as a powerful tool for augmenting productivity and innovation across various domains. As the technology continues to evolve, researchers and professionals can leverage ChatGPT to unlock new possibilities and achieve greater efficiency in their work.

Author Bio

Urszula is an expert graphic designer, software instructor, and consultant. As an instructor, she taught thousands of Adobe software users in government agencies and private corporations. Her easy-going and effective presentation style of complex features earned her many loyal clients. Her consulting work related to PDF included creating templates, editing files, and providing recommendations on the document production process. The final goal was to ensure that each publication met the highest publishing and accessibility standards, which was achieved successfully.
Urszula also designed training curricula and reference materials on proper document construction prior to conversion to PDF in relevant authoring software, such as MS Word, Adobe InDesign, and FrameMaker.

Author of the book: Adobe Acrobat Ninja

Democratizing AI with Stability AI’s Initiative, StableLM

Julian Melanson
22 Jun 2023
6 min read
Artificial Intelligence is becoming a cornerstone of modern technology, transforming our work, lives, and communication. However, its development has largely remained in the domain of a handful of tech giants, limiting accessibility for smaller developers or independent researchers. A potential shift in this paradigm is visible in Stability AI's initiative - StableLM, an open-source language model aspiring to democratize AI. Developed by Stability AI, StableLM leverages a colossal dataset, "The Pile," comprising 1.5 trillion tokens of content. It encompasses models with parameters from 3 billion to 175 billion, facilitating diverse research and commercial applications. Furthermore, this open-source language model employs an assortment of datasets from recent models like Alpaca, GPT4All, Dolly, ShareGPT, and HH for fine-tuning.

StableLM represents a paradigm shift towards a more inclusive and universally accessible AI technology. In a bid to challenge dominant AI players and foster innovation, Stability AI plans to launch StableChat, a chat model devised to compete with OpenAI's ChatGPT. The democratization of AI isn't a novel endeavor for Stability AI. Their earlier project, Stable Diffusion, an open-source alternative to OpenAI's DALL-E 2, rejuvenated the generative content market and spurred the conception of new business ideas. This accomplishment set the stage for the launch of StableLM in a market rife with competition.

Comparing StableLM with models like ChatGPT and LLaMA reveals unique advantages. While both ChatGPT and StableLM are designed for natural language processing (NLP) tasks, StableLM emphasizes transparency and accessibility. ChatGPT, developed by OpenAI, boasts a parameter count of 1 trillion, far exceeding StableLM's highest count of 175 billion. Furthermore, using ChatGPT entails costs, unlike the open-source StableLM. On the other hand, LLaMA, another open-source language model, relies on a different training dataset than StableLM's "The Pile." Regardless of the disparities, all three models present valuable alternatives for AI practitioners.

A potential partnership with AWS Bedrock, a platform providing a standard approach to building, training, and deploying machine learning models, could bolster StableLM's utility. Integrating StableLM with AWS Bedrock's infrastructure could allow developers to leverage StableLM's performance and AWS Bedrock's robust tools.

Enterprises favor open-source models like StableLM for their transparency, flexibility, and cost-effectiveness. These models promote rapid innovation, offer technology control, and lead to superior performance and targeted results. They are maintained by large developer communities, ensuring regular updates and continual innovation. StableLM demonstrates Stability AI's commitment to democratizing AI and fostering diversity in the AI market. It brings forth a multitude of options, refined applications, and tools for users. The core of StableLM's value proposition lies in its dedication to transparency, accessibility, and user support.

Following the 2022 public release of the Stable Diffusion model, Stability AI continued its mission to democratize AI with the introduction of the StableLM set of models. Trained on an experimental dataset three times larger than "The Pile," StableLM shows excellent performance in conversational and coding tasks, despite having fewer parameters than GPT-3. In addition to this, Stability AI has introduced research models optimized for academic research.
These models utilize data from recently released open-source conversational agent datasets such as Alpaca, GPT4All, Dolly, ShareGPT, and HH.

StableLM's vision revolves around fostering transparency, accessibility, and supportiveness. By focusing on enhancing AI's effectiveness in real-world tasks rather than chasing superhuman intelligence, Stability AI opens up innovative and practical applications of AI. This approach augments AI's potential to drive innovation, boost productivity, and expand economic prospects.

A Guide to Installing StableLM

StableLM can be installed using two different methods: one with a text generation web UI and the other with llama.cpp. Both of these methods provide a straightforward process for setting up StableLM on various operating systems, including Windows, Linux, and macOS.

Installing StableLM with Text Generation Web UI

The installation process with the one-click installer involves a simple three-step procedure that works across Windows, Linux, and macOS. First, download the zip file and extract it. Then double-click on "start". These zip files are provided directly by the web UI's developer. Following this, the model can be downloaded from Hugging Face, completing the installation process.

Installing StableLM with llama.cpp

The installation procedure with llama.cpp varies slightly between Windows and Linux/macOS. For Windows, start by downloading the latest release and extracting the zip file. Next, create a "models" folder inside the extracted folder. After this, download the model and place it inside the models folder. Lastly, run the following command, replacing 'path\to' with the actual directory path of your files:

path\to\main.exe -m models\7B\ggml-model-stablelm-tuned-alpha-7b-q4_0.bin -n 128

For Linux and macOS, the procedure involves a series of commands run through the Terminal. Start by installing the necessary libraries with:

python3 -m pip install torch numpy sentencepiece

Next, clone the llama.cpp repository from GitHub with 'git clone https://github.com/ggerganov/llama.cpp' and navigate to the llama.cpp directory with 'cd llama.cpp'. Compile the program with the 'make' command. Finally, download the pre-quantized model, or convert the original following the documentation provided on the llama.cpp GitHub page. To run StableLM, use the command:

./main -m ./models/7B/ggml-model-stablelm-tuned-alpha-7b-q4_0.bin -n 128

In sum, StableLM's introduction signifies a considerable leap in democratizing AI. Stability AI is at the forefront of a new AI era characterized by openness, scalability, and transparency, widening AI's economic benefits and making it more inclusive and accessible.

Summary

In this article, we have introduced StableLM, a new language model that is specifically designed to be more stable and robust than previous models. We have shown how to install StableLM using the Text Generation Web UI, as well as by compiling the llama.cpp code. We have also discussed some of the benefits of using StableLM, such as its improved stability and its ability to generate more creative and informative text. StableLM can be used for a variety of tasks, including text generation, translation, and summarization.

Overall, StableLM is a promising new language model that offers a number of advantages over previous models. If you are looking for a language model that is stable, robust, and creative, then StableLM is a good option to consider.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning.
Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Principles for Fine-tuning LLMs

Justin Chan - Lazy Programmer
22 Jun 2023
8 min read
Language Models (LMs) have transformed the field of natural language processing (NLP), exhibiting impressive language understanding and generation capabilities. Pre-trained Large Language Models (LLMs) trained on large corpora have become the backbone of various NLP applications. However, to fully unlock their potential, these models often require additional fine-tuning to adapt them to specific tasks or domains.

In this article, we demystify the dark art of fine-tuning LLMs and explore the advancements in techniques such as in-context learning, classic fine-tuning methods, parameter-efficient fine-tuning, and Reinforcement Learning with Human Feedback (RLHF). These approaches provide novel ways to enhance LLMs' performance, reduce computational requirements, and enable better adaptation to diverse applications.

Do you even need fine-tuning?

Fine-tuning generally involves updating the weights of a neural network to improve its performance on a given task. With the advent of large language models, researchers have discovered that LLMs are capable of performing tasks that they were not explicitly trained to do (for example, language translation). In other words, LLMs can be used to perform specific tasks without any need for fine-tuning at all.

In-Context Learning

In-context learning allows users to influence an LLM to perform a specific task by providing a few input-output examples in the prompt. As an example, such a prompt may be written as follows:

"Translate the following sentences from English to French:

English: <English Sentence 1>
French: <French Sentence 1>

English: <English Sentence 2>
French: <French Sentence 2>

English: <English Sentence 3>
French: <French Sentence 3>

English: <English Sentence You Want To Translate>
French:"

The LLM is able to pick up on the "pattern" and continue by completing the French translation.

Classic Fine-Tuning Approaches

"Classic" fine-tuning methods are widely used in the field of natural language processing (NLP) to adapt pre-trained language models for specific tasks. These methods include feature-based fine-tuning and full fine-tuning, each with its own advantages and considerations. I use the term "classic" in quotes because this is a fast-moving field, so what is old and what is new is relative. These approaches were only developed and widely utilized within the past decade.

Feature-based fine-tuning involves keeping the body of the neural network frozen while updating only the task-specific layers or additional classification layers attached to the pre-trained model. By freezing the lower layers of the network, which capture general linguistic features and representations, feature-based fine-tuning aims to preserve the knowledge learned during pre-training while allowing for task-specific adjustments. This approach is fast, especially when the features (a.k.a. "embeddings") are precomputed for each input and then re-used later, without the need to go through the full neural network on each pass. This approach is particularly useful when the target task has limited training data. By leveraging the pre-trained model's generalization capabilities, feature-based fine-tuning can effectively transfer the knowledge captured by the lower layers to the task-specific layers. This method reduces the risk of overfitting on limited data and provides a practical solution for adapting language models to new tasks.
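To make the feature-based approach concrete, here is a minimal sketch using the Hugging Face transformers library. The base encoder, toy batch, and hyperparameters are illustrative assumptions rather than recommendations from the article; the key point is that the pre-trained body is frozen and only the new classification head receives gradient updates:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # any pre-trained encoder could be used here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pre-trained body so its general linguistic knowledge is preserved.
for param in model.base_model.parameters():
    param.requires_grad = False

# Only the task-specific classification head remains trainable.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-3)

# One illustrative training step on a toy batch.
batch = tokenizer(["great movie", "terrible movie"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()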
On the other hand, full fine-tuning involves updating all parameters of the neural network, including both the lower layers responsible for general linguistic knowledge and the task-specific layers. Full fine-tuning allows the model to learn task-specific patterns and nuances by making adjustments to all parts of the network. It provides more flexibility in capturing specific task-related information and has been shown to lead to better performance.

The choice between feature-based fine-tuning and full fine-tuning depends on the specific requirements of the target task, the availability of training data, and the computational resources. Feature-based fine-tuning is a practical choice when dealing with limited data and a desire to leverage pre-trained representations, while full fine-tuning is beneficial for tasks with more data and distinct characteristics that warrant broader updates to the model's parameters. Both approaches have their merits and trade-offs, and the decision on which fine-tuning method to employ should be based on a careful analysis of the task requirements, available resources, and the desired balance between transfer learning and task-specific adaptation.

In my course, Data Science: Transformers for Natural Language Processing, we go into detail about how everyday users like yourself can fine-tune LLMs on modest hardware.

Modern Approaches: Parameter-Efficient Fine-Tuning

Parameter-efficient fine-tuning is a set of techniques that aim to optimize the performance of language models while minimizing the computational resources and time required for fine-tuning. Traditional fine-tuning approaches involve updating all parameters of the pre-trained model, which can be computationally expensive and impractical for large-scale models. Parameter-efficient fine-tuning strategies focus on identifying and updating only a small number of parameters.

One popular technique for parameter-efficient fine-tuning is Low-Rank Adaptation (LoRA). LoRA is a technique designed to improve the efficiency and scalability of large language models for various downstream tasks. It addresses the challenges of high computational costs and memory requirements associated with full fine-tuning. LoRA leverages low-rank matrices to reduce the number of trainable parameters while maintaining model performance. By freezing a shared pre-trained model and replacing specific matrices, LoRA enables efficient task-switching and significantly reduces storage requirements. This approach also lowers the hardware barrier to entry by up to 3 times when using adaptive optimizers. One of the key advantages of LoRA is its simplicity, allowing for seamless integration with other methods such as prefix-tuning. Additionally, LoRA's linear design enables the merging of trainable matrices with frozen weights during deployment, resulting in no inference latency compared to fully fine-tuned models. Empirical investigations have shown that LoRA outperforms prior approaches, including full fine-tuning, in terms of scalability, task performance, and model quality. Overall, LoRA offers a promising solution for the efficient and effective adaptation of large language models, making them more accessible and practical for a wide range of applications.

Reinforcement Learning with Human Feedback (RLHF)

Reinforcement Learning (RL) is a branch of machine learning that involves training agents to make decisions in an environment to maximize cumulative rewards.
Traditionally, RL relies on trial-and-error interactions with the environment to learn optimal policies. However, this process can be time-consuming, especially in complex environments. RLHF aims to accelerate this learning process by incorporating human feedback, reducing the exploration time required.

The combination of RL with Human Feedback involves a two-step process. First, an initial model is trained using supervised learning, with human experts providing annotated data or demonstrations. This process helps the model grasp the basics of the task. In the second step, RL takes over, using the initial model as a starting point and iteratively improving it through interactions with the environment and human evaluators. RLHF was an important step in fine-tuning today's most popular OpenAI models, including ChatGPT and GPT-4.

Fine-tuning LLMs with RLHF

Applying RLHF to fine-tune Language Models (LLMs) involves using human feedback to guide the RL process. The initial LLM is pre-trained on a large corpus of text to learn general language patterns. However, pre-training alone might not capture domain-specific nuances or fully align with a specific task's requirements. RLHF comes into play by using human feedback to refine the LLM's behavior and optimize it for specific tasks. Human feedback can be obtained in various ways, such as comparison-based ranking. In comparison-based ranking, evaluators rank different model responses based on their quality. The RL algorithm then uses these rankings to update the model's parameters and improve its performance.

Benefits of RLHF for LLM Fine-tuning

Enhanced Performance: RLHF enables LLMs to learn from human expertise and domain-specific knowledge, leading to improved performance on specific tasks. The iterative nature of RLHF allows the model to adapt and refine its behavior based on evaluators' feedback, resulting in more accurate and contextually appropriate responses.

Mitigating Bias and Safety Concerns: Human feedback in RLHF allows for the incorporation of ethical considerations and bias mitigation. Evaluators can guide the model toward generating unbiased and fair responses, helping to address concerns related to misinformation, hate speech, or other sensitive content.

Summary

In summary, the article explores various approaches to fine-tuning Language Models (LLMs). It delves into classical and modern approaches, highlighting their strengths and limitations in optimizing LLM performance. Additionally, the article provides a detailed examination of the Reinforcement Learning with Human Feedback approach, emphasizing its potential for enhancing LLMs by incorporating human knowledge and expertise. By understanding the different approaches to fine-tuning LLMs, researchers and practitioners can make informed decisions when selecting the most suitable method for their specific applications and objectives.

Author Bio

The Lazy Programmer is an AI/ML engineer focusing on deep learning with experience in data science, big data engineering, and full-stack development. With a background in computer engineering and specialization in ML, he holds two master's degrees in computer engineering and statistics with finance applications. His online advertising and digital media expertise include data science and big data.

He has created DL models for prediction and has experience in recommender systems using reinforcement learning and collaborative filtering.
He is a skilled instructor who has taught at universities including Columbia, NYU, Hunter College, and The New School. He is a web programmer with experience in Python, Ruby/Rails, PHP, and Angular.

Create an AI-Powered Coding Project Generator.

Luis Sobrecueva
22 Jun 2023
8 min read
Overview

Making a smart coding project generator can be a game-changer for developers. With the help of large language models (LLM), we can generate entire code projects from a user-provided prompt.

In this article, we are developing a Python program that utilizes OpenAI's GPT-3.5 to generate code projects and slide presentations based on user-provided prompts. The program is designed as a command-line interface (CLI) tool, which makes it easy to use and integrate into various workflows.

Image 1: Weather App

Features

Our project generator will have the following features:

Generates entire code projects based on user-provided prompts
Generates entire slide presentations based on user-provided prompts (watch a demo here)
Uses OpenAI's GPT-3.5 for code generation
Outputs to a local project directory

Example Usage

Our tool will be able to generate a code project from a user-provided prompt; for example, this line will create a snake game:

maiker "a snake game using just html and js"

We can then open the generated project in our browser:

open maiker-generated-project/index.html

Image 2: Generated Project

Implementation

To ensure a comprehensive understanding of the project, let's break down the process of creating the AI-powered coding project generator step by step:

1. Load environment variables: We use the `dotenv` package to load environment variables from a `.env` file. This file should contain your OpenAI API key.

from dotenv import load_dotenv
load_dotenv()

2. Set up OpenAI API client: We set up the OpenAI API client using the API key loaded from the environment variables.

import openai
openai.api_key = os.getenv("OPENAI_API_KEY")

3. Define the `generate_project` function: This function is responsible for generating code projects or slide presentations based on the user-provided prompt. Let's break down the function in more detail.

def generate_project(prompt: str, previous_response: str = "", type: str = "code") -> Dict[str, str]:

The function takes three arguments:

prompt: The user-provided prompt describing the project to be generated.
previous_response: A string containing the previously generated files, if any. This is used to avoid generating the same files again if it does more than one loop.
type: The type of project to generate, either "code" or "presentation".

Inside the function, we first create the system and user prompts based on the input type (code or presentation).

if type == "presentation":
    # ... (presentation-related prompts)
else:
    # ... (code-related prompts)

For code projects, we create a system prompt that describes the role of the API as a code generator and a user prompt that includes the project description and any previously generated files. For presentations, we create a system prompt that describes the role of the API as a reveal.js presentation generator and a user prompt that includes the presentation description.

Next, we call the OpenAI API to generate the code or presentation using the created system and user prompts.

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": system_prompt,
        },
        {
            "role": "user",
            "content": user_prompt,
        },
    ],
    temperature=0,
)

We use the openai.ChatCompletion.create method to send a request to the GPT-3.5 model. The `messages` parameter contains an array of two messages: the system message and the user message. The `temperature` parameter is set to 0 to encourage deterministic output.
Once we receive the response from the API, we extract the generated code from the response.

generated_code = completion.choices[0].message.content

Generating the files to disk: We then attempt to parse the generated code as a JSON object. If the parsing is successful, we return the parsed JSON object, which is a dictionary containing the generated files and their content. If the parsing fails, we raise an exception with an error message.

try:
    if generated_code:
        generated_code = json.loads(generated_code)
except json.JSONDecodeError as e:
    raise click.ClickException(
        f"Code generation failed. Please check your prompt and try again. Error: {str(e)}, generated_code: {generated_code}"
    )
return generated_code

This dictionary is then used by the `main` function to save the generated files to the specified output directory.

4. Define the `main` function: This function is the entry point of our CLI tool. It takes a project prompt, an output directory, and the type of project (code or presentation) as input. It then calls the `generate_project` function to generate the project and saves the generated files to the specified output directory.

def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)

Inside the main function, we ensure the output directory exists, generate the project, and save the generated files.

# ... (inside main function)
os.makedirs(output_dir, exist_ok=True)
for _loop in range(max_loops):
    generated_code = generate_project(prompt, ",".join(generated_files), type)
    for filename, contents in generated_code.items():
        # ... (rest of the code)

5. Create a Click command: We use the `click` package to create a command-line interface for our tool. We define the command, its arguments, and options using the `click.command`, `click.argument`, and `click.option` decorators.

import click

@click.command()
@click.argument("prompt")
@click.option(
    "--output-dir",
    "-o",
    default="./maiker-generated-project",
    help="The directory where the generated code files will be saved.",
)
@click.option('-t', '--type', required=False, type=click.Choice(['code', 'presentation']), default='code')
def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)

6. Run the CLI tool: Finally, we run the CLI tool by calling the `main` function when the script is executed.

if __name__ == "__main__":
    main()

In this article, we have used `... (rest of the code)` as a placeholder to keep the explanations concise and focused on specific parts of the code. The complete code for the AI-powered coding project generator can be found in the GitHub repository at the following link: https://github.com/lusob/maiker-cli

By visiting the repository, you can access the full source code, which includes all the necessary components and functions to create the CLI tool. You can clone or download the repository to your local machine, install the required dependencies, and start using the tool to generate code projects and slide presentations based on user-provided prompts.

Conclusion

With the current AI-powered coding project generator, you can quickly generate code projects and slide presentations based on user-provided prompts. By leveraging the power of OpenAI's GPT-3.5, you can save time and effort in creating projects and focus on other important aspects of your work. However, it is important to note that the complexity of the generated projects is currently limited due to the model's token limitations.
GPT-3.5 has a maximum token limit, which restricts the amount of information it can process and generate in a single API call. As a result, the generated projects might not be as comprehensive or sophisticated as desired for more complex applications. The good news is that with the continuous advancements in AI research and the development of new models with larger context windows (e.g., models with more than 100k context tokens), we can expect significant improvements in the capabilities of AI-powered code generators. These advancements will enable the generation of more complex and sophisticated projects, opening up new possibilities for developers and businesses alike.

Author Bio

Luis Sobrecueva is a software engineer with many years of experience working with a wide range of different technologies in various operating systems, databases, and frameworks. He began his professional career developing software as a research fellow in the engineering projects area at the University of Oviedo. He continued in a private company developing low-level (C/C++) database engines and visual development environments to later jump into the world of web development, where he met Python and discovered his passion for Machine Learning, applying it to various large-scale projects, such as creating and deploying a recommender for a job board with several million users. It was also at that time when he began to contribute to open source deep learning projects and to participate in machine learning competitions, and when he took several ML courses obtaining various certifications, highlighting a MicroMasters Program in Statistics and Data Science at MIT and a Udacity Deep Learning nano degree. He currently works as a Data Engineer at a ride-hailing company called Cabify, but continues to develop his career as an ML engineer by consulting and contributing to open-source projects such as OpenAI and Autokeras.

Author of the book: Automated Machine Learning with AutoKeras

Creating Stunning Images from Text Prompt using Craiyon

Julian Melanson
21 Jun 2023
6 min read
In an era marked by rapid technological progress, Artificial Intelligence continues to permeate various fields, fostering innovation and transforming paradigms. One area that has notably experienced a surge of AI integration is the world of digital art. A range of AI-powered websites and services have emerged, enabling users to transform text descriptions into images, artwork, and drawings. Among these innovative platforms, Craiyon, formerly known as DALL-E mini, stands out as a compelling tool that transforms the artistic process through its unique text-to-image generation capabilities.

Developed by Boris Dayma, Craiyon was born out of an aspiration to create a free, accessible AI tool that could convert textual descriptions into corresponding visuals. The concept of Craiyon was not conceived in isolation; rather, it emerged as a collaborative effort, with contributions from the expansive open-source community playing a significant role in the evolution of its capabilities.

The AI behind Craiyon leverages the computing power of Google's TPU Research Cloud (TRC). It undergoes rigorous training that imbues it with the ability to generate novel images based on user-provided textual inputs. In addition, Craiyon houses an expansive library of existing images that can further assist users in refining their queries.

While the abilities of such an image generation model are undeniably impressive, certain limitations do exist. For instance, the model sometimes generates unexpected or stereotypical imagery, reflecting inherent biases in the datasets it was trained on. Notwithstanding these constraints, Craiyon's innovative technology holds substantial promise for the future of digital art.

Image 1: Craiyon home page

Getting Started with Craiyon

If you wish to test the waters with Craiyon's AI, the following steps can guide you through the process:

Accessing Craiyon: First, navigate to the Craiyon website: https://www.craiyon.com. While you have the option to create a free account, you might want to familiarize yourself with the platform before doing so.

Describing Your Image: Upon landing on the site, you will find a space where you can type a description of the image you wish to generate. To refine your request, consider specifying elements you want to be excluded from the image. Additionally, decide whether you want your image to resemble a piece of art, a drawing, or a photo.

Initiating the Generation Process: Once you are satisfied with your description, click the "Draw" button. The generation process might take a moment, but it will eventually yield multiple images that match your description.

Selecting and Improving Your Image: Choose an image that catches your eye. For a better viewing experience, you can upscale the resolution and quality of the image. If you wish to save the image, use the "Screenshot" button.

Revising Your Prompt: If you are not satisfied with the generated images, consider revising your prompt. Craiyon might suggest alternative prompts to help you obtain improved results.

Viewing and Saving Your Images: If you have created a free account, you can save the images you like by clicking the heart icon. You can subsequently access these saved images through the "My Collection" option under the "Account" button.

Use Cases

Creating art and illustrations: Craiyon can be used to create realistic and creative illustrations, paintings, and other artworks.
This can be a great way for artists to explore new ideas and techniques, or to create digital artworks that would be difficult or time-consuming to create by hand.

Generating marketing materials: Craiyon can be used to create eye-catching images and graphics for marketing campaigns. This could include social media posts, website banners, or product illustrations.

Designing products: Craiyon can be used to generate designs for products, such as clothing, furniture, or toys. This can be a great way to get feedback on new product ideas, or to create prototypes of products before they are manufactured.

Educating and communicating: Craiyon can be used to create educational and informative images. This could include diagrams, charts, or infographics. Craiyon can also be used to create images that communicate complex ideas in a more accessible way.

Personalizing experiences: Craiyon can be used to personalize experiences, such as creating custom wallpapers, avatars, or greeting cards. This can be a great way to add a touch of individuality to your devices or communication.

Craiyon is still under development, but it has the potential to be a powerful tool for a variety of uses. As the technology continues to improve, we can expect to see even more creative and innovative use cases for Craiyon in the future.

Here are some additional use cases for Craiyon:

Generating memes: Craiyon can be used to generate memes, which are humorous images that are often shared online. This can be a fun way to express yourself or to join in on a current meme trend.

Creating custom content: Craiyon can be used to create custom content for your website, blog, or social media channels. This could include images, graphics, or even videos.

Experimenting with creative ideas: Craiyon can be used to experiment with creative ideas. If you have an idea for an image or illustration, you can use Craiyon to see how it would look. This can be a great way to get feedback on your ideas or to explore new ways of thinking about art.

Summary

The emergence of AI tools like Craiyon marks a pivotal moment in the intersection of art and technology. While it's crucial to acknowledge the limitations of such tools, their potential to democratize artistic expression and generate creative inspiration is indeed remarkable. As Craiyon and similar platforms continue to evolve, we can look forward to a future where the barriers between language and visual expression are further blurred, opening up new avenues for creativity and innovation.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

article-image-chatgpt-on-your-hardware-gpt4all
Valentina Alto
20 Jun 2023
7 min read
Save for later

ChatGPT on Your Hardware: GPT4All

We have all become familiar with conversational UIs powered by Large Language Models: ChatGPT has been the first and most powerful example of how LLMs can boost our daily productivity. To be this accurate, LLMs are, by design, "large", meaning that they are made of billions of parameters; hence, they are hosted on powerful infrastructure, typically in the public cloud (for example, OpenAI's models, including ChatGPT, are hosted in Microsoft Azure). As such, those models are only accessible with an internet connection.
But what if you could run those powerful models on your local PC, having a ChatGPT-like experience?
Introducing GPT4All
GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade CPUs – no GPU is required. A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software, which is optimized to host models of between 7 and 13 billion parameters.
To start working with the GPT4All Desktop app, you can download it from the official website here: https://gpt4all.io/index.html.
At the moment, there are three main LLM families you can use within GPT4All:
LLaMA - a collection of foundation language models ranging from 7B to 65B parameters. They were developed by Meta AI (part of Meta, Facebook's parent company) and trained on trillions of tokens from 20 languages that use Latin or Cyrillic scripts. LLaMA can generate human-like conversations and outperforms GPT-3 on most benchmarks despite being 10x smaller. LLaMA is designed to run on less computing power and to be versatile and adaptable to many different use cases. However, LLaMA also faces challenges such as the bias, toxicity, and hallucinations common to large language models. Meta AI has released all the LLaMA models to the research community for open science.
GPT-J - an open-source artificial intelligence language model developed by EleutherAI. It is a GPT-2-like causal language model trained on the Pile dataset, an open-source 825-gigabyte language modeling dataset that is split into 22 smaller datasets. GPT-J has 6 billion parameters and can generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more. GPT-J performs very similarly to similarly sized GPT-3 versions from OpenAI on various zero-shot downstream tasks and can even outperform them on code generation tasks. GPT-J is designed to run on less computing power and to be versatile and adaptable to many different use cases.
MPT - a series of open-source, commercially usable large language models developed by MosaicML. MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context-length limits by replacing positional embeddings with Attention with Linear Biases (ALiBi). MPT-7B can handle extremely long inputs thanks to ALiBi (up to 84k tokens vs. 2k-4k for other open-source models).
MPT-7B also has several finetuned variants for different tasks, such as story writing, instruction following, and dialogue generation.
You can download various versions of these model families directly within the GPT4All app:
Image 1: Download section in GPT4All
Image 2: Download Path
In my case, I downloaded a GPT-J model with 6 billion parameters. You can select the model you want to use from the upper menu list in the application. Once you have selected the model, you can use it via the well-known ChatGPT-style UI, with the difference that it is now running on your local PC:
Image 3: GPT4All response
As you can see, the user experience is almost identical to the well-known ChatGPT, with the difference that we are running it locally and with different underlying LLMs.
Chatting with your own data
Another great thing about GPT4All is its integration with your local docs via plugins (currently in beta). To set this up, go to Settings (in the upper bar menu) and select the LocalDocs plugin. Here, you can browse to the folder path you want to connect and then "attach" it to the model's knowledge base via the database icon in the upper right. In this example, I used the SQL licensing documentation in PDF format.
Image 4: SQL documentation
In this case, the model answers by also (but not only) taking the attached documentation into consideration, and it quotes the documentation whenever the answer is based on it:
Image 5: Automatically generated computer description
Image 6: Computer description with medium confidence
The technique used to store and index the knowledge provided by our document is called Retrieval-Augmented Generation (RAG), a type of language generation model that combines two types of memory:
Pre-trained parametric memory – the knowledge stored in the model's parameters, derived from the training dataset;
Non-parametric memory – the knowledge derived from the attached documents, stored in a vector database containing the documents' embeddings.
Finally, the LocalDocs plugin supports a variety of data formats, including txt, docx, pdf, html, and xlsx. For a comprehensive list of the supported formats, you can visit the page https://docs.gpt4all.io/gpt4all_chat.html#localdocs-capabilities.
Using GPT4All with an API
In addition to the Desktop app mode, GPT4All offers two additional ways of consumption:
Server mode - once server mode is enabled in the settings of the Desktop app, you can start calling GPT4All through an OpenAI-compatible API at localhost:4891, embedding the following code in your app:

import openai

# Point the OpenAI client at the local GPT4All server instead of the OpenAI cloud
openai.api_base = "http://localhost:4891/v1"
openai.api_key = "not needed for a local LLM"

prompt = "What is AI?"

# model = "gpt-3.5-turbo"
# model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request against the local server
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False,
)

Python API - GPT4All also comes with a lightweight Python SDK, which you can easily install via pip install gpt4all. A sample notebook on how to use this SDK can be found here: https://colab.research.google.com/drive/1QRFHV5lj1Kb7_tGZZGZ-E6BfX6izpeMI?usp=sharing
Conclusions
Running LLMs on local hardware opens up a new spectrum of possibilities, especially when we think about disconnected scenarios.
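To give a sense of how little code the Python SDK route requires, here is a minimal sketch of using the package mentioned above. This is a sketch under assumptions rather than verified output: the GPT4All class, its generate method, and the model filename are taken from the SDK's documentation and may differ slightly across versions of the package.

# Minimal sketch of the GPT4All Python SDK (install with: pip install gpt4all).
# The class and method names follow the SDK documentation; the model filename
# below is illustrative and may differ depending on the SDK version installed.
from gpt4all import GPT4All

# Load a local model; if the weights are not already on disk, the SDK downloads them
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Generate a completion entirely on the local CPU - no API key or internet call needed
response = model.generate("What is AI?", max_tokens=50)
print(response)

As with the server mode example, everything here runs on the local machine once the model file is available.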
Plus, the ability to chat with local documents through an easy-to-use interface adds custom non-parametric memory to the model, so that we can already use it as a sort of copilot.
Even though it is still in an initial phase, this ecosystem is paving the way for interesting new scenarios.
References
https://docs.gpt4all.io/
GPT4All Chat UI - GPT4All Documentation
[2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (arxiv.org)
Author Bio
Valentina Alto graduated in 2021 in Data Science. Since 2020, she has been working at Microsoft as an Azure Solution Specialist and, since 2022, she has focused on Data & AI workloads within the Manufacturing and Pharmaceutical industry. She has been working on customers' projects closely with system integrators to deploy cloud architectures with a focus on data lakehouse and DWH, data integration and engineering, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. She holds a BSc in Finance and an MSc in Data Science from Bocconi University, Milan, Italy. Since her academic journey, she has been writing tech articles about statistics, machine learning, deep learning, and AI in various publications. She has also written a book about the fundamentals of machine learning with Python.
LinkedIn | Medium
article-image-the-transformative-potential-of-google-bard-in-healthcare
Julian Melanson
20 Jun 2023
6 min read
Save for later

The Transformative Potential of Google Bard in Healthcare

The integration of artificial intelligence and cloud computing in healthcare has revolutionized various aspects of patient care. Telemedicine, in particular, has emerged as a valuable resource for remote care, and the Google Bard platform is at the forefront of making telemedicine more accessible and efficient. With its comprehensive suite of services, secure connections, and advanced AI capabilities, Google Bard is transforming healthcare delivery. This article explores the applications of AI in healthcare, the benefits of Google Bard in expanding telemedicine, its potential to reduce costs and improve access to care, and its impact on clinical decision-making and healthcare quality.
AI in Healthcare
Artificial intelligence has made significant strides in revolutionizing healthcare. It has proven to be instrumental in disease diagnosis, treatment protocol development, new drug discovery, personalized medicine, and healthcare analytics. AI algorithms can analyze vast amounts of medical data, detect patterns, and identify potential risks or effective treatments that may go unnoticed by human experts. This technology holds immense promise for improving patient outcomes and enhancing healthcare delivery.
Understanding Google Bard
Google's AI chat service, Google Bard, now employs its latest large language model, PaLM 2, launched at Google I/O 2023. As a progression from the original PaLM released in April 2022, it elevates Bard's efficacy and performance. Initially, Bard operated on a scaled-down version of LaMDA for broader user accessibility with minimal computational needs. Leveraging Google's extensive database, Bard aims to provide precise text responses to a variety of inquiries and is poised to become a dominant player in the AI chatbot arena. Despite launching later than ChatGPT, Google's broader data access and Bard's lower computational requirements potentially position it as a more efficient and user-friendly contender.
Expanding Telemedicine with Google Bard
Google Bard can play a pivotal role in expanding telemedicine and remote care. Its secure connections enable doctors to diagnose and treat patients without being physically present. Additionally, it provides easy access to patient records and medical history, and it facilitates seamless communication through appointment scheduling, messaging, and sharing medical images. These features empower doctors to provide better-informed care and enhance patient engagement.
The Impact on Healthcare Efficiency and Patient Outcomes
Google Bard's integration of AI and machine learning capabilities elevates healthcare efficiency and patient outcomes. The platform's AI system quickly and accurately analyzes patient records, identifies patterns and trends, and aids medical professionals in developing effective treatment plans.
For example, here is a prompt based on a sample HPI (History of Present Illness) text:
Image 1: Diagnosis from this clinical HPI
Here is Google Bard's response:
Image 2: Main problems of the clinical HPI
Google Bard effectively identified key diagnoses and the primary issue from the HPI sample text.
Beyond simply listing diagnoses, it provided comprehensive details on the patient's primary concerns, helping healthcare professionals grasp the depth and complexity of the condition.
When prompted further, Google Bard also suggested a treatment strategy:
Image 3: Suggesting a treatment plan
This can ultimately assist medical practitioners in developing appropriate interventions.
Google Bard also serves as a safety net by detecting potential medical errors and alerting healthcare providers. Furthermore, Google Bard's user-friendly interface expedites access to patient records, enabling faster and more effective care delivery. The platform also grants medical professionals access to the latest medical research and clinical trials, ensuring they stay up to date with advancements in healthcare. Ultimately, Google Bard's secure platform and powerful analytics tools contribute to better patient outcomes and informed decision-making.
Reducing Healthcare Costs and Improving Access to Care
One of the key advantages of Google Bard lies in its potential to reduce healthcare costs and improve access to care. The platform's AI-based technology identifies cost-efficient treatment options, optimizes resource allocation, and enhances care coordination. By reducing wait times and unnecessary visits, Google Bard minimizes missed appointments, repeat visits, and missed diagnoses, resulting in lower costs and improved access to care. Additionally, the platform's comprehensive view of patient health, derived from aggregated data, enables more informed treatment decisions and fewer misdiagnoses. This integrated approach ensures better care outcomes while controlling costs.
Supporting Clinical Decision-Making and Healthcare Quality
Google Bard's launch signifies a milestone in the use of AI to improve healthcare. The platform provides healthcare providers with a suite of computational tools, empowering them to make more informed decisions and enhance the quality of care. Its ability to quickly analyze vast amounts of patient data enables the identification of risk factors, potential diagnoses, and treatment recommendations. Moreover, Google Bard supports collaboration and comparisons of treatment options among healthcare teams. By leveraging this technology, healthcare professionals can provide personalized care plans, improving outcomes and reducing medical errors. The platform's data-driven insights and analytics also support research and development efforts, allowing researchers to identify trends and patterns and develop innovative treatments.
Google Bard, with its AI-driven capabilities and secure cloud-based platform, holds immense potential to revolutionize healthcare delivery. By enhancing efficiency, accessibility, and patient outcomes, it is poised to make a significant impact on the healthcare industry. The integration of AI, machine learning, and cloud computing in telemedicine enables more accurate diagnoses, faster treatments, and improved care coordination. Moreover, Google Bard's ability to reduce healthcare costs and improve access to care reinforces its value as a transformative tool. As the platform continues to evolve, it promises to shape the future of healthcare, empowering medical professionals and benefiting patients worldwide.
Summary
Google Bard is revolutionizing healthcare with its practical applications. For instance, it enhances healthcare efficiency and improves patient outcomes through streamlined data management and analysis.
By reducing administrative burdens and optimizing workflows, it lowers healthcare costs and ensures better access to care. Furthermore, it supports clinical decision-making by providing real-time insights, aiding healthcare professionals in delivering higher-quality care. Overall, Google Bard's transformative technology is reshaping healthcare, benefiting patients, providers, and the industry as a whole.
Author Bio
Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom and our interactive lessons help students reinforce their learning with hands-on activities.
No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

article-image-use-bard-to-implement-data-science-projects
Darren Broemmer
20 Jun 2023
7 min read
Save for later

Use Bard to Implement Data Science Projects

Bard is a large language model (LLM) from Google AI, trained on a massive dataset of text and code. Bard can be used to generate Python code for data science projects. This can be extremely helpful for data scientists who want to save time on coding or are unfamiliar with Python. It also empowers those of us who are not full-time data scientists but have an interest in leveraging machine learning (ML) technologies.
The first step is to define the problem you are trying to solve. In this article, we will use Bard to create a binary text classifier. It will take a news story as input and classify it as either fake or real. Given a problem to solve, you can brainstorm solutions. If you are familiar with machine learning technologies, you can do this yourself. Alternatively, you can ask Bard for help in finding an appropriate algorithm that meets your requirements. The classification of text documents often uses term frequency techniques. We don't need to know more than that at this point, as we can have Bard help us with the details of the implementation.
The overall design of your project could also involve feature engineering and visualization methods. As in most software engineering efforts, you will likely need to iterate on the design and implementation. However, Bard can help you do this much faster.
Reading and parsing the training data
All of the code from this article can be found on GitHub. Bard will guide you, but to run this code, you will need to install a few packages using the following commands.

python -m pip install pandas
python -m pip install scikit-learn

To train our model, we can use the news.csv data set within this project found here, originally sourced from a Data Flair training exercise. It contains the title and text of almost 8,000 news articles labeled as REAL or FAKE.
To get started, Bard can help us write code to parse and read this data file. Pandas is a popular open-source data analysis and manipulation tool for Python. We can prompt Bard to use this library to read the file.
Image 1: Using Pandas to read the file
Running the code shows the format of the CSV file and the first few data rows, just as Bard described. It has an unnamed article id in the first column, followed by the title, text, and classification label.

broemmerd$ python test.py
   Unnamed: 0                                              title                                               text label
0        8476                       You Can Smell Hillary’s Fear  Daniel Greenfield, a Shillman Journalism Fello...  FAKE
1       10294  Watch The Exact Moment Paul Ryan Committed Pol...  Google Pinterest Digg Linkedin Reddit Stumbleu...  FAKE
2        3608        Kerry to go to Paris in gesture of sympathy  U.S. Secretary of State John F. Kerry said Mon...  REAL
3       10142  Bernie supporters on Twitter erupt in anger ag...  — Kaydee King (@KaydeeKing) November 9, 2016 T...  FAKE
4         875   The Battle of New York: Why This Primary Matters  It's primary day in New York and front-runners...  REAL

Training our machine learning model
Now that we can read and understand our training data, we can prompt Bard to write code to train an ML model using this data. Our prompt is detailed regarding the input columns in the file used for training. However, it specifies a general ML technique we believe is applicable to the solution.

The text column in the news.csv contains a string with the content from a news article. The label column contains a classifier label of either REAL or FAKE.
Modify the Python code to train a machine-learning model using term frequency based on these two columns of data.

Image 2: Bard for training ML models
We can now train our model. The output of this code is shown below:

broemmerd$ python test.py
Accuracy: 0.9521704814522494

We have our model working. Now we just need a function that will apply it to a given input text. We use the following prompt:

Modify this code to include a function called classify_news that takes a text string as input and returns the classifier, either REAL or FAKE.

Bard generates the following code for this function. Note that it also refactored the previous code to include the use of the TfidfVectorizer in order to support this function.
Image 3: Including the classify_news function
Testing the classifier
To test the classifier with a fake story, we will use an Onion article entitled "Chill Juror Good With Whatever Group Wants To Do For Verdict." The Onion is a satirical news website known for its humorous and fictional content. Articles in The Onion are intentionally crafted to appear as genuine news stories, but they contain fictional, absurd elements for comedic purposes.
Our real news story is a USA Today article entitled "House blocks push to Censure Adam Schiff for alleging collusion between Donald Trump and Russia."
Here is the code that reads the two articles and uses our new function to classify each one. The results are shown below.

with open("article_the_onion.txt", "r") as f:
    article_text = f.read()
    print("The Onion article: " + classify_news(article_text))

with open("article_usa_today.txt", "r") as f:
    article_text = f.read()
    print("USA Today article: " + classify_news(article_text))

broemmerd$ python news.py
Accuracy: 0.9521704814522494
The Onion article: FAKE
USA Today article: REAL

Our classifier worked well on these two test cases.
Bard can be a helpful tool for data scientists who want to save time on coding or who are not familiar with Python. By following a process similar to the one outlined above, you can use Bard to generate Python code for data science projects. A consolidated sketch of this kind of pipeline is included at the end of this article.
Additional guidance on coding with Bard
When using Bard to generate Python code for data science projects, be sure to use clear and concise prompts. Provide the necessary detail regarding the inputs and the desired outputs. Where possible, use specific examples. This can help Bard generate more accurate code. Be patient and expect to go through a few iterations until you get the desired result. Test the generated code at each step in the process. It will be difficult to determine the cause of errors if you wait until the end to start testing.
Once you get familiar with the process, you can use Bard to generate Python code that can help you solve data science problems more quickly and easily.
Author Bio
Darren Broemmer is an author and software engineer with extensive experience in Big Tech and Fortune 500 companies. He writes on topics at the intersection of technology, science, and innovation.
LinkedIn
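For readers who want to see the full pipeline in one place, here is a minimal sketch of one common way to implement it with pandas and scikit-learn. This is a sketch under assumptions rather than a reproduction of Bard's output: the choice of LogisticRegression and the TfidfVectorizer settings are illustrative, and the code Bard generates for you may differ.

# A minimal sketch of the kind of fake-news classifier described in this article.
# The classifier and vectorizer settings are illustrative; Bard's generated code may differ.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load the labeled news data (title, text, and label columns)
df = pd.read_csv("news.csv")

# Split the article text and labels into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# Convert the raw text into term-frequency (TF-IDF) features
vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
tfidf_train = vectorizer.fit_transform(X_train)
tfidf_test = vectorizer.transform(X_test)

# Train a simple linear classifier on the TF-IDF features and report accuracy
clf = LogisticRegression(max_iter=1000)
clf.fit(tfidf_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(tfidf_test)))

def classify_news(text):
    # Return the predicted label (REAL or FAKE) for a single article's text
    return clf.predict(vectorizer.transform([text]))[0]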