
How-To Tutorials

7019 Articles

Getting Started with Microsoft Fabric

Arshad Ali, Bradley Schacht
11 Sep 2023
7 min read
This article is an excerpt from the book Learn Microsoft Fabric by Arshad Ali and Bradley Schacht, a step-by-step guide to harnessing the power of Microsoft Fabric in developing data analytics solutions for various use cases.

Introduction

In this article, you will learn how to enable Microsoft Fabric in an existing Power BI tenant, or create a new Fabric tenant if you don't have one already. Next, you will create your first Fabric workspace, which you will use to carry out all subsequent chapters' exercises.

Enabling Microsoft Fabric

Microsoft Fabric shares the same tenant as Power BI. If you already have a Power BI or Microsoft Fabric tenant, you have the two options described next to enable Fabric in that tenant (more at https://learn.microsoft.com/en-us/fabric/admin/fabric-switch). For each of these options, depending on the configuration you select, Microsoft Fabric becomes available either for everyone in the tenant or for a selected group of users.

Note: If you are new to Power BI or your organization doesn't have a Power BI/Fabric tenant yet, you can set one up and use a Fabric trial by visiting https://aka.ms/try-fabric to sign up for a Power BI free license. Afterward, you can start the Fabric trial, as described later in this section under trial capacity. The Fabric trial includes access to the Fabric product experiences and the resources to create and host Fabric items. As of this writing, the Fabric trial license allows you to work with Fabric free for 60 days. At that point, you will need to provision Fabric capacity to continue using Microsoft Fabric.

Enable Fabric at the tenant level: If you have admin privileges, you can access the Admin center from the Settings menu in the upper right corner of the Power BI service. From here, you enable Fabric on the Tenant settings page. When you enable Microsoft Fabric using the tenant setting, users can create Fabric items in that tenant. To do so, navigate to the Tenant settings page in the Admin portal, expand the Users can create Fabric items setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.1 – Microsoft Fabric - tenant settings

Enable Fabric at the capacity level: While it is recommended to enable Microsoft Fabric for the entire organization at the tenant level, there are times when you would like it to be enabled only for a certain group of people at the capacity level. For that, in the tenant's Admin portal, navigate to the Capacity settings page, identify and select the capacity for which you want Microsoft Fabric to be enabled, and then click the Delegate tenant settings tab at the top. Then, under the Microsoft Fabric section of this page, expand the Users can create Fabric items setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.2 – Microsoft Fabric - capacity settings

Both of the scenarios above assume you already have paid capacity available. If you don't have it yet, you can use the Fabric trial (more at https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial) to create Fabric items for a certain duration if you want to learn or test the functionalities of Microsoft Fabric. For that, open the Fabric homepage (https://app.fabric.microsoft.com/home) and select Account Manager. In the Account Manager, click Start Trial and follow the wizard instructions to enable the Fabric trial with trial capacity.
Note: To let you learn and try out different capabilities in Fabric, Microsoft provides free trial capacity. With this trial capacity, you get full access to all the Fabric workloads and features, including the ability to create Fabric items and collaborate with others, as well as OneLake storage of up to 1 TB. However, trial capacity is intended for trial and testing only, not for production usage.

Checking your access to Microsoft Fabric

To validate whether Fabric is enabled and you have access to it in your organization's tenant, sign in to Power BI and look for the Power BI icon at the bottom left of the screen. If you see the Power BI icon, select it to see the experiences available within Fabric.

Figure 2.3 – Microsoft Fabric - workloads switcher

If the icon is present, you can click the Microsoft Fabric link at the top of the screen (as shown in Figure 2.3) to switch to the Fabric experience, or click the individual experience you want to switch to.

Figure 2.4 – Microsoft Fabric - home page

However, if the icon isn't present, Fabric is not available to you. In that case, follow the steps mentioned in the previous section (or work with your Power BI or Fabric admin) to enable it.

Creating your first Fabric-enabled workspace

Once you have confirmed that Fabric is enabled in your tenant and you have access to it, the next step is to create your Fabric workspace. You can think of a Fabric workspace as a logical container that will hold all your items, such as lakehouses, warehouses, notebooks, and pipelines. Follow these steps to create your first Fabric workspace:

1. Sign in to Power BI (https://app.powerbi.com/).
2. Select Workspaces | + New workspace.

Figure 2.5 – Create a new workspace

3. Fill out the Create a workspace form as follows:
   - Name: Enter Learn Microsoft Fabric and add some characters for uniqueness.
   - Description: Optionally, enter a description for the workspace.

Figure 2.6 – Create new workspace - details

   - Advanced: Select Fabric capacity under License mode and then choose a capacity you have access to. If you don't have one, you can start a trial license, as described earlier, and use it here.

4. Select Apply. The workspace will be created and opened.
5. You can click Workspaces again and then search for your workspace by typing its name in the search box. You can also pin the selected workspace so that it always appears at the top.

Figure 2.7 – Search for a workspace

6. Clicking the name of the workspace opens the workspace, and its link becomes available in the left-side navigation bar, allowing you to switch from one item to another quickly. Since we haven't created anything yet, there is nothing here. You can click +New to start creating Fabric items.

Figure 2.8 – Switch to a workspace

With a Microsoft Fabric workspace set up, let's review the different workloads available.

Conclusion

In this article, we covered the basics of Microsoft Fabric in Power BI. You can enable Fabric at the tenant or capacity level, with a trial option available for newcomers. To check your access, look for the Power BI icon. If it's present, you're ready to use Fabric; if not, follow the setup steps. Create a Fabric workspace to manage items like lakehouses and pipelines. This article offers a quick guide to kickstart your journey with Microsoft Fabric in Power BI.

Author Bio

Arshad Ali is a Principal Program Manager on the Microsoft Fabric product team based in Redmond, WA. As part of his role at Microsoft, he works with strategic customers, partners, and ISVs to help them adopt Microsoft Fabric in solving their complex data analytics problems and driving business insights, and he helps shape the future of Microsoft Fabric.

Bradley Schacht is a Principal Program Manager on the Microsoft Fabric product team based in Jacksonville, FL. As part of his role at Microsoft, Bradley works directly with customers to solve some of their most complex data warehousing problems and helps shape the future of the Microsoft Fabric cloud service.


Automated Diagram Creation with ChatGPT

Jakov Semenski
11 Sep 2023
8 min read
Introduction

Imagine constructing a house without a blueprint. Sounds chaotic, right?

Similarly, diving into software coding without a UML or sequence diagram is like building that house blindly. Just as architects rely on blueprints, developers can rely on diagrams to build a clear project architecture that guides them during the coding phase. It paints a clear roadmap of what needs to be developed. It ensures everyone is on the same page. It saves time during the execution phase.

Unfortunately, this phase is often overlooked. It takes time, and you quickly get overwhelmed with so many tools and ways to sketch out diagrams. Now, imagine you could quickly draft diagrams, even during team meetings, so you can visualize complex workflows on the fly. This is what we will cover today. I will share a step-by-step guide on the little-known secret of automating diagram creation with ChatGPT (and it can all be done with the free version).

Here's how, step by step:

Step 1: Create a sample application

The first thing we need is an application. For this example, I will use the following prompt to generate a Java REST application for borrowing technical books:

Create a Java rest application based on Spring Boot that allows borrowing technical books to users. Admins have a dedicated role and are able to add and remove books. Users and admins can log in by using Google OAuth as an authentication provider. As the first step, show me the directory structure.

Since I am not interested in the code but still curious about how the project would look, I have added: "As the first step, show me the directory structure." ChatGPT will also give an explanation for each class. If this is something you are not interested in, just add the word "only": "As the first step, show me only the directory structure." Using ASCII art, ChatGPT will present a directory structure.

Hint: Sometimes ChatGPT will "glitch", meaning it won't give you the full result, so just hit "continue" to get the entire output.

Step 2: Quick and dirty way to create a diagram in ChatGPT

Now, it's time to visualize the UML diagram. Ask ChatGPT to create a UML and sequence diagram.

Prompt: Create a UML and Sequence diagram using ASCII

You will get an ASCII representation. Do you see the problem? First, there is the limitation of using text to visualize. Second, good luck with maintaining these.

Step 3: Using the right tools to create UML

To take things further, let's use the proper tools. There are many tools on the market you can use to create UML.
However, not all of them can be represented as text. PlantUML and Mermaid are both tools that use specific diagram notations from which we can render our final diagram. Personally, I prefer Mermaid as it is simpler to use.

Now we can use the same prompt, but this time ask for output for Mermaid:

Using generated directory structure, generate UML diagram for Mermaid tool

Now copy the output to the Mermaid live editor: https://mermaid.live/

classDiagram
    class Application {
        + main(args: String[]): void
    }
    class WebSecurityConfig {
        <<configuration>>
    }
    class OAuth2Config {
        <<configuration>>
    }
    class BookController {
        + addBook(bookDTO: BookDTO): ResponseEntity<?>
        + removeBook(bookId: Long): ResponseEntity<?>
    }
    class UserController {
        + addUser(userDTO: UserDTO): ResponseEntity<?>
        + removeUser(userId: Long): ResponseEntity<?>
    }
    class AuthController {
        + loginUser(): ResponseEntity<?>
    }
    class Book {
        - id: Long
        - title: String
        - author: String
        - available: boolean
    }
    class User {
        - id: Long
        - username: String
        - password: String
        - email: String
        - roles: Set<UserRole>
    }
    class UserRole {
        <<enumeration>>
        - USER
        - ADMIN
    }
    class BookRepository {
        <<interface>>
    }
    class UserRepository {
        <<interface>>
    }
    class BookService {
        + addBook(bookDTO: BookDTO): Book
        + removeBook(bookId: Long): void
    }
    class UserService {
        + addUser(userDTO: UserDTO): User
        + removeUser(userId: Long): void
    }
    class UserDetailsServiceImpl {
        <<service>>
    }
    class BookDTO {
        - title: String
        - author: String
    }
    class UserDTO {
        - username: String
        - password: String
        - email: String
    }
    Application --|> WebSecurityConfig
    Application --|> OAuth2Config
    BookController --|> Book
    UserController --|> User
    AuthController --|> User
    Book -- BookRepository
    User -- UserRepository
    BookController --|> BookService
    UserController --|> UserService
    UserService -- UserDetailsServiceImpl
    BookController --> BookDTO
    UserController --> UserDTO

And here is the final result.

Step 4: Create a sequence diagram

A sequence diagram, also known as a swimlane diagram, is a type of diagram that visually shows the request flow. Imagine an actual swimming pool divided into lanes.
Each lane is dedicated to a specific swimmer (e.g., a class file). Now, use the following prompt to get the swimlane diagram:

Create a sequence diagram for Mermaid.

Copy the output to Mermaid. Here is the output:

sequenceDiagram
    participant User as User
    participant AuthController as "Auth Controller"
    participant OAuth2Config as "OAuth2 Config"
    participant UserDetailsServiceImpl as "User Details Service"
    participant GoogleOAuth as "Google OAuth Provider"
    participant UserDatabase as "User Database"
    User ->> AuthController: Initiate Login
    AuthController ->> OAuth2Config: Forward to OAuth2 Provider
    OAuth2Config ->> GoogleOAuth: Send OAuth Request
    GoogleOAuth -->> OAuth2Config: Receive OAuth Response
    OAuth2Config -->> AuthController: Receive OAuth Response
    AuthController ->> UserDetailsServiceImpl: Load User Details
    UserDetailsServiceImpl ->> UserDatabase: Retrieve User Info
    UserDatabase -->> UserDetailsServiceImpl: User Info
    UserDetailsServiceImpl -->> AuthController: User Details
    AuthController -->> User: Authentication Successful

Here is the full conversation with ChatGPT 3.5: https://chat.openai.com/share/70157733-da64-4a12-b15b-3607f4d1afe9

Step 5: Making things even faster

Everything we have done so far can be compiled into one mega prompt; just replace the content under the app description:

For a given app description APP DESCRIPTION START Spring boot rest application that stores reservations APP DESCRIPTION END Create directory structure, then create UML and sequence diagram for Mermaid tool using it's own syntax

Bonus: ChatGPT Plus users only

If you are on a ChatGPT Plus subscription, you get several benefits apart from the obvious GPT-4. First, GPT-4 itself gives you a nicer text output, including some nice emojis.

Prompt: Create a Java rest application based on Spring Boot that allows borrowing technical books to users. Admins have a dedicated role and are able to add and remove books. Users and admins can log in by using Google OAuth as an authentication provider. As the first step, show me the directory structure with file names, use emojis to represent different content type

Second, to speed up chart creation, you can use two plugins:
- The GitHub plugin "AskTheCode", which lets you scan a GitHub repository.
- The drawing plugin "Mermaid Chart", which generates diagrams for you and displays the images directly as part of the chat.

Conclusion

Pretty powerful, huh? Traditional methods of creating UML and sequence diagrams are much more time-consuming. Imagine how much time we just saved. By using this approach, you'll not only save time but also get valuable insight into your architecture. Feel free to use these prompts, tweak them, and make them your own. If you want to build systems like these, please connect and reach out to me over LinkedIn.

Author bio

Jakov Semenski is an IT Architect working at IBM iX with almost 20 years of experience. He is also a speaker on ChatGPT at the WeAreDevelopers conference and shares valuable tech stories on LinkedIn.


Hands-On with Prompt Engineering in Trading Stats

Anshul Saxena
11 Sep 2023
13 min read
Introduction

In today's dynamic trading environment, the key to success lies not just in possessing data, but in harnessing it efficiently and intelligently. With the advent of AI-powered tools like ChatGPT plugins, traders and investors can now access and analyze data with unprecedented ease and precision. These advanced tools offer a comprehensive view of the market, from historical price trends and fundamental stock metrics like EPS to real-time news updates and sentiment analysis. They can extract insights from intricate financial statements, gauge market sentiment from social media analytics, and provide an in-depth look into the world of option chains.

This tutorial aims to guide individuals on how to effectively utilize these AI-enhanced tools, specifically focusing on ChatGPT plugins. By integrating two standalone tools, Whimsical and ChatGPT, and exploring four distinct ChatGPT plugins - Polygon.io, AI Ticker Chat, PointsRecap, and OptionsPro - we present ten structured prompting techniques. These techniques, when automated, empower users to devise a data collection strategy tailored to various financial instruments, ensuring swift and accurate information retrieval. Dive in to stay a step ahead in your trading endeavors.

Mapping a data strategy for traders using Whimsical

Looking to decipher trading data more effectively? Dive into this guide where we combine the visual capabilities of Whimsical with the querying prowess of ChatGPT. Whimsical allows us to create intuitive mind maps, illustrating the interconnections in trading data, while ChatGPT assists in posing and comprehending crucial questions about stocks and companies. Together, these tools illuminate the essential data points in trading, guiding you towards informed decisions. Join us as we simplify the process, ensuring you harness these tools for optimal trading insights.

Prompt 1: Give me prompt ideas for data collection in AI

Based on the mind map generated by Whimsical, we need to fetch and analyse the financial and textual data in these six steps:

Step 1. Analyzing historical market data, focusing on aspects like price, volume, and volatility.
Step 2. Understanding fundamental analysis metrics, particularly the earnings per share (EPS), which is pivotal for stock evaluation.
Step 3. Gleaning insights from news and sentiment analysis sourced from diverse platforms.
Step 4. Reviewing financial statements and reports of companies active in trading to gauge their financial health.
Step 5. Harnessing social media analytics to monitor market discussions and discern emerging trends.
Step 6. Exploring option chain data, which is instrumental in analyzing options contracts related to specific securities or indices.

In the subsequent sections, we will delve deep into various facets of trading data, from analyzing historical market indicators like price, volume, and volatility, to understanding the significance of fundamental metrics such as earnings per share (EPS) for stock evaluation. We'll also tap into diverse platforms for news and sentiment analysis, review financial statements of active trading companies, harness the power of social media analytics to discern market trends, and explore the intricacies of option chain data related to specific securities or indices.

Step 1: Fetching Historical Data using ChatGPT

Delving into trading often begins with examining historical market data. This encompasses past stock prices, trading volumes, and price fluctuations, offering insights into a stock's historical performance.
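The ChatGPT plugin used in this step is backed by Polygon.io's REST API. As a point of reference, here is a minimal standalone sketch of the same kind of historical-data request made directly from Python. Note that the endpoint path, query parameters, and response fields shown here are assumptions based on Polygon.io's aggregates (bars) API; confirm them against the official documentation and supply your own API key.

import os
import requests

# Illustrative sketch only: fetch daily price/volume bars for AAPL directly from
# Polygon.io's REST API (the data source behind the plugin used in this step).
# Endpoint path and field names are assumptions; verify against the Polygon.io docs.
API_KEY = os.environ["POLYGON_API_KEY"]
url = "https://api.polygon.io/v2/aggs/ticker/AAPL/range/1/day/2023-01-01/2023-06-30"

resp = requests.get(
    url,
    params={"adjusted": "true", "sort": "asc", "apiKey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

for bar in resp.json().get("results", []):
    # In the aggregates response, "t" is the bar timestamp (ms), "c" the close
    # price, and "v" the traded volume.
    print(bar.get("t"), bar.get("c"), bar.get("v"))

Within ChatGPT itself, however, the plugin handles these calls for you, as described next.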
In this section, we'll harness the capabilities of Polygon.io through ChatGPT, utilizing API calls to access precise financial data. Join us as we guide you through this process, ensuring you have a clear understanding of a stock's past trajectory. Here's a step-by-step guide on how to use Polygon in ChatGPT:

Step 1. Ask Clearly: Tell ChatGPT your desired financial data, like Apple's latest stock news.
Step 2. Review Data: ChatGPT fetches the info using Polygon.io and presents it in a structured format.
Step 3. Dive Deeper: Pose follow-up questions or request more specific data points.
Step 4. Stay Informed & Compliant: Use the real-time data responsibly, adhering to financial regulations.

Prompt 2: Give me Historical market data such as price, volume, and volatility of APPLE

From the information provided, ChatGPT has offered a comprehensive analysis of the market data pertaining to Apple Inc.'s stock. This assessment can be further expanded upon to delve deeper into the historical price patterns, enhancing our understanding of its market trajectory.

Step 2: Understanding Fundamental Analysis through Prompt Engineering

To assess a company's financial robustness and growth prospects, certain key metrics are pivotal. Taking Apple Inc. as an example, metrics like Earnings Per Share (EPS), Year-on-Year (YoY) growth, profit margin, Price-to-Earnings (PE) ratio, and Profit After Tax (PAT) offer a holistic insight into its financial stance. In this subsection, we'll employ the Polygon plugin and prompt engineering to delve into these five crucial metrics, providing a clearer perspective on a company's financial health.

Prompt 3: Give detail about EPS, year-on-year (YoY) growth, profit margin, earnings per share (EPS), price-to-earnings (PE) ratio, and profit after tax (PAT) of APPL

The analysis has provided an in-depth output on the subject. We can further extend this study, leveraging ChatGPT and prompt engineering, to comprehend how fundamental KPIs influence a company's performance.

Step 3: Performing Sentiment Analysis on news sources using ChatGPT

In the dynamic realm of trading, staying abreast of current events is essential. News and sentiment analysis serve as barometers, capturing market sentiments from diverse sources such as media publications, expert commentary, and social media chatter. Grasping the prevailing sentiment around a stock or the broader market equips traders with insights for better decision-making. This section delves into the importance of these analytical tools and their role in shaping successful trading strategies.

Prompt 4: Can you give me the latest news articles related to Apple's stock ticker?

After fetching the relevant news articles from divergent sources, Polygon.io can be deployed to analyze them using the prompt below.

Prompt 5: Do the impact analysis of the news related to Apple

Analyzing the impact of news on a stock's performance requires a combination of understanding the content of the news and observing the market's reaction to it. Here's a brief analysis based on the provided news articles related to Apple:

Apple Just Shed More Than $230 Billion In Market Cap
Impact Analysis: A significant drop in market capitalization indicates a loss of investor confidence or a reaction to negative news. The mention of Apple possibly falling out of favor with the Chinese government is significant. China is a huge market for Apple, both in terms of sales and manufacturing.
Any negative sentiment from the Chinese government can have substantial implications for Apple's revenue and operations in the region.

Apple Could Emerge Unscathed From China Ban
Impact Analysis: This article seems to provide a counter-narrative to the previous one. If analysts believe that Apple can navigate the challenges posed by the Chinese government's decisions, it could mitigate some of the negative sentiment. However, the actual outcome will depend on how Apple manages its relationship with the Chinese government and its strategies to counteract any bans or restrictions.

Here's Everything Apple Plans To Show At Its 'Wonderlust' Event
Impact Analysis: Product launch events are significant for Apple. They not only introduce new products but also set the tone for the company's direction in the coming months. Positive reception of new products can drive stock prices up, while disappointment can have the opposite effect. The anticipation of the iPhone 15 series suggests that investors and consumers are keenly waiting to see Apple's next move.

In the next step, we will look into financial statements like the 10-K, 10-Q, and earnings transcript for Apple Inc.

Step 4: Analyzing Financial Statements with the ChatGPT Plugin

For traders aiming to swiftly assess a company's financial well-being, this section offers valuable insights. It guides users on utilizing the ChatGPT plugin to effectively interpret key financial metrics such as assets, liabilities, and revenues. By the conclusion of this section, readers will possess a potent analytical tool to enhance the efficiency and accuracy of their trading evaluations.

How to Find Company Reports like the 10-K with AI Ticker Chat

This tool helps you find company reports, especially documents like the 10-K, which is a yearly report companies file. Here's how to use it:

Step 1. Ask a Question: Think about the report you want to see. It could be a 10-K, which is a yearly report, or other types of reports.
Step 2. Give Some Details: To help find the right report, please tell me:
   - The company's name or symbol.
   - The kind of report you want (like 10-K for yearly reports).
   - The year you're interested in.
Step 3. Wait a Moment: After you tell me what you're looking for, I'll search for the report and show you what I find.
Step 4. Look at What I Found: I'll show you the report or information. You can read it, ask more questions, or ask for more details.

Prompt 6: Give latest 10-K, 10-Q, earnings transcript for APPL

Prompt 7: Provide in-depth analysis of 10-K document for APPL

This analysis provides a high-level overview of the key points from Apple's 10-K. This plugin has the potential to save a trader the pain of going through lengthy statements. Afterward, by asking intelligent questions through prompt engineering, a thorough analysis can be conducted in minutes.

Step 5: Tracking Market Trends with Social Media Analytics using the ChatGPT Plugin

Leveraging social media analytics provides a window into prevailing market discussions and emerging trends. Through the ChatGPT plugin, traders can seamlessly track real-time dialogues, pinpoint developing market patterns, and base their decisions on these insights. Incorporating this tool into a trading strategy ensures an edge in understanding and acting on market sentiments.

How to Use PointsRecap: A Quick Tutorial

Step 1. Retrieve Viewpoints for a Ticker: Use the `getViewPoints` function with the desired stock ticker to get viewpoints.
For example, `getViewPoints({ ticker: "AAPL" })` fetches viewpoints for Apple Inc.
Step 2. Access Recent Highlighted Videos: Simply call the `getRecentHighlights()` function to see the latest highlighted YouTube videos.
Step 3. Generate a Word Cloud: Understand trending topics in recent videos by using the `getWordCloud()` function, which returns a visual representation of frequently discussed topics.
Step 4. Analyze and Present Data: After obtaining the data from PointsRecap, analyze the results to gain insights and integrate them into your research or presentation.

We will further explore the latest chatter around Apple Inc. on YouTube in the next two prompts.

Prompt 8: getViewPoints({ ticker: "AAPL" })

Prompt 9: getWordCloud(APPL)

By analyzing the above output, an individual can understand the overall sentiment and trends related to Apple Inc., which helps in formulating a better strategy. In the next section, we will further our research with data pertaining to options related to the company.

Step 6: Analyzing Option Chain Data with the ChatGPT Plugin

The options market can serve as a predictive lens for anticipated movements in a company's stock. Key indicators such as volume and open interest can hint at bullish or bearish sentiments. Implied Volatility (IV) reflects expected stock price fluctuations, while the Put/Call Ratio offers a snapshot of market sentiment. Additionally, option pricing, activity in out-of-the-money options, and the chosen expiration dates can provide insights into traders' expectations. Furthermore, hedging activities by large institutional investors using options can reveal their outlook on a stock. In this subsection, we'll delve into how these intricate dynamics in the options market can shed light on potential stock market trends.

Option chains provide a structured display of all available option contracts for a specific security or index. These chains offer insights into various parameters like strike prices, expiration dates, and premiums. With the ChatGPT plugin, you can seamlessly retrieve and analyze this data in real time. This section will guide you through the process of harnessing the power of the ChatGPT plugin to make informed decisions regarding option contracts.

Using OptionsPro with ChatGPT: A Five-Step Tutorial

OptionsPro offers a comprehensive suite of tools for analyzing options and financial data. This tutorial will guide you through five essential steps to harness the power of the OptionsPro plugin with ChatGPT.

Step 1: Set Up Your Environment
Ensure you have access to the OptionsPro plugin and that it's integrated with ChatGPT. Familiarize yourself with the available endpoints.

Step 2: Get a Market Overview
Use the `getMarketOutlook` endpoint to gain insights into the general market trend: `OptionsPro.getMarketOutlook()`. Analyze the returned data to understand the current market sentiment and key indicators.

Step 3: Dive into Individual Stock Analysis
Choose a stock ticker you're interested in. Use the `getOptionFlow` endpoint to get the most traded options for that ticker: `OptionsPro.getOptionFlow({ticker: "AAPL", topN: 10})`. This will give you a list of the top 10 most traded options for Apple Inc.

Step 4: Monitor Unusual Option Activities
Use the `getOptionAlerts` endpoint to get alerts on unusual options activities.
For example: `OptionsPro.getOptionAlerts({ticker: "AAPL"})`. This will alert you to any unusual options activity for Apple Inc., helping you spot potential market-moving events.

Step 5: Analyze Option Details
Once you've identified an option of interest, delve deeper using the `getOptionDetails` endpoint: `OptionsPro.getOptionDetails({ticker: "AAPL", type: "call", expiration: "2023-12-15", strike: 150})`. This will provide details about the call option for Apple Inc. with a strike price of $150, expiring on December 15, 2023.

Prompt 10: getOptionDetails APPL

This is the last step in formulating a data collection strategy for trading using ChatGPT and prompt engineering.

Conclusion

In the ever-evolving landscape of trading, the fusion of AI-powered tools like ChatGPT plugins with traditional trading strategies has revolutionized the way traders access and interpret data. This tutorial illuminated the potential of combining tools like Whimsical with ChatGPT and delved into the capabilities of plugins such as Polygon.io, AI Ticker Chat, PointsRecap, and OptionsPro. By mastering the ten prompting techniques presented, traders can automate and refine their data collection processes, ensuring they're equipped with timely and precise information. As the world of trading continues to advance, leveraging these AI-enhanced tools will be paramount for those aiming to stay at the forefront of their trading journey.

Author Bio

Dr. Anshul Saxena is an author, corporate consultant, inventor, and educator who assists clients in finding financial solutions using quantum computing and generative AI. He has filed over three Indian patents and has been granted an Australian Innovation Patent. Anshul is the author of two best-selling books in the realm of HR Analytics and Quantum Computing (Packt Publications). He has been instrumental in setting up new-age specializations like decision sciences and business analytics in multiple business schools across India. Currently, he is working as Assistant Professor and Coordinator – Center for Emerging Business Technologies at CHRIST (Deemed to be University), Pune Lavasa Campus. Dr. Anshul has also worked with reputed companies like IBM as a curriculum designer and trainer and has been instrumental in training 1000+ academicians and working professionals from universities and corporate houses like UPES, CRMIT, NITTE Mangalore, Vishwakarma University, Pune, Kaziranga University, and KPMG, IBM, Altran, TCS, Metro CASH & Carry, HPCL, and IOC. With five years of work experience in financial risk analytics at TCS and Northern Trust, Dr. Anshul has guided master's students in creating projects on emerging business technologies, which have resulted in 8+ Scopus-indexed papers. Dr. Anshul holds a PhD in Applied AI (Management), an MBA in Finance, and a BSc in Chemistry. He possesses multiple certificates in the field of Generative AI and Quantum Computing from organizations like SAS, IBM, IISc, Harvard, and BIMTECH.

Author of the book: Financial Modeling Using Quantum Computing


Fine-Tuning Large Language Models (LLMs)

Amita Kapoor
11 Sep 2023
12 min read
Introduction

In the bustling metropolis of machine learning and natural language processing, Large Language Models (LLMs) such as GPT-4 are the skyscrapers that touch the clouds. From chatty chatbots to prolific prose generators, they stand tall, powering a myriad of applications. Yet, like any grand structure, they're not one-size-fits-all. Sometimes, they need a little nipping and tucking to shine their brightest. Dive in as we unravel the art and craft of fine-tuning these linguistic behemoths, sprinkled with code confetti for the hands-on aficionados out there.

What's In a Fine-Tune?

In a world where a top chef can make spaghetti or sushi but needs finesse for regional dishes like 'Masala Dosa' or 'Tarte Tatin', LLMs are similar: versatile but requiring specialization for specific tasks. A general LLM might misinterpret rare medical terms or downplay symptoms, but with fine-tuning on medical text, it can distinguish nuanced health issues. In law, a misread word can change legal interpretations; by refining the LLM with legal documents, we achieve accurate clause interpretation. In finance, where terms like "bearish" and "bullish" are pivotal, specialized training ensures the model's accuracy in financial analysis and predictions.

Whipping Up the Perfect AI Recipe

Just as a master chef carefully chooses specific ingredients and techniques to curate a gourmet dish, in the vast culinary world of Large Language Models, we have a delectable array of fine-tuning techniques to concoct the ideal AI delicacy. Before we dive into the details, feast your eyes on the visual smorgasbord below, which provides an at-a-glance overview of these methods. With this flavour-rich foundation, we're all set to embark on our fine-tuning journey, focusing on the PEFT method and the Flan-T5 model on the Hugging Face platform. Aprons on, and let's get cooking!

Fine-Tuning Flan-T5

Google AI's Flan-T5, an advanced version of the T5 model, excels among LLMs with its capability to handle text and code. It specialises in text generation, translation, summarization, question answering, and code generation. Unlike GPT-3 and LLaMA, Flan-T5 is open-source, benefiting researchers worldwide. With configurations ranging from 60M to 11B parameters, it balances versatility and power, though larger models demand more computational resources.

For this article, we will leverage the DialogSum dataset, a robust resource boasting 13,460 dialogues, supplemented with manually labelled summaries and topics (and an additional 100 holdout data entries for topic generation). This dataset will serve as the foundation for fine-tuning our open-source giant, Flan-T5, to specialise it for dialogue summarization tasks.

Setting the Stage: Preparing the Tool Chest

To fine-tune effectively, ensure your digital setup is optimized. Here's a quick checklist:

- Hardware: Use platforms like Google Colab.
- RAM: Memory depends on model parameters. For example: Memory (MTotal) = 4 x (Number of Parameters x 4 bytes). For a 247,577,856-parameter model (flan-t5-base), around 3.7 GB is needed for parameters, gradients, and optimizer states. Ideally, have at least 8 GB of RAM.
- GPU: A high-end GPU, such as an NVIDIA Tesla P100 or T4, speeds up training and inference. Aim for 12 GB or more of GPU memory, accounting for overheads.
- Libraries: Like chefs need the right tools, AI fine-tuning demands specific libraries for algorithms, models, and evaluation tools.

Remember, your setup is as crucial as the process itself.
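To make the RAM rule of thumb above concrete, here is a quick back-of-the-envelope check in Python that plugs in the flan-t5-base parameter count; the ~3.7 GiB result is where the 8 GB recommendation comes from.

# Back-of-the-envelope check of the rule of thumb above:
# total memory ≈ 4 x (number of parameters x 4 bytes),
# covering weights, gradients, and optimizer states.
num_parameters = 247_577_856      # flan-t5-base
bytes_per_parameter = 4

estimated_bytes = 4 * num_parameters * bytes_per_parameter
print(f"Estimated training memory: {estimated_bytes / 1024**3:.2f} GiB")
# ~3.69 GiB, which is why at least 8 GB of RAM is recommended.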
Let's conjure up the essential libraries by running the following command:

!pip install \
    transformers \
    datasets \
    evaluate \
    rouge_score \
    loralib \
    peft

With these tools in hand, we're now primed to move deeper into the world of fine-tuning. Let's dive right in! Next, it's essential to set up our environment with the necessary tools:

from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig, TrainingArguments, Trainer
import torch
import time
import evaluate
import pandas as pd
import numpy as np

To put our fine-tuning steps into motion, we first need a dataset to work with. Enter the DialogSum dataset, an extensive collection tailored for dialogue summarization:

dataset_name = "knkarthick/dialogsum"
dataset = load_dataset(dataset_name)

Executing this code, we've swiftly loaded the DialogSum dataset. With our data playground ready, we can take a closer look at its structure and content to better understand the challenges and potential of our fine-tuning process. The DialogSum dataset is neatly structured into three segments:

- Train: With a generous 12,460 dialogues, this segment is the backbone of our model's learning process.
- Validation: A set of 500 dialogues, this slice of the dataset aids in fine-tuning the model, ensuring it doesn't merely memorise but truly understands.
- Test: This final 1,500-dialogue portion stands as the litmus test, determining how well our model has grasped the art of dialogue summarization.

Each dialogue entry is accompanied by a unique 'id', a 'summary' of the conversation, and a 'topic' to give context.

Before fine-tuning, let's gear up with our main tool: the Flan-T5 model, specifically its 'base' variant from Google, which balances performance and efficiency. Using AutoModelForSeq2SeqLM, we effortlessly load the pre-trained Flan-T5, set to use torch.bfloat16 for optimal memory and precision. Alongside, we have the tokenizer, essential for translating text into a model-friendly format. Both are sourced from google/flan-t5-base, ensuring seamless compatibility. Now, let's get this code rolling:

model_name = 'google/flan-t5-base'
original_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

Understanding Flan-T5 requires a look at its structure, particularly its parameters. Knowing the number of trainable parameters shows the model's adaptability. The total parameters reflect its complexity. The following code will count these parameters and calculate the ratio of trainable ones, giving insight into the model's flexibility during fine-tuning. Let's now decipher these statistics for our Flan-T5 model:

def get_model_parameters_info(model):
    total_parameters = sum(param.numel() for param in model.parameters())
    trainable_parameters = sum(param.numel() for param in model.parameters() if param.requires_grad)
    trainable_percentage = 100 * trainable_parameters / total_parameters
    info = (
        f"Trainable model parameters: {trainable_parameters}",
        f"Total model parameters: {total_parameters}",
        f"Percentage of trainable model parameters: {trainable_percentage:.2f}%"
    )
    return '\n'.join(info)

print(get_model_parameters_info(original_model))

Trainable model parameters: 247577856
Total model parameters: 247577856
Percentage of trainable model parameters: 100.00%

Harnessing PEFT for Efficiency

In the fine-tuning journey, we seek methods that boost efficiency without sacrificing performance.
This brings us to PEFT (Parameter-Efficient Fine-Tuning) and its secret weapon, LoRA (Low-Rank Adaptation). LoRA smartly adapts a model to new tasks with minimal parameter adjustments, offering a cost-effective solution in computational terms.

In the code block that follows, we're initializing LoRA's configuration. Key parameters to note include:

- r: The rank of the low-rank decomposition, which influences the number of adaptable parameters.
- lora_alpha: A scaling factor determining the initial magnitude of the LoRA parameters.
- target_modules: The neural network components we wish to reparameterize. Here, we're targeting the "q" (query) and "v" (value) modules in the transformer's attention mechanism.
- lora_dropout: A regularising dropout applied to the LoRA parameters to prevent overfitting.
- bias: Specifies the nature of the bias term in the reparameterization. Setting it to "none" means no bias will be added.
- task_type: Signifies the type of task for which we're employing LoRA. In our case, it's sequence-to-sequence language modelling, perfectly aligned with our Flan-T5's capabilities.

# The exact configuration values were omitted from this excerpt; the r, lora_alpha,
# and lora_dropout values below are illustrative placeholders matching the
# parameters described above.
from peft import LoraConfig, get_peft_model, TaskType

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    target_modules=["q", "v"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM
)

Using the get_peft_model function, we integrate the LoRA configuration into our Flan-T5 model. Now, let's see how this affects the trainable parameters:

peft_model = get_peft_model(original_model, lora_config)
print(get_model_parameters_info(peft_model))

Trainable model parameters: 3538944
Total model parameters: 251116800
Percentage of trainable model parameters: 1.41%

Preparing for model training requires setting specific parameters. Directory choice, learning rate, logging frequency, and epochs are vital. A unique output directory segregates results from different training runs, enabling comparison. Our high learning rate signifies aggressive fine-tuning, and the allocated epochs give the model ample adaptation time. With these settings, we're poised to initiate the trainer and embark on the training journey.

# Set the output directory with a unique name using a timestamp
output_dir = f'peft-dialogue-summary-training-{str(int(time.time()))}'

# Define the training arguments for PEFT model training
peft_training_args = TrainingArguments(
    output_dir=output_dir,
    auto_find_batch_size=True,  # Automatically find an optimal batch size
    learning_rate=1e-3,         # Use a higher learning rate for fine-tuning
    num_train_epochs=10,        # Set the number of training epochs
    logging_steps=1000,         # Log every 1000 steps
    max_steps=-1                # Let the number of steps be determined by epochs and dataset size
)

# Initialise the trainer with the PEFT model and training arguments
# (formatted_datasets is the tokenized DialogSum dataset prepared for training)
peft_trainer = Trainer(
    model=peft_model,
    args=peft_training_args,
    train_dataset=formatted_datasets["train"],
)

Let the learning begin!

peft_trainer.train()

To evaluate our models, we'll compare their summaries to a human baseline from our dataset using a `prompt`. With the original and PEFT-enhanced Flan-T5 models, we'll create summaries and contrast them with the human version, revealing AI accuracy and the best-performing model in our summary contest.

def generate_summary(model, tokenizer, dialogue, prompt):
    """
    Generate summary for a given dialogue and model.
""" input_text = prompt + dialogue input_ids = tokenizer(input_text, return_tensors="pt").input_ids input_ids = input_ids.to(device) output_ids = model.generate(input_ids=input_ids, max_length=200, num_beams=1, early_stopping=True) return tokenizer.decode(output_ids[0], skip_special_tokens=True) index = 270 dialogue = dataset['test'][index]['dialogue'] human_baseline_summary = dataset['test'][index]['summary'] prompt = "Summarise the following conversation:\n\n" # Generate summaries original_summary = generate_summary(original_model, tokenizer, dialogue, prompt) peft_summary = generate_summary(peft_model, tokenizer, dialogue, prompt) # Print summaries print_output('BASELINE HUMAN SUMMARY:', human_baseline_summary) print_output('ORIGINAL MODEL:', original_summary) print_output('PEFT MODEL:', peft_summary)And the output:----------------------------------------------------------------------- BASELINE HUMAN SUMMARY:: #Person1# and #Person1#'s mother are preparing the fruits they are going to take to the picnic. ----------------------------------------------------------------------- ORIGINAL MODEL:: #Person1# asks #Person2# to take some fruit for the picnic. #Person2# suggests taking grapes or apples.. ----------------------------------------------------------------------- PEFT MODEL:: Mom and Dad are going to the picnic. Mom will take the grapes and the oranges and take the oranges.To assess our summarization models, we use the subset of the test dataset. We'll compare the summaries to human-created baselines. Using batch processing for efficiency, dialogues are processed in set group sizes. After processing, all summaries are compiled into a DataFrame for structured comparison and analysis. Below is the Python code for this experiment.dialogues = dataset['test'][0:20]['dialogue'] human_baseline_summaries = dataset['test'][0:20]['summary'] original_model_summaries = [] peft_model_summaries = [] for dialogue in dialogues:    prompt = "Summarize the following conversation:\n\n"      original_summary = generate_summary(original_model, tokenizer, dialogue, prompt)       peft_summary = generate_summary(peft_model, tokenizer, dialogue, prompt)      original_model_summaries.append(original_summary)    peft_model_summaries.append(peft_summary) df = pd.DataFrame({    'human_baseline_summaries': human_baseline_summaries,    'original_model_summaries': original_model_summaries,    'peft_model_summaries': peft_model_summaries }) dfTo evaluate our PEFT model's summaries, we use the ROUGE metric, a common summarization tool. ROUGE measures the overlap between predicted summaries and human references, showing how effectively our models capture key details. 
The Python code for this evaluation is:

rouge = evaluate.load('rouge')

original_model_results = rouge.compute(
    predictions=original_model_summaries,
    references=human_baseline_summaries[0:len(original_model_summaries)],
    use_aggregator=True,
    use_stemmer=True,
)

peft_model_results = rouge.compute(
    predictions=peft_model_summaries,
    references=human_baseline_summaries[0:len(peft_model_summaries)],
    use_aggregator=True,
    use_stemmer=True,
)

print('ORIGINAL MODEL:')
print(original_model_results)
print('PEFT MODEL:')
print(peft_model_results)

Here is the output:

ORIGINAL MODEL:
{'rouge1': 0.3870781853986991, 'rouge2': 0.13125454660387353, 'rougeL': 0.2891907205395029, 'rougeLsum': 0.29030342767482775}
INSTRUCT MODEL:
{'rouge1': 0.3719168722187023, 'rouge2': 0.11574429294744135, 'rougeL': 0.2739614480462256, 'rougeLsum': 0.2751489358330983}
PEFT MODEL:
{'rouge1': 0.3774164144865605, 'rouge2': 0.13204737323990984, 'rougeL': 0.3030487123408395, 'rougeLsum': 0.30499897454317104}

Upon examining the results, it's evident that the original model shines with the highest ROUGE-1 score, adeptly capturing crucial standalone terms. On the other hand, the PEFT model wears the crown for both ROUGE-L and ROUGE-Lsum metrics. This implies the PEFT model excels in crafting summaries that string together longer, coherent sequences echoing those in the reference summaries.

Conclusion

Wrapping it all up, in this post we delved deep into the nuances of fine-tuning Large Language Models, particularly spotlighting the prowess of Flan-T5. Through our hands-on venture into the dialogue summarization task, we discerned the intricate dance between capturing individual terms and weaving them into a coherent narrative. While the original model exhibited an impressive knack for highlighting key terms, the PEFT model emerged as the maestro in crafting flowing, meaningful sequences.

It's clear that in the grand arena of language models, knowing the notes is just the beginning; it's how you orchestrate them that creates the magic. Harnessing the techniques illuminated in this post, you too can fine-tune your chosen LLM, crafting your linguistic symphonies with finesse and flair. Here's to you becoming the maestro of your own linguistic ensemble!

Author Bio

Amita Kapoor is an accomplished AI consultant and educator with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita retired early and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. After her retirement, Amita founded NePeur, a company providing data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.


Build Your LLM-Powered Personal Website

Louis Owen
11 Sep 2023
8 min read
Introduction

Since ChatGPT shocked the world with its capability, AI has been utilized in numerous fields: customer service assistants, marketing content creation, code assistants, travel itinerary planning, investment analysis, and you name it. However, have you ever wondered about utilizing AI, or more specifically a Large Language Model (LLM) like ChatGPT, as your own personal AI assistant on your website?

A personal website, usually also called a personal web portfolio, consists of all the things we want to showcase to the world, starting from our short biography, work experience, projects we have done, achievements, paper publications, and any other things related to our professional work. We put this website live on the internet and people can come and see all of its content by scrolling and surfing the pages.

What if we could change the user experience a bit, from scrolling and surfing to asking a query? What if we added a small search bar or widget where visitors can directly ask anything they want to know about us and get the answer immediately? Imagine a head-hunter or hiring manager who opens your personal website. In their mind, they already have specific criteria for the potential candidates they want to hire. If we put a search bar or any type of widget on our website, they can directly ask what they want to know about us, improving our chances of being approached by them. It also shows that we're adapting to the latest technology available in the market, which will surely add positive points on their scoring board.

In this article, I'll guide you through building your own LLM-powered personal website. We'll start by discussing several freely available LLMs that we can utilize. Then, we'll go into the step-by-step process of building our personal AI assistant by exploiting the LLM's capability as a Question and Answering (QnA) module. As a hint, we'll use one of the available task-specific models provided by AI21Labs as our LLM. They provide a 3-month free trial worth $90, or 18,000 free calls for the QnA model. Finally, we'll see how we can put our personal AI assistant on our website.

Without wasting any more time, let's take a deep breath, make ourselves comfortable, and be ready to learn how to build your own LLM-powered personal website!

Freely Available LLMs

The main engine of our personal AI assistant is an LLM. The question is: which LLM should we use?

There are many variants of LLM available in the market right now, from open-source to closed-source. There are two main differences between them. Open-source LLMs are absolutely free, but you need to host them yourself. On the other hand, closed-source LLMs are not free, but we don't need to host them ourselves; we just need to send an API request to utilize them.

As for open-source LLMs, the go-to model for a lot of use cases is LLaMA-2 by Meta AI. Since an LLM consumes a large amount of GPU memory, in practice we usually perform 4-bit quantization to reduce the memory usage. Thanks to the open-source community, you can now directly use the quantized version of LLaMA-2 in the HuggingFace library, released by TheBloke. To host the LLM, we can also utilize a very powerful inference server called Text Generation Inference (TGI).

The next question is whether there are freely available GPU machines out there that we can use to host the LLM. We can't use Google Colab, since we want to host the model on a server to which the personal website can send API requests.
Luckily, there are two free options available to us: Google Cloud Platform and SaturnCloud. Both offer free trial accounts that let us rent GPU machines.

An open-source LLM like LLaMA-2 is free, but it comes with the additional hassle of hosting it ourselves. In this article, we'll use a closed-source LLM as our personal AI assistant instead. However, most closed-source LLMs that can be accessed via API are not free: GPT-3.5 and GPT-4 by OpenAI, Claude by Anthropic, Jurassic by AI21Labs, etc. Luckily, AI21Labs offers a free trial worth $90! Moreover, they also provide task-specific models that are charged based on the number of API calls, not on the number of tokens as in most other closed-source LLMs. This is very suitable for our use case, since we'll have long input contexts! Let's dive deeper into the AI21Labs LLM, specifically the QnA model, which we'll be using as our personal AI assistant.

AI21Labs QnA LLM

AI21Labs provides numerous task-specific models, which offer out-of-the-box reading and writing capabilities. The LLM we'll be using is fine-tuned specifically for the QnA task, or as they call it, the "Contextual Answers" model. We just need to provide the context and query, and it will return an answer based solely on the information available in the context. This model is priced at $0.005 per API request, which means that with our $90 free trial account, we can send 18,000 API calls! Isn't that amazing? Without further ado, let's start building our personal AI assistant!

1. Create an AI21Labs Free Trial Account

To use the QnA model, we just need to create a free trial account on the AI21Labs website. You can follow the steps on the website; it's super easy, just like creating a new account on most websites.

2. Enter the Playground

Once you have the free trial account, you can go to the AI21Studio page and select "Contextual Answers" under the "Task-Specific Models" section in the left bar. Then, we can go to the Playground to test the model. Inside the Playground of the QnA model, there are two input fields and one output field. As input, we need to pass the context (the knowledge list) and the query. As output, we'll get the answer to the given query based on the context provided. What if the answer doesn't exist in the context? The model will return "Answer not in documents." as the fallback.

3. Create the Knowledge List

The next and main task is to create the knowledge list that forms the context input. Think of this knowledge list as the Knowledge Base (KB) for the model. The model is able to answer queries only based on the information available in this KB.

4. Test with Several Queries

Most likely, our first set of knowledge is not exhaustive. Thus, we need to do several iterations of testing to keep expanding the list while also maintaining the quality of the returned answers. We can start by creating a list of possible queries that might be asked by our web visitors. Then, we can add several answers for each of the queries to the knowledge list. Pro tip: once our assistant is deployed on our website, we can also add a logger to store all queries and responses that we get. Using that log data, we can further expand our knowledge list, hence making our AI assistant "smarter".

5. Embed the AI Assistant on Our Website

Until now, we have just played with the LLM in the Playground. However, our goal is to put it inside our web portfolio.
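Before embedding it, it can help to see what a raw call to the Contextual Answers model looks like from a script. The sketch below uses Python's requests library; note that the endpoint path, payload fields, and response shape shown here are assumptions for illustration and should be verified against AI21 Studio's official API documentation before use.

import os
import requests

# Minimal sketch of a direct call to AI21's "Contextual Answers" task-specific model.
# The endpoint path and field names below are assumptions; check the AI21 docs.
AI21_ANSWER_URL = "https://api.ai21.com/studio/v1/answer"

context = (
    "Louis is a data scientist who has worked in NGOs, e-commerce, "
    "conversational AI, OTA, Smart City, and FinTech."
)
question = "Which industries has Louis worked in?"

response = requests.post(
    AI21_ANSWER_URL,
    headers={"Authorization": f"Bearer {os.environ['AI21_API_KEY']}"},
    json={"context": context, "question": question},
    timeout=30,
)
response.raise_for_status()

# Expected to return the answer text, or a fallback such as
# "Answer not in documents." when the context does not cover the question.
print(response.json().get("answer"))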
Thanks to AI21Labs, we can do this easily just by adding a JavaScript snippet to our website. Click the three-dots button in the top right of the context input and choose the "Code" option. A pop-up page will be shown, and you can directly copy and paste the JavaScript code into your personal website. That's it!

Conclusion

Congratulations on making it to this point! Hopefully, I will see many new LLM-powered portfolios developed after this article is published. Throughout this article, you have learned how to build your own LLM-powered personal website, starting from the motivation, freely available LLMs with their pros and cons, AI21Labs task-specific models, creating your own knowledge list along with some tips, and finally how to embed your AI assistant in your personal website. See you in the next article!

Author Bio

Louis Owen is a data scientist/AI engineer from Indonesia who is always hungry for new knowledge. Throughout his career journey, he has worked in various industries, including NGOs, e-commerce, conversational AI, OTA, Smart City, and FinTech. Outside of work, he loves to spend his time helping data science enthusiasts become data scientists, either through his articles or through mentoring sessions. He also loves to spend his spare time on his hobbies: watching movies and working on side projects.

Currently, Louis is an NLP Research Engineer at Yellow.ai, the world's leading CX automation platform. Check out Louis' website to learn more about him! Lastly, if you have any queries or any topics to be discussed, please reach out to Louis via LinkedIn.

article-image-ai-distilled-16-baidus-ernie-chatbot-openais-chatgpt-in-education-metas-facet-dataset-fmops-or-llmops-qualcomms-ai-focus-interecagent-liquid-neural-networks
Merlyn Shelley
08 Sep 2023
11 min read
Save for later

AI_Distilled #16: Baidu's Ernie Chatbot, OpenAI's ChatGPT in Education, Meta's FACET Dataset, FMOps or LLMOps, Qualcomm's AI Focus, InteRecAgent, Liquid Neural Networks

👋 Hello ,“Artificial intelligence is one of the most profound things we're working on as humanity. It is more profound than fire or electricity.” -Sundar Pichai, Google CEO  Pichai's AI-fire analogy signifies a transformative era; AI and ML will revolutionize education, medicine, and more, reshaping human progress. OpenAI has begun promoting the use of ChatGPT in education, which shouldn’t really come as a surprise as students the world over have been experimenting with the technology. Get ready to dive into the latest AI developments in this edition, AI_Distilled #16, including Baidu launching Ernie chatbot following Chinese government approval, X's Privacy Policy Reveals Plan to Use Public Data for AI Training, Meta releasing FACET Dataset to evaluate AI model fairness, Google’s new Multislice for scalable AI training on cloud TPUs, and Qualcomm's focus on AI and auto amidst NVIDIA's chip dominance. Watch out also for our handpicked collection of fresh AI, GPT, and LLM-focused secret knowledge and tutorials from around the web covering Liquid Neural Networks, Serverless Machine Learning with Amazon Redshift ML, implementing effective guardrails for LLMs, Navigating Generative AI with FMOps and LLMOps, and using Microsoft’s new AI compiler quartet. What do you think of this issue and our newsletter? Please consider taking the short survey below to share your thoughts and you will get a free PDF of the “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  Cheers,  Merlyn Shelley  Editor-in-Chief, Packt  ⚡ TechWave: AI/GPT News & AnalysisMeta Releases FACET Dataset to Evaluate AI Model Fairness: Meta has launched FACET (FAirness in Computer Vision EvaluaTion), a dataset designed to assess the fairness of AI models used for image and video classification, including identifying people. Comprising 32,000 images with 50,000 labeled individuals, FACET includes demographic and physical attributes, allowing for deep evaluations of biases against various classes. Despite previous concerns about Meta's responsible AI practices, the company claims FACET is more comprehensive than previous bias benchmarks. However, concerns have been raised about the dataset's origins and the compensation of annotators. Meta has also released a web-based dataset explorer tool for FACET. You can read the full paper here. Baidu Launches Ernie Chatbot Following Chinese Government Approval: Chinese tech giant Baidu has unveiled its chatbot, Ernie Bot, after receiving government clearance, along with other AI firms. Ernie Bot is now accessible for download via app stores or Baidu's website. Similar to its rival, ChatGPT, users can engage Ernie Bot for queries, market analysis assistance, marketing slogan ideas, and document summaries. While it's accessible globally, registration requires a Chinese number, and the app is only in Chinese on US Android and iOS stores. Baidu has also introduced a plug-in market for Ernie Bot, which quickly garnered over 1 million users within 19 hours of launch. CEO Robin Li expressed plans for further AI-native apps aimed at exploring generative AI's core abilities. 
Google Introduces Multislice for Scalable AI Training on Cloud TPUs: Google has unveiled Multislice, a comprehensive large-scale training technology that facilitates straightforward, cost-effective, and nearly linear scaling to tens of thousands of Cloud Tensor Processing Units (TPUs) chips. Traditionally, a training run was restricted to a single slice, which meant a maximum of 3072 TPU v4 chips could be used. With Multislice, training can span multiple slices across pods through data center networking, eliminating these limitations. This innovation offers benefits such as efficient scaling for massive models, enhanced developer productivity, automatic compiler optimizations, and cost-efficiency. It promises to revolutionize AI infrastructure by enabling near-linear scaling for AI supercomputing. OpenAI Promotes Use of ChatGPT in Education: OpenAI is encouraging educators to utilize ChatGPT in classrooms. The company showcased six educators, primarily at the university level, using ChatGPT for various purposes, such as role-playing in debates, aiding translation for English-as-a-second-language students, and fact-checking. Despite some schools banning ChatGPT due to concerns about academic integrity, OpenAI believes it can be a valuable tool in education. However, it emphasizes the importance of maintaining human oversight in the assessment process. X's Privacy Policy Reveals Plan to Use Public Data for AI Training: In an update to its privacy policy, X (formerly Twitter) has informed users that it will now collect biometric data, job histories, and education backgrounds. However, another section of the policy reveals a broader plan: X intends to utilize the data it gathers, along with publicly available information, to train its machine learning and AI models. This revelation has attracted attention, particularly due to the connection with X owner Elon Musk's ambitions in the AI market through his company xAI. Musk confirmed the privacy policy change, emphasizing that only public data, not private messages, would be used for AI training.   Qualcomm's Focus on AI and Auto Amidst NVIDIA’s Chip Dominance: As NVIDIA takes the lead as the world's largest fabless chip company, Qualcomm is strategically positioning itself in the AI realm. The company has unveiled in-vehicle generative AI capabilities, expanded into two-wheelers, and forged a partnership with Amazon Web Services. Qualcomm's CEO, Cristiano Amon, believes that generative AI, currently reliant on cloud resources, will transition to local execution, enhancing performance and cost-efficiency. Diversification is also a priority, with Qualcomm's chips powering various smart devices, especially in the automotive sector. Amid uncertainty about its future relationship with Apple, Qualcomm aims to maintain its dominance through innovations in AI and auto tech. InteRecAgent, A Fusion of Language Models and Recommender Systems Introduced: Researchers from the University of Science and Technology of China, in collaboration with Microsoft Research Asia, have introduced InteRecAgent, a cutting-edge framework. This innovation seeks to combine the interactive capabilities of LLMs with the domain-specific precision of traditional recommender systems. Recommender systems play a vital role in various digital domains, but they often struggle with versatile interactions. On the other hand, LLMs excel in conversations but lack domain-specific knowledge. 
InteRecAgent introduces the "Candidate Memory Bus" to streamline recommendations for LLMs and a "Plan-first Execution with Dynamic Demonstrations" strategy for effective tool interaction. adidas Utilizes AI and NVIDIA RTX for Photorealistic 3D Content: Sportswear giant adidas is partnering with Covision Media, an Italian startup, to revolutionize their online shopping experience. Covision employs AI and NVIDIA RTX technology to develop 3D scanners that allow businesses to create digital twins of their products with stunning realism. This technology can quickly generate 3D scans, capturing textures, colors, and geometry, resulting in lifelike images. adidas is among the first to adopt this technology for automating and scaling e-commerce content production, enhancing their Virtual Try-On feature and replacing traditional product photography with computer-generated content.  🔮 Expert Insights from Packt CommunityServerless Machine Learning with Amazon Redshift ML - By Debu Panda, Phil Bates, Bhanu Pittampally, Sumeet JoshiData analysts and developers use Redshift data with machine learning (ML) models for tasks such as predicting customer behavior. Amazon Redshift ML streamlines this process using familiar SQL commands. A conundrum arises when attempting to decipher these data silos – a formidable challenge that hampers the derivation of meaningful insights essential for organizational clarity. Adding to this complexity, security and performance considerations typically prevent business analysts from accessing data within OLTP systems. The hiccup is that intricate analytical queries weigh down OLTP databases, casting a shadow over their core operations. Here, the solution is the data warehouse, which is a central hub of curated data, used by business analysts and data scientists to make informed decisions by employing the business intelligence and machine learning tools at their disposal. These users make use of Structured Query Language (SQL) to derive insights from this data trove. Here’s where Amazon Redshift Serverless comes in. It’s a key option within Amazon Redshift, a well-managed cloud data warehouse offered by Amazon Web Services (AWS). With cloud-based ease, Amazon Redshift Serverless lets you set up your data storage without infrastructure hassles or cost worries. You pay based on what you use for compute and storage. Amazon Redshift Serverless goes beyond convenience, propelling modern data applications that seamlessly connect to the data lake. The above content is extracted from the book Serverless Machine Learning with Amazon Redshift ML written by Debu Panda, Phil Bates, Bhanu Pittampally, Sumeet Joshi and published in Aug 2023. To get a glimpse of the book's contents, make sure to read the free chapter provided here, or if you want to unlock the full Packt digital library free for 7 days, try signing up now! To learn more, click on the button below. Keep Calm, Start Reading! 🌟 Secret Knowledge: AI/LLM ResourcesUnderstanding Liquid Neural Networks: A Primer on AI Advancements: In this post, you'll learn how liquid neural networks are transforming the AI landscape. These networks, inspired by the human brain, offer a unique and creative approach to problem-solving. They excel in complex tasks such as weather prediction, stock market analysis, and speech recognition. Unlike traditional neural networks, liquid neural networks require significantly fewer neurons, making them ideal for resource-constrained environments like autonomous vehicles. 
These networks excel in handling continuous data streams but may not be suitable for static data. They also provide better causality handling and interpretability. Navigating Generative AI with FMOps and LLMOps: A Practical Guide: In this informative post, you'll gain valuable insights into the world of generative AI and its operationalization using FMOps and LLMOps principles. The authors delve into the challenges businesses face when integrating generative AI into their operations. You'll explore the fundamental differences between traditional MLOps and these emerging concepts. The post outlines the roles various teams play in this process, from data engineers to data scientists, ML engineers, and product owners. The guide provides a roadmap for businesses looking to embrace generative AI. AI Compiler Quartet: A Breakdown of Cutting-Edge Technologies: Explore Microsoft’s groundbreaking "heavy-metal quartet" of AI compilers: Rammer, Roller, Welder, and Grinder. These compilers address the evolving challenges posed by AI models and hardware. Rammer focuses on optimizing deep neural network (DNN) computations, improving hardware parallel utilization. Roller tackles the challenge of memory partitioning and optimization, enabling faster compilation with good computation efficiency. Welder optimizes memory access, particularly vital as AI models become more memory-intensive. Grinder addresses complex control flow execution in AI computation. These AI compilers collectively offer innovative solutions for parallelism, compilation efficiency, memory, and control flow, shaping the future of AI model optimization and compilation.  💡 MasterClass: AI/LLM Tutorials Exploring IoT Data Simulation with ChatGPT and MQTTX: In this comprehensive guide, you'll learn how to harness the power of AI, specifically ChatGPT, and the MQTT client tool, MQTTX, to simulate and generate authentic IoT data streams. Discover why simulating IoT data is crucial for system verification, customer experience enhancement, performance assessment, and rapid prototype design. The article dives into the integration of ChatGPT and MQTTX, introducing the "Candidate Memory Bus" to streamline data testing. Follow the step-by-step guide to create simulation scripts with ChatGPT and efficiently simulate data transmission with MQTTX.  Revolutionizing Real-time Inference: SageMaker Unveils Streaming Support for Generative AI: Amazon SageMaker now offers real-time response streaming, transforming generative AI applications. This new feature enables continuous response streaming to clients, reducing time-to-first-byte and enhancing interactive experiences for chatbots, virtual assistants, and music generators. The post guides you through building a streaming web application using SageMaker real-time endpoints for interactive chat use cases. It showcases deployment options with AWS Large Model Inference (LMI) and Hugging Face Text Generation Inference (TGI) containers, providing a seamless, engaging conversation experience for users. Implementing Effective Guardrails for Large Language Models: Guardrails are crucial for maintaining trust in LLM applications as they ensure compliance with defined principles. This guide presents two open-source tools for implementing LLM guardrails: Guardrails AI and NVIDIA NeMo-Guardrails. Guardrails AI offers Python-based validation of LLM responses, using the RAIL specification. It enables developers to define output criteria and corrective actions, with step-by-step instructions for implementation. 
NVIDIA NeMo-Guardrails introduces Colang, a modeling language for flexible conversational workflows. The guide explains its syntax elements and event-driven design. Comparing the two, Guardrails AI suits simple tasks, while NeMo-Guardrails excels in defining advanced conversational guidelines. 🚀 HackHub: Trending AI Toolscabralpinto/modular-diffusion: Python library for crafting and training personalized Diffusion Models with PyTorch.  cofactoryai/textbase: Simplified Python chatbot development using NLP and ML with Textbase's on_message function in main.py. microsoft/BatteryML: Open-source ML tool for battery analysis, aiding researchers in understanding electrochemical processes and predicting battery degradation. facebookresearch/co-tracker: Swift transformer-based video tracker with Optical Flow, pixel-level tracking, grid sampling, and manual point selection. explodinggradients/ragas: Framework evaluates Retrieval Augmented Generation pipelines, enhancing LLM context with external data using research-based tools. 
article-image-chatgpt-for-exploratory-data-analysis-eda
Rama Kattunga
08 Sep 2023
9 min read
Save for later

ChatGPT for Exploratory Data Analysis (EDA)

Introduction

Exploratory data analysis (EDA) refers to the initial investigation of data to discover patterns, identify outliers and anomalies, test hypotheses, and check assumptions with the goal of informing future analysis and model building. It is an iterative, exploratory process of questioning, analyzing, and visualizing data.

Some key aspects of exploratory data analysis include:

Getting to know the data - Examining individual variables, their values, distributions, and relationships between variables.
Data cleaning - Checking and handling missing values, outliers, formatting inconsistencies, etc., before further analysis.
Univariate analysis - Looking at one variable at a time to understand its distribution, central tendency, spread, outliers, etc.
Bivariate analysis - Examining relationships between two variables using graphs, charts, and statistical tests. This helps find correlations.
Multivariate analysis - Analyzing patterns between three or more variables simultaneously using techniques like cluster analysis.
Hypothesis generation - Coming up with potential explanations or hypotheses about relationships in the data based on initial findings.
Data visualization - Creating graphs, plots, and charts to summarize findings and detect patterns and anomalies more easily.

The goals of EDA are to understand the dataset, detect useful patterns, formulate hypotheses, and make decisions on how to prepare/preprocess the data for subsequent modeling and analysis.

Why ChatGPT for EDA?

Exploratory data analysis is an important but often tedious process with challenges and pitfalls. Using ChatGPT saves hours on repetitive tasks: it handles preparatory data wrangling, exploration, and documentation, freeing you to focus on insights. Its capabilities will only grow through continued learning; soon, it may autonomously profile datasets and propose multiple exploratory avenues. ChatGPT is the perfect on-demand assistant for solo data scientists and teams seeking an effortless boost to the EDA process. The drawback is that ChatGPT can only handle small datasets directly; workarounds include working with smaller samples and having ChatGPT generate Python code that performs the necessary analysis outside the chat.

The following table lists common challenges and pitfalls during EDA:

Challenge/Pitfall - Details
Getting lost in the weeds - Spending too much time on minor details without focusing on the big picture. This leads to analysis paralysis.
Premature conclusions - Drawing conclusions without considering all possible factors or testing different hypotheses thoroughly.
Bias - Personal biases, preconceptions, or domain expertise can skew analysis in a particular direction.
Multiple comparisons - Testing many hypotheses without adjusting for Type 1 errors, leading to false discoveries.
Documentation - Failing to properly document methods, assumptions, and thought processes along the way.
Lack of focus - Jumping around randomly without a clear understanding of the business objective.
Ignoring outliers - Not handling outliers appropriately, which can distort analysis and patterns.
Correlation vs causation - Incorrectly inferring causation based only on observed correlations.
Overfitting - Finding patterns in sample data that may not generalize to new data.
Publication bias - Only focusing on publishable, significant, or "interesting" findings.
Multiple roles - Wearing both data analyst and subject expert hats, mixing subjective and objective analysis.
With ChatGPT, you get an AI assistant as your co-pilot on the journey of discovery. ChatGPT can support EDA at various stages of your data analysis, within the limits discussed earlier. The following table maps different stages of data analysis to example prompts (these prompts either generate the output directly or produce Python code for you to execute separately):

Type of EDA - Prompt
Summary Statistics - Describe the structure and summary statistics of this dataset. Check for any anomalies in variable distributions or outliers.
Univariate Analysis - Create histograms and density plots of each numeric variable to visualize their distributions and identify any unusual shapes or concentrations of outliers.
Bivariate Analysis - Generate a correlation matrix and heatmap to examine relationships between variables. Flag any extremely high correlations that could indicate multicollinearity issues.
Dimensionality Reduction - Use PCA to reduce the dimensions of this high-dimensional dataset and project it into 2D. Do any clusters or groupings emerge that provide new insights?
Clustering - Apply K-Means clustering on the standardized dataset with different values of k. Interpret the resulting clusters and check if they reveal any meaningful segments or categories.
Text Analysis - Summarize the topics and sentiments discussed in this text column using topic modeling algorithms like LDA. Do any dominant themes or opinions stand out?
Anomaly Detection - Implement an isolation forest algorithm on the dataset to detect outliers independently in each variable. Flag and analyze any suspicious or influential data points.
Model Prototyping - Quickly prototype different supervised learning algorithms like logistic regression, decision trees, and random forests on this classification dataset. Compare their performance and feature importance.
Model Evaluation - Generate a correlation matrix between predicted vs. actual values from different models. Any low correlations potentially indicate nonlinear patterns worth exploring further.
Report Generation - Autogenerate a Jupyter notebook report with key visualizations, findings, conclusions, and recommendations for next steps based on the exploratory analyses performed.

How do we feed data to ChatGPT for EDA?

Describe your dataset through natural language prompts, and ChatGPT instantly runs analyses to find hidden insights. No need to write code - let the AI do the heavy lifting! For this article, let's use the CSV file available at https://media.githubusercontent.com/media/datablist/sample-csv-files/main/files/organizations/organizations-1000.csv (http://tinyurl.com/mphebj4k).

Here are some example prompts for exploratory data analysis (a minimal API-based sketch of the first prompt follows this list):

Describe the structure and summary statistics of this CSV file: [Pasted URL or file contents]
What variable types are in this DataFrame? Import Pandas and show column data types.
Generate a correlation matrix and heatmap for these variables.
Check for missing values in each column and calculate the percentage missing.
Create a histogram to visualize the distribution of this numeric variable.
Compare the means of this categorical variable across groups using a boxplot.
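The article pastes rows directly into the chat UI. Purely as an illustration, the sketch below shows how the first prompt from the list could instead be sent programmatically with the openai Python package; the model name and the 20-row sample size are arbitrary choices, and the snippet assumes an OPENAI_API_KEY environment variable is set.

import openai
import pandas as pd

# The openai library picks up OPENAI_API_KEY from the environment
url = "https://media.githubusercontent.com/media/datablist/sample-csv-files/main/files/organizations/organizations-1000.csv"
df = pd.read_csv(url)

# Keep the payload small: send only the first rows as CSV text
sample_csv = df.head(20).to_csv(index=False)

prompt = (
    "Describe the structure and summary statistics of this CSV file. "
    "Check for any anomalies in variable distributions or outliers.\n\n" + sample_csv
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])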
In one example, I pasted two hundred rows of the file together with the first prompt, and ChatGPT responded with a detailed description of each column. Asking for the list of categorical and numerical variables in table format produces exactly that, and asking for a statistical summary of the numerical columns produces descriptive statistics for each of them.

Python code generation:

For certain aspects of EDA, ChatGPT produces Python code (importing the pandas library), and the result looks like this:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the CSV file from the URL into a DataFrame
url = "https://media.githubusercontent.com/media/datablist/sample-csv-files/main/files/organizations/organizations-1000.csv"
df = pd.read_csv(url)

# Display basic information about the DataFrame
print("Basic Info About the DataFrame:")
print(df.info())

# Display the first few rows of the DataFrame
print("\nFirst Few Rows of the DataFrame:")
print(df.head())

# Summary statistics of numerical columns
print("\nSummary Statistics of Numerical Columns:")
print(df.describe())

# Check for missing values
print("\nMissing Values:")
print(df.isnull().sum())

# Visualize data
# Example: Histogram of a numerical column (replace 'col_name' with the column name you want to plot)
# plt.hist(df['col_name'], bins=20)
# plt.xlabel('X-axis Label')
# plt.ylabel('Y-axis Label')
# plt.title('Histogram of col_name')
# plt.show()

# You can create more visualizations and explore relationships between columns as needed.

# Correlation matrix heatmap (for numerical columns)
correlation_matrix = df.corr()
plt.figure(figsize=(10, 8))
sns.heatmap(correlation_matrix, annot=True, cmap="coolwarm", fmt=".2f")
plt.title("Correlation Matrix Heatmap")
plt.show()

Running this in Spyder (the Anaconda IDE) produces the output without a single error.

EDA on large datasets with millions of records

As mentioned earlier in this article, ChatGPT is very powerful, but there are still practical limits on how much data it can take in directly. Here are a few things to keep in mind regarding its capabilities with large datasets (a short data-preparation sketch follows this list):

ChatGPT works best for datasets under 50-100 MB in size. It can handle some operations on larger files up to 1 GB, but performance will degrade.
For initial exploration of very large datasets, ChatGPT is still useful. It can quickly summarize dimensions, types, distributions, outliers, etc., to help shape hypotheses.
Advanced analytics like complex multi-variable modeling may not be feasible on the largest datasets directly in ChatGPT.
However, it can help with the data prep - filtering, aggregations, feature engineering, etc. - to reduce a large dataset into a more manageable sample for detailed analysis.
Integration with tools that can load large datasets directly (e.g., BigQuery, Spark, Redshift) allows ChatGPT to provide insights on files too big to import wholesale.
As AI capabilities continue advancing, future versions powered by more computing may be able to handle larger files for a broader set of analytics tasks.
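To make the data-preparation point above concrete, here is a small illustrative sketch (not from the original article) of reducing a large CSV to a compact summary with pandas before handing it to ChatGPT. The file name, chunk size, and choice of aggregates are hypothetical.

import pandas as pd

# Hypothetical large local file that is far too big to paste into a prompt
source_path = "organizations_millions_of_rows.csv"

# Stream the file in chunks and keep lightweight aggregates instead of raw rows
row_count = 0
numeric_sums = None
for chunk in pd.read_csv(source_path, chunksize=500_000):
    row_count += len(chunk)
    sums = chunk.select_dtypes("number").sum()
    numeric_sums = sums if numeric_sums is None else numeric_sums.add(sums, fill_value=0)

# A small sample plus the aggregates is usually enough context for ChatGPT
sample_rows = pd.read_csv(source_path, nrows=200)

print(f"Total rows: {row_count}")
if numeric_sums is not None:
    print("Numeric column means:")
    print((numeric_sums / row_count).to_string())
print(sample_rows.describe(include="all").to_string())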
Conclusion

ChatGPT revolutionizes exploratory data analysis by streamlining the process and making it accessible to a wider audience. EDA is crucial for understanding data, and ChatGPT automates tasks like generating statistics, visualizations, and even code, simplifying the process. ChatGPT's natural language interface enables users to interact with data using plain language, eliminating the need for extensive coding skills. While it excels in initial exploration and data preparation, it may have limitations with large datasets or complex modeling tasks.

ChatGPT is a valuable EDA companion, empowering data professionals to uncover insights and make data-driven decisions efficiently. Its role in data analytics is expected to expand as AI technology evolves, offering even more support for data-driven decision-making.

Author Bio

Rama Kattunga has been working with data for over 15 years at tech giants like Microsoft, Intel, and Samsung. As a geek and a business wonk with degrees from Kellogg and two technology degrees from India, Rama uses his engineering know-how and strategy savvy to get stuff done with analytics, AI, and unlocking insights from massive datasets. When he is not analyzing data, you can find Rama sharing his thoughts as an author, speaker, and digital transformation specialist. Moreover, Rama also finds joy in experimenting with cooking, using videos as his guide to create delicious dishes that he can share with others. This diverse range of interests and skills highlights his well-rounded and dynamic character. LinkedIn

article-image-getting-started-with-med-palm-2
07 Sep 2023
5 min read
Save for later

Getting Started with Med-PaLM 2

Introduction

Med-PaLM 2 is a large language model (LLM) from Google Research, designed for the medical domain. It is trained on a massive dataset of text and code, including medical journals, textbooks, and clinical trials. Med-PaLM 2 can answer questions about a wide range of medical topics, including diseases, treatments, and procedures. It can also generate text, translate languages, and write different kinds of creative content.

Use Cases

Med-PaLM 2 can be used for a variety of purposes in the healthcare industry, including:

Medical research: Med-PaLM 2 can be used to help researchers find and analyze medical data. It can also be used to generate hypotheses and test new ideas.
Clinical decision support: Med-PaLM 2 can be used to help doctors diagnose diseases and make treatment decisions. It can also be used to provide patients with information about their condition and treatment options.
Health education: Med-PaLM 2 can be used to create educational materials for patients and healthcare professionals. It can also be used to answer patients' questions about their health.
Drug discovery: Med-PaLM 2 can be used to help researchers identify new drug targets and develop new drugs.
Personalized medicine: Med-PaLM 2 can be used to help doctors personalize treatment for individual patients. It can do this by taking into account the patient's medical history, genetic makeup, and other factors.

How to Get Started

Med-PaLM 2 is currently available to a limited number of Google Cloud customers. To get started, you can visit the Google Cloud website (https://cloud.google.com/) and sign up for a free trial. Once you have a Google Cloud account, you can request access to Med-PaLM 2. Here are the steps for getting started with Med-PaLM:

1. Check if Med-PaLM is available in your country. Med-PaLM is currently only available in the following countries:

United States
Canada
United Kingdom
Australia
New Zealand
Singapore
India
Japan
South Korea

You can check the Med-PaLM website (https://sites.research.google/med-palm/) for the latest list of supported countries.

2. Create a Google Cloud Platform (GCP) account. Med-PaLM is a cloud-based service, so you will need to create a GCP account in order to use it. You can do this by going to the GCP website (https://cloud.google.com/) and clicking on the "Create Account" button.

3. Enable the Med-PaLM API. Once you have created a GCP account, you will need to enable the Med-PaLM API. You can do this by going to the API Library (https://console.cloud.google.com/apis/library) and searching for "Med-PaLM". Click on the "Enable" button to enable the API.

4. Create a Med-PaLM service account. A service account is a special type of account that can be used to access GCP resources. You will need to create a service account in order to use Med-PaLM. You can do this by going to the IAM & Admin page (https://console.cloud.google.com/iam-admin/) and clicking on the "Create Service Account" button.

5. Download the Med-PaLM credentials. Once you have created a service account, you will need to download the credentials. The credentials will be a JSON file that contains your service account's email address and private key. You can download the credentials by clicking on the "Download JSON" button. (A short sketch of loading such a key file in Python follows this step.)
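The Med-PaLM client shown in step 7 takes the key file path directly, so nothing extra is required here. Purely as an aside, and assuming the standard google-auth package is installed, a downloaded service-account JSON key can be loaded and inspected in Python like this (the file name is a placeholder):

from google.oauth2 import service_account

# Placeholder path to the JSON key downloaded in step 5
credentials = service_account.Credentials.from_service_account_file(
    "med-palm-service-account.json"
)

# The service account email embedded in the key file
print(credentials.service_account_email)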
6. Set up the Med-PaLM client library. Client libraries are available for a variety of programming languages, and you will need to install the one for the language you are using. You can find the client libraries on the Med-PaLM website: https://sites.research.google/med-palm/.

7. Initialize the Med-PaLM client. Once you have installed the client library, you can initialize the Med-PaLM client. The client will need your service account's email address and private key in order to authenticate with Med-PaLM. You can initialize the client using the following code:

import medpalm

client = medpalm.Client(
    email="your_service_account_email_address",
    key_file="your_service_account_private_key.json"
)

8. Start using Med-PaLM! Once you have initialized the Med-PaLM client, you can start using it to access Med-PaLM's capabilities. For example, you can use Med-PaLM to answer medical questions, generate text, and translate languages.

Key Features

Med-PaLM 2 has a number of key features that make it a valuable tool for the healthcare industry. These features include:

Accuracy: Med-PaLM 2 is highly accurate in answering medical questions. It has been shown to achieve an accuracy of 85% on a variety of medical question answering datasets.
Expertise: Med-PaLM 2 is trained on a massive dataset of medical text and code. This gives it a deep understanding of medical concepts and terminology.
Versatility: Med-PaLM 2 can be used for a variety of purposes in the healthcare industry. It can answer questions, generate text, translate languages, and write different kinds of creative content.
Scalability: Med-PaLM 2 is scalable and can be used to process large amounts of data. This makes it a valuable tool for research and clinical applications.

Conclusion

Med-PaLM 2 is a powerful LLM that has the potential to revolutionize the healthcare industry. It can be used to improve medical research, clinical decision support, health education, drug discovery, and personalized medicine. Med-PaLM 2 is still under development, but it has already demonstrated the potential to make a significant impact on healthcare.

article-image-getting-started-with-gemini-ai
Packt
07 Sep 2023
2 min read
Save for later

Getting Started with Gemini AI

Introduction Gemini AI is a large language model (LLM) being developed by Google DeepMind. It is still under development, but it is expected to be more powerful than ChatGPT, the current state-of-the-art LLM. Gemini AI is being built on the technology and techniques used in AlphaGo, an early AI system developed by DeepMind in 2016. This means that Gemini AI is expected to have strong capabilities in planning and problem-solving. Gemini AI is a powerful tool that has the potential to be used in a wide variety of applications. Some of the potential use cases for Gemini AI include: Chatbots: Gemini AI could be used to create more realistic and engaging chatbots. Virtual assistants: Gemini AI could be used to create virtual assistants that can help users with tasks such as scheduling appointments, making reservations, and finding information. Content generation: Gemini AI could be used to generate creative content such as articles, blog posts, and scripts. Data analysis: Gemini AI could be used to analyze large datasets and identify patterns and trends. Medical diagnosis: Gemini AI could be used to assist doctors in diagnosing diseases. Financial trading: Gemini AI could be used to make trading decisions. How Gemini AI works Gemini AI is a neural network that has been trained on a massive dataset of text and code. This dataset includes books, articles, code repositories, and other forms of text. The neural network is able to learn the patterns and relationships between words and phrases in this dataset. This allows Gemini AI to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. How to use Gemini AI Gemini AI is not yet available to the public, but it is expected to be released in the future. When it is released, it will likely be available through a cloud-based API. This means that developers will be able to use Gemini AI in their own applications. To use Gemini AI, developers will need to first create an account and obtain an API key. Once they have an API key, they can use it to call the Gemini AI API. The API will allow them to interact with Gemini AI and use its capabilities. Here are some steps on how to install or get started with Gemini AI: Go to the Gemini AI website and create an account: Once you have created an account, you will be given an API key. Install the Gemini AI client library for your programming language. In your code, import the Gemini AI client library and initialize it with your API key. Call the Gemini AI API to generate text, translate languages, write different kinds of creative content, or answer your questions in an informative way. For more detailed instructions on how to install and use Gemini AI, please refer to the Gemini AI documentation. The future of Gemini AI Gemini AI is still under development, but it has the potential to revolutionize the way we interact with computers. In the future, Gemini AI could be used to create more realistic and engaging chatbots, virtual assistants, and other forms of AI-powered software. Gemini AI could also be used to improve our understanding of the world around us by analyzing large datasets and identifying patterns and trends. Conclusion Gemini AI is a powerful tool that has the potential to be used in a wide variety of applications. It is still under development, but it has the potential to revolutionize the way we interact with computers. 
In the future, Gemini AI could be used to create more realistic and engaging chatbots, virtual assistants, and other forms of AI-powered software. Gemini AI could also be used to improve our understanding of the world around us by analyzing large datasets and identifying patterns and trends.  

article-image-introduction-to-gen-ai-studio
Anubhav Singh
07 Sep 2023
6 min read
Save for later

Introduction to Gen AI Studio

In this article, we'll explore the basics of Generative AI Studio and how to run a language model within this suite with a practical example.

Generative AI Studio is the all-encompassing offering of generative AI-based services on Google Cloud. It includes models of different types, allowing users to generate content that may be text, image, or audio. On Generative AI Studio, or Gen AI Studio, users can rapidly prototype and test different types of prompts associated with the different types of models to figure out which parameters and settings work best for their use cases. Then, they can easily shift the tested configurations to the code bases of their solutions. Model Garden, on the other hand, provides a collection of foundation and customized generative AI models which can be used directly as models in code or as APIs. The foundation models are based on models trained by Google themselves, whereas the fine-tuned/task-specific models include models developed and trained by third parties.

Gen AI Studio

Packaged within Vertex AI, Generative AI Studio on Google Cloud Platform provides low-code solutions for developing and testing invocations of Google's AI models that can then be used within customers' solutions. As of August 2023, the following solutions are part of Generative AI Studio:

Language: Models used to generate text-based responses. These may generate answers to questions, perform classification, recognize sentiment, or do anything else that involves text understanding.
Vision: Models used to generate images and visual content in different drawing styles.
Speech: Models that perform either speech-to-text or text-to-speech conversion.

Let's explore each of these in detail. The language models in Gen AI Studio are based on the PaLM 2 for Text models and currently come as either "text-bison" or "chat-bison". The first is the base model, which can perform any kind of task related to text understanding and generation. "Chat-bison" models, on the other hand, focus on providing a conversational interface for interacting with the model, so they are more suitable for tasks that require a back-and-forth conversation between the user and the model. There's another form of the PaLM 2 models available as "code-bison", which powers the Codey product suite and deals with programming languages instead of human languages.

Let's take a look at how we can use a language model in Gen AI Studio. Follow the steps below:

1. First, head over to https://console.cloud.google.com/vertex-ai/generative in your browser with a billing-enabled Google Cloud account. You will see the Generative AI Studio dashboard.

2. Next, click "Open" in the card titled "Language".

3. Then, click on "Text Prompt" to open the prompt builder interface. The interface should look similar to the image below; however, being an actively developed product, it may change in several ways in the future.

4. Now, let us write a prompt. For our example, we'll instruct the model to fact-check whatever is passed to it. Here's a sample prompt:

You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User:
5. Now, write the user's part as well and hit the Submit button. The last line of the prompt would now be:

User: I am eating an apple.

6. Observe the response. Then, change the user's part to "I am an apple" and "I am a human", and observe the response in each case. The following output table is expected:

Once we're satisfied with the model responses to our prompt, we can shift the model invocation to code. In our example, we'll do it in Google Colaboratory. Follow the steps below:

1. Open Google Colaboratory by visiting https://colab.research.google.com/.

2. In the first cell, install the required libraries for using Gen AI Studio models:

%%capture
!pip install "shapely<2.0.0"
!pip install google-cloud-aiplatform --upgrade

3. Next, authenticate the Colab notebook so it can access the Google Cloud resources available to the currently logged-in user:

from google.colab import auth as google_auth
google_auth.authenticate_user()

4. Then import the required libraries:

import vertexai
from vertexai.language_models import TextGenerationModel

5. Now, instantiate the Vertex AI client to work with the project. Take care to replace PROJECT_ID with your own project's ID on Google Cloud:

vertexai.init(project=PROJECT_ID, location="us-central1")

6. Let us now set the configuration the model will use while answering our prompts and initialize the model client:

parameters = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 0,
    "top_p": 0.8,
    "top_k": 40
}
model = TextGenerationModel.from_pretrained("text-bison@001")

7. Now, we can call the model and print the response:

response = model.predict(
    """You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User: I am a human""",
    **parameters
)
print(f"Response from Model: {response.text}")

You can work with the other models available in Gen AI Studio in a similar way. In this notebook, we've provided one example each of Language, Vision, and Speech model usage: GenAIStudio&ModelGarden.ipynb

Author Bio

Anubhav Singh, Co-founder of Dynopii and a Google Developer Expert in Google Cloud, has been a seasoned developer since the pre-Bootstrap era, with extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." He co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing System Architecture, Machine Learning, and Web technologies.
article-image-chatgpt-for-healthcare
Amita Kapoor
05 Sep 2023
9 min read
Save for later

ChatGPT for Healthcare

Introduction

Meet ChatGPT: OpenAI's marvelously verbose chatbot, trained on a veritable Everest of text and code. Think of it as your go-to digital polymath, fluent in language translation, a whiz at whipping up creative content, and ever-eager to dispense knowledge on everything from quantum physics to quinoa recipes. Ready to dial in the healthcare lens? This article is your rollercoaster ride through the trials, triumphs, and tangled ethical conundrums of ChatGPT in medicine. From game-changing potential to challenges as stubborn as symptoms, we've got it all. So whether you're a seasoned healthcare pro or a tech-savvy newbie, buckle up. Will ChatGPT be healthcare's new MVP or get benched? Stick around, and let's find out together.

Doctor in Your Pocket? Unpacking the Potential of ChatGPT in Healthcare

Modern healthcare always seeks innovation to make things smoother and more personal. Enter ChatGPT. While not a stand-in for a doctor, this text-based AI is causing ripples from customer service to content. Below are various scenarios where ChatGPT can be leveraged in its original form or through fine-tuned APIs.

Pre-Consultation Screeners - ChatGPT-Enabled Triage

Before conversational AI, healthcare looked into computational diagnostic aids like the 1960s' Dendral, initially built for mass spectrometry, which inspired later medical systems. The 1970s brought MYCIN, designed for diagnosing bacterial infections and suggesting antibiotics. However, these early systems used inflexible "if-then" rules and lacked adaptability for nuanced human interaction. Fast-forward to today's more sophisticated digital triage platforms, and we still find remnants of these rule-based systems. While significantly more advanced, many of these platforms operate within the bounds of scripted pathways, leading to linear and often inflexible patient interactions. This rigidity can result in an inadequate capture of patient nuances, a critical aspect often needed for effective medical triage.

The ChatGPT Advantage: Flexibility and Natural Engagement

ChatGPT is a conversational agent with the capacity for more flexible, natural interactions thanks to its advanced Natural Language Understanding (NLU). Unlike conventional screeners with limited predefined pathways, ChatGPT can adapt to a broader range of patient inputs, making the pre-consultation phase more dynamic and patient-centric. It offers:

Adaptive Questioning: Unlike traditional systems that follow a strict query pathway, ChatGPT can adapt its questions based on prior patient responses, potentially unearthing critical details.
Contextual Understanding: Its advanced NLU allows it to understand colloquial language, idioms, and contextual cues that more rigid systems may miss.
Data Synthesis: ChatGPT's ability to process and summarise information can result in a more cohesive pre-consultation report for healthcare providers, aiding in a more effective diagnosis and treatment strategy.

Using LLM bots like ChatGPT offers a more dynamic, flexible, and engaging approach to pre-consultation screening, optimising patient experience and healthcare provider efficacy. Below is sample code that you can use to play around:

import openai
import os

# Initialize the OpenAI API client
api_key = os.environ.get("OPENAI_API_KEY")  # Retrieve the API key from environment variables
openai.api_key = api_key  # Set the API key

# Prepare the list of messages
messages = [
    {"role": "system", "content": "You are a pre-consultation healthcare screener. Assist the user in gathering basic symptoms before their doctor visit."},
    {"role": "user", "content": "I've been feeling exhausted lately and have frequent headaches."}
]

# API parameters
model = "gpt-3.5-turbo"  # Choose the appropriate model
max_tokens = 150  # Limit the response length

# Make the API call
response = openai.ChatCompletion.create(
    model=model,
    messages=messages,
    max_tokens=max_tokens
)

# Extract and print the chatbot's reply
chatbot_reply = response['choices'][0]['message']['content']
print("ChatGPT: ", chatbot_reply)

And here is the ChatGPT response:

Mental Health Companionship

The escalating demand for mental health services has increased focus on employing technology as supplemental support. While it is imperative to clarify that ChatGPT is not a substitute for qualified mental health practitioners, the platform can serve as an initial point of contact for individuals experiencing non-critical emotional distress or minor stress and anxiety. Utilizing advanced NLU and fine-tuned algorithms, ChatGPT provides an opportunity for immediate emotional support, particularly during non-operational hours when traditional services may be inaccessible. ChatGPT can be fine-tuned to handle the sensitivities inherent in mental health discussions, thereby adhering to ethically responsible boundaries while providing immediate, albeit preliminary, support.

ChatGPT offers real-time text support, serving as a bridge to professional help. Its advanced NLU understands emotional nuances, enabling personalized interactions. Beyond this, ChatGPT recommends vetted mental health resources and coping techniques. For instance, if you're anxious outside clinical hours, it suggests immediate stress management tactics. And if you're hesitant about professional consultation, ChatGPT helps guide and reassure your decision.

Let us now see how, by just changing the prompt, we can reuse the triage code above to build a mental health companion:

messages = [
    {
        "role": "system",
        "content": "You are a virtual mental health companion. Your primary role is to provide a supportive environment for the user. Listen actively, offer general coping strategies, and identify emotional patterns or concerns. Remember, you cannot replace professional mental health care, but can act as an interim resource. Always prioritise the user's safety and recommend seeking professional help if the need arises. Be aware of various emotional and mental scenarios, from stress and anxiety to deeper emotional concerns. Remain non-judgmental, empathetic, and consistently supportive."
    },
    {
        "role": "user",
        "content": "I've had a long and stressful day at work. Sometimes, it just feels like everything is piling up and I can't catch a break. I need some strategies to unwind and relax."
    }
]

And here is the golden advice from ChatGPT:

Providing immediate emotional support and resource guidance can be a preliminary touchpoint for those dealing with minor stress and anxiety, particularly when conventional support mechanisms are unavailable.

Virtual Health Assistants

In the evolving healthcare landscape, automation and artificial intelligence (AI) are increasingly being leveraged to enhance efficiency and patient care. One such application is the utilization of Virtual Health Assistants, designed to manage administrative overhead and provide informational support empathetically.
The integration of ChatGPT via OpenAI's API into telehealth platforms marks a significant advancement in this domain, offering capabilities far surpassing traditional rule-based or keyword-driven virtual assistants. ChatGPT boasts a customizable framework ideal for healthcare, characterized by its contextual adaptability for personalized user experiences, vast informational accuracy, and multi-functional capability that interfaces with digital health tools while upholding medical guidelines. In contrast, traditional Virtual Health Assistants, reliant on rule-based systems, suffer from scalability issues, rigid interactions, and a narrow functional scope. ChatGPT stands out by simplifying medical jargon, automating administrative chores, and ensuring a seamless healthcare journey, bridging pre-consultation to post-treatment by synthesizing data from diverse health platforms.

Now, let's explore how tweaking the prompt allows us to repurpose the previous code to create a virtual health assistant:

messages = [
    {
        "role": "system",
        "content": "You are a Virtual Health Assistant (VHA). Your primary function is to assist users in navigating the healthcare landscape. Offer guidance on general health queries, facilitate appointment scheduling, and provide informational insights on medical terminologies. While you're equipped with a broad knowledge base, it's crucial to remind users that your responses are not a substitute for professional medical advice or diagnosis. Prioritise user safety, and when in doubt, recommend that they seek direct consultation from healthcare professionals. Be empathetic, patient-centric, and uphold the highest standards of medical data privacy and security in every interaction."
    },
    {
        "role": "user",
        "content": "The doctor has recommended an Intestinal Perforation Surgery for me, scheduled for Sunday. I'm quite anxious about it. How can I best prepare mentally and physically?"
    }
]

Straight from ChatGPT's treasure trove of advice:

So there you have it. Virtual Health Assistants might not have a medical degree, but they offer the next best thing: a responsive, informative, and competent digital sidekick to guide you through the healthcare labyrinth, leaving doctors free to focus on what really matters: your health.

Key Contributions

Patient Engagement: Utilising advanced Natural Language Understanding (NLU) capabilities, ChatGPT can facilitate more nuanced and personalised interactions, thus enriching the overall patient experience.
Administrative Efficiency: ChatGPT can significantly mitigate the administrative load on healthcare staff by automating routine tasks such as appointment scheduling and informational queries.
Preventative Measures: While not a diagnostic tool, ChatGPT's capacity to provide general health information and recommend further professional consultation can contribute indirectly to early preventative care.

Potential Concerns and Solutions

Data Security and Privacy: ChatGPT, in its current form, does not fully meet healthcare data security requirements. Solution: for HIPAA compliance, advanced encryption and secure API interfaces must be implemented.
Clinical Misinformation: While ChatGPT can provide general advice, there are limitations to the clinical validity of its responses. Solution: it is critical that all medical advice provided by ChatGPT is cross-referenced with up-to-date clinical guidelines and reviewed by medical professionals for accuracy.
Ethical Considerations: The impersonal nature of a machine providing health-related advice could potentially result in a lack of emotional sensitivity. Solution: ethical guidelines must be established for the algorithm, possibly integrating a 'red flag' mechanism that alerts human operators when sensitive or complex issues arise that require a more nuanced touch.

Conclusion

ChatGPT, while powerful, isn't a replacement for the expertise of healthcare professionals. Instead, it serves as an enhancing tool within the healthcare sector. Beyond aiding professionals, ChatGPT can increase patient engagement, reduce administrative burdens, and help in preliminary health assessments. Its broader applications include transcribing medical discussions, translating medical information across languages, and simplifying complex medical terms for better patient comprehension. For medical training, it can mimic patient scenarios, aiding in skill development. Furthermore, ChatGPT can assist in research by navigating medical literature, conserving crucial time. However, its capabilities should always be seen as complementary, never substituting for the invaluable care provided by healthcare professionals.

Author Bio

Amita Kapoor is an accomplished AI consultant and educator with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita retired early and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. After her retirement, Amita founded NePeur, a company providing data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.

article-image-ai-distilled-15-openai-unveils-chatgpt-enterprise-code-llama-by-meta-vulcansql-from-hugging-face-microsofts-algorithm-of-thoughts-google-deepminds-synthid
Merlyn Shelley
31 Aug 2023
14 min read
Save for later

AI_Distilled #15: OpenAI Unveils ChatGPT Enterprise, Code Llama by Meta, VulcanSQL from Hugging Face, Microsoft's "Algorithm of Thoughts”, Google DeepMind's SynthID

👋 Hello ,“[AI] will touch every sector, every industry, every business function, and significantly change the way we live and work..this isn’t just the future. We are already starting to experience the benefits right now. As a company, we’ve been preparing for this moment for some time.” -Sundar Pichai, CEO, Google Speaking at the ongoing Google Cloud Next conference, Pichai emphasized how AI is the future, and it’s here already.   Step into the future with AI_Distilled#15, showcasing the breakthroughs in AI/ML, LLMs, NLP, GPT, and Generative AI, as we talk about Nvidia reporting over 100% increase in sales amid high demand for AI chips, Meta introducing Code Llama: a breakthrough in AI-powered coding assistance, OpenAI introducing ChatGPT Enterprise for businesses, Microsoft’s promising new "Algorithm of Thoughts" to enhance AI reasoning, and Salesforce's State of the Connected Customer Report which shows how businesses are facing AI trust gap with customers. Looking for fresh knowledge resources and tutorials? We’ve got your back! Look out for our curated collection of posts on how to use Code Llama, mitigating hallucination in LLMs, Google’s: Region-Aware Pre-Training for Open-Vocabulary Object Detection with Vision Transformers, and making data queries with Hugging Face's VulcanSQL.  We’ve also handpicked some great GitHub repos for you to use on your next AI project! What do you think of this issue and our newsletter? Please consider taking the short survey below to share your thoughts and you will get a free PDF of the “The Applied Artificial Intelligence Workshop” eBook upon completion. Complete the Survey. Get a Packt eBook for Free!Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  Cheers,  Merlyn Shelley  Editor-in-Chief, Packt   ⚡ TechWave: AI/GPT News & Analysis OpenAI Introduces ChatGPT Enterprise: AI Solution for Businesses: OpenAI has unveiled ChatGPT Enterprise with advanced features. The enterprise-grade version offers enhanced security, privacy, and access to the more powerful GPT-4 model. It includes unlimited usage of GPT-4, higher-speed performance, longer context windows for processing lengthier inputs, advanced data analysis capabilities, customization options, and more, targeting improved productivity, customized workflows, and secure data management. Meta Introduces Code Llama: A Breakthrough in AI-Powered Coding Assistance: Code Llama is a cutting-edge LLM designed to generate code based on text prompts and is tailored for code tasks and offers the potential to enhance developer productivity and facilitate coding education. Built on Llama 2, Code Llama comes in different models, including the foundational code model, Python-specialized version, and an instruct variant fine-tuned for understanding natural language instructions. The models outperformed existing LLMs on code tasks and hold promise for revolutionizing coding workflows while adhering to safety and responsible use guidelines. Nvidia Reports Over 100% Increase in Sales Amid High Demand for AI Chips: Nvidia has achieved record-breaking sales, more than doubling its revenue to over $13.5 billion for the quarter ending in June. The company anticipates further growth in the current quarter and plans to initiate a stock buyback of $25 billion. Its stock value soared by more than 6.5% in after-hours trading, bolstering its substantial gains this year. 
Nvidia's data center business, which includes AI chips, fueled its strong performance, with revenue surpassing $10.3 billion, driven by cloud computing providers and consumer internet firms adopting its advanced processors. With a surge in its market value, Nvidia joined the ranks of trillion-dollar companies alongside Apple, Microsoft, Alphabet, and Amazon.

Businesses Facing AI Trust Gap with Customers, Reveals Salesforce's State of the Connected Customer Report: Salesforce's sixth edition of the State of the Connected Customer report highlights a growing concern among businesses about an AI trust gap with their customers. The survey, conducted across 25 countries with over 14,000 consumers and business buyers, indicates that as companies increasingly adopt AI to enhance efficiency and meet customer expectations, nearly three-quarters of their customers are worried about unethical AI use. Consumer receptivity to AI has also decreased over the past year, urging businesses to address this gap by implementing ethical guidelines and providing transparency into AI applications.

Microsoft Introduces "Algorithm of Thoughts" to Enhance AI Reasoning: Microsoft has unveiled a novel AI training method called the "Algorithm of Thoughts" (AoT), aimed at enhancing the reasoning abilities of large language models like ChatGPT by combining human-like cognition with algorithmic logic. This new approach leverages "in-context learning" to guide language models through efficient problem-solving paths, resulting in faster and less resource-intensive solutions. The technique outperforms previous methods and can even surpass the algorithm it is based on.

Google's Duet AI Expands Across Google Cloud with Enhanced Features: Google's Duet AI, a suite of generative AI capabilities for tasks like text summarization and data organization, is expanding its reach to various products and services within the Google Cloud ecosystem. The expansion includes assisting with code refactoring, offering guidance on infrastructure configuration and deployment in the Google Cloud Console, writing code in Google's dev environment Cloud Workstations, generating flows in Application Integration, and more. It also integrates generative AI advancements into the security product line.

OpenAI Collaborates with Scale to Enhance Enterprise Model Fine-Tuning Support: OpenAI has entered into a partnership with Scale to provide expanded support for enterprises seeking to fine-tune advanced models. Recognizing the demand for high performance and customization in AI deployment, OpenAI introduced fine-tuning for GPT-3.5 Turbo and plans to extend it to GPT-4. This feature empowers companies to customize advanced models with proprietary data, enhancing their utility (a minimal sketch of starting such a fine-tuning job follows at the end of this section). OpenAI assures that customer data remains confidential and is not utilized to train other models.

Google DeepMind Introduces SynthID: A Tool to Identify AI-Generated Images: In response to the growing prevalence of AI-generated images that can be indistinguishable from real ones, Google DeepMind, working with Google Cloud, has unveiled SynthID, a beta tool that watermarks and identifies images created with Google's Imagen model. The technology seamlessly embeds a digital watermark into the pixels of an image, allowing for imperceptible yet detectable identification. This tool is a step towards responsible use of generative AI and enhances the capacity to identify manipulated or fabricated images.
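For readers who want to try the GPT-3.5 Turbo fine-tuning mentioned above, here is a minimal sketch using the pre-1.0 openai Python package. The file name, training data, and status check are illustrative assumptions on our part and not part of OpenAI's announcement.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Upload a JSONL file of chat-formatted training examples (the file name is an assumption).
training_file = openai.File.create(
    file=open("fine_tune_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the base model named in the announcement.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# The job runs asynchronously; poll its status until it reports success.
print(openai.FineTuningJob.retrieve(job.id).status)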
✨ Unleashing the Power of Causal Reasoning with LLMs: Join Aleksander Molak on October 11th and be a part of Packt's most awaited event of 2023 on Generative AI! In AI's evolution, a big change is coming. It's all about Causally Aware Prompt Engineering, and you should pay attention because it's important. LLMs are good at recognizing patterns, but what if they could do more? That's where causal reasoning comes in. It's about understanding not just what's connected, but why.

Let's distill the essence:
- LLMs can outperform causal discovery algorithms on some tasks
- GPT-4 achieves near-human performance on some counterfactual benchmarks
- This might be the case because the models simply memorize the data, but it's also possible that they build a meta-SCM (meta structural causal model) based on the correlations of causal facts learned from the data
- LLMs can reason causally if we allow them to intervene at test time
- LLMs do not reason very well when we provide them with a verbal description of conditional independence structures in the data (but neither do most humans)

Now, catalyze your journey with three simple techniques:

Causal Effect Estimation: A causal effect estimate aims to capture the strength of the (expected) change in the outcome variable when we modify the value of the treatment by one unit. In practice, almost any machine learning algorithm can be used for this purpose, yet in most cases we need to use these algorithms in a way that differs from the classical machine learning flow.

Confronting Confounding: The main challenge (though not the only one) in estimating causal effects from observational data comes from confounding. A confounder is a variable in the system of interest that produces a spurious relationship between the treatment and the outcome. Spurious relationships are a kind of illusion. Interestingly, you can observe spurious relationships not only in the recorded data, but also in the real world.

Unveiling De-confounding: To obtain an unbiased estimate of the causal effect, we need to get rid of confounding. At the same time, we need to be careful not to introduce confounding ourselves! This usually boils down to controlling for the right subset of variables in your analysis. Not too small, not too large.

If you're intrigued by this, I invite you to join me for an in-depth exploration of this fascinating topic at Packt's upcoming Generative AI conference on October 11th. During my power-talk, we'll delve into the question: Can LLMs learn causally? REGISTER NOW at Early Bird discounted pricing! *Free eBook on Registration: Modern Generative AI with ChatGPT and OpenAI Models

🔮 Expert Insights from Packt Community

The Regularization Cookbook - By Vincent Vandenbussche

Regularization serves as a valuable approach to enhance the success rate of ML models in production. Effective regularization techniques can prevent AI recruitment models from exhibiting gender biases, either by eliminating certain features or incorporating synthetic data. Additionally, proper regularization enables chatbots to maintain an appropriate level of sensitivity toward new tweets. It also equips models to handle edge cases and previously unseen data proficiently, even when trained on synthetic data.

Key concepts of regularization

Let us now delve into a more precise definition and explore key concepts that enable us to better comprehend regularization.

Bias and variance

Bias and variance are two key concepts when talking about regularization.
We can define two main kinds of errors a model can have:
- Bias is how bad a model is at capturing the general behavior of the data
- Variance is how bad a model is at being robust to small input data fluctuations

Let's describe those four cases:
- High bias and low variance: The model is hitting away from the center of the target, but in a very consistent manner
- Low bias and high variance: The model is, on average, hitting the center of the target, but is quite noisy and inconsistent in doing so
- High bias and high variance: The model is hitting away from the center in a noisy way
- Low bias and low variance: The best of both worlds – the model is hitting the center of the target consistently

The above content is extracted from the book The Regularization Cookbook by Vincent Vandenbussche, published in July 2023. To get a glimpse of the book's contents, make sure to read the free chapter provided here, or if you want to unlock the full Packt digital library free for 7 days, try signing up now! To learn more, click on the button below. Keep Calm, Start Reading!

🌟 Secret Knowledge: AI/LLM Resources

Google's RO-ViT: Region-Aware Pre-Training for Open-Vocabulary Object Detection with Vision Transformers: Google's research scientists have unveiled a new method called "RO-ViT" that enhances open-vocabulary object detection using vision transformers. Learn how the technique addresses limitations in existing pre-training approaches for vision transformers, which struggle to fully leverage the concept of objects or regions during pre-training. RO-ViT introduces a novel approach called "cropped positional embedding" that aligns better with region-level tasks.

Tiered AIOps: Enhancing Cloud Platform Management with AI: Explore the concept of Tiered AIOps to manage complex cloud platforms. The ever-changing nature of cloud applications and infrastructure presents challenges for complete automation, requiring a tiered approach to combine AI and human intervention. The concept involves dividing operations into tiers, each with varying levels of automation and human expertise. Tier 1 incorporates routine operations automated by AI, Tier 2 empowers non-expert operators with AI assistance, and Tier 3 engages expert engineers for complex incidents.

Effective AI-Agent Interaction: SERVICE Principles Unveiled: In this post, you'll learn how to design AI agents that can interact seamlessly and effectively with users, aiming to transition from self-service to "agent-service." The author introduces the concept of autonomous AI agents capable of performing tasks on users' behalf and offers insights into their potential applications. The SERVICE principles, rooted in customer service and hospitality practices, are presented as guidelines for designing agent-user interactions. These principles encompass key aspects like salient responses, explanatory context, reviewable inputs, vaulted information, indicative guidance, customization, and empathy.

How to Mitigate Hallucination in Large Language Models: In this article, researchers delve into the persistent challenge of hallucination in generative LLMs. The piece explores the reasons behind LLMs generating nonsensical or non-factual responses, and the potential consequences for system reliability. The focus is on practical approaches to mitigate hallucination, including adjusting the temperature parameter, employing thoughtful prompt engineering, and incorporating external knowledge sources.
The authors conduct experiments to evaluate different methods, such as Chain of Thoughts, Self-Consistency, and Tagged Context Prompts.

💡 MasterClass: AI/LLM Tutorials

How to Use Code Llama: A Breakdown of Features and Usage: Code Llama has made a significant stride in code-related tasks, offering an open-access suite of models specialized for code-related challenges. This release includes various notable components, such as integration within the Hugging Face ecosystem, Transformers integration, text generation inference, and inference endpoints. Learn how these models showcase remarkable performance across programming languages, enabling enhanced code understanding, completion, and infilling (a minimal loading sketch appears after the HackHub list below).

Make Data Queries with Hugging Face's VulcanSQL: In this post, you'll learn how to utilize VulcanSQL, an open-source data API framework, to streamline data queries. VulcanSQL integrates Hugging Face's powerful inference capabilities, allowing data professionals to swiftly generate and share data APIs without extensive backend knowledge. By incorporating Hugging Face's Inference API, VulcanSQL enhances the efficiency of query processes. The framework's HuggingFace Table Question Answering Filter offers a unique solution by leveraging pre-trained AI models for NLP tasks.

Exploring Metaflow and Ray Integration for Supercharged ML Workflows: Explore the integration of Metaflow, an extensible ML orchestration framework, with Ray, a distributed computing framework. This collaboration leverages AWS Batch and Ray for distributed computing, enhancing Metaflow's capabilities. Learn how this integration empowers Metaflow users to harness Ray's features within their workflows. The article also delves into the challenges faced, the technical aspects of the integration, and real-world test cases, offering valuable insights into building efficient ML workflows using these frameworks.

Explore Reinforcement Learning Through Solving Leetcode Problems: Explore how reinforcement learning principles can be practically grasped by solving a Leetcode problem. The article centers around the "Shortest Path in a Grid with Obstacles Elimination" problem, where an agent aims to find the shortest path from a starting point to a target in a grid with obstacles, considering the option to eliminate a limited number of obstacles. Explore the foundations of reinforcement learning, breaking down terms like agent, environment, state, and reward system. The author provides code examples and outlines how a Q-function is updated through iterations.

🚀 HackHub: Trending AI Tools

apple/ml-fastvit: Introduces a rapid hybrid ViT empowered by structural reparameterization for efficient vision tasks.

openchatai/opencopilot: A personal AI copilot repository that seamlessly integrates with APIs and autonomously executes API calls using LLMs, streamlining developer tasks and enhancing efficiency.

neuml/txtai: An embeddings database for advanced semantic search, LLM orchestration, and language model workflows featuring vector search, multimodal indexing, and flexible pipelines for text, audio, images, and more.

Databingo/aih: Interact with AI models via terminal (Bard, ChatGPT, Claude2, and Llama2) to explore diverse AI capabilities directly from your command line.

osvai/kernelwarehouse: Optimizes dynamic convolution by redefining kernel concepts, improving parameter dependencies, and increasing convolutional efficiency.

morph-labs/rift: Open-source AI-native infrastructure for IDEs, enabling collaborative AI software engineering.
mr-gpt/deepeval: Python-based solution for offline evaluations of LLM pipelines, simplifying the transition to production. 
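As referenced in the Code Llama tutorial entry above, here is a minimal sketch of loading one of the released checkpoints through the Hugging Face transformers library and generating a code completion. The 7B checkpoint name, prompt, and generation settings are assumptions for illustration; the Python and Instruct variants follow the same pattern.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed checkpoint name on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Greedy completion of a function signature; Code Llama continues the code it is given.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))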

LLM Pitfalls and How to Avoid Them

Amita Kapoor & Sharmistha Chatterjee
31 Aug 2023
13 min read
Introduction

Large Language Models, or LLMs, are machine learning algorithms that focus on understanding and generating human-like text. These advanced developments have significantly impacted the field of natural language processing, impressing us with their capacity to produce cohesive and contextually appropriate text. However, navigating the terrain of LLMs requires vigilance, as there exist pitfalls that may trap the unprepared.

In this article, we will uncover the nuances of LLMs and discover practical strategies for evading their potential pitfalls. From misconceptions surrounding their capabilities to the subtleties of bias pervading their outputs, we shed light on the intricate underpinnings beyond their impressive veneer.

Understanding LLMs: A Primer

LLMs, such as GPT-4, are based on the Transformer architecture, introduced in the paper "Attention is All You Need" by Vaswani et al. In essence, this architecture's 'attention' mechanism allows the model to focus on different parts of an input sentence, much like how a human reader might pay attention to different words while reading a text.

Training an LLM involves two stages: pre-training and fine-tuning. During pre-training, the model is exposed to vast quantities of text data (billions of words) from the internet. The model learns to predict the next word in a sentence given all the previous words. Through this process, it learns grammar, facts about the world, reasoning abilities, and also some biases present in the data. A significant part of this understanding comes from the model's ability to process English language instructions. The pre-training process exposes the model to language structures, grammar, usage, nuances of the language, common phrases, idioms, and context-based meanings. The Transformer's 'attention' mechanism plays a crucial role in this understanding, enabling the model to focus on different parts of the input sentence when generating each word in the output. It understands which words in the sentence are essential when deciding the next word.

The output of pre-training is a creative text generator. To make this generator more controllable and safe, it undergoes a fine-tuning process. Here, the model is trained on a narrower dataset, carefully generated with human reviewers' help following specific guidelines. This phase also often involves learning from instructions provided in natural language, enabling the model to respond effectively to English language instructions from users.

After their initial two-step training, LLMs are ready to produce text. Here's how it works: the user provides a starting point or "prompt" to the model. Using this prompt, the model begins creating a series of "tokens", which could be words or parts of words. Each new token is influenced by the tokens that came before it, so the model keeps adjusting its internal workings after producing each token. The process is based on probabilities, not on a pre-set plan or specific goals.

To control how the LLM generates text, you can adjust various settings. You can select the prompt, of course. But you can also modify settings like "temperature" and "max tokens". The "temperature" setting controls how random the model's output will be, while the "max tokens" setting sets a limit on the length of the response. When properly trained and controlled, LLMs are powerful tools that can understand and generate human-like text; the short sketch below shows how prompt, temperature, and max tokens come together in a typical API call.
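The following is a minimal sketch of those settings using the pre-1.0 openai Python package; the model name, prompt, and parameter values are illustrative assumptions rather than recommendations from the article.

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": "Explain in two sentences why LLM output should be fact-checked."}],
    temperature=0.2,  # lower values make the output less random
    max_tokens=120,   # upper bound on the length of the reply
)

print(response["choices"][0]["message"]["content"])

Raising the temperature toward 1.0 produces more varied completions, which helps with brainstorming but increases the risk of the confabulations discussed later in this article.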
Their applications range from writing assistants to customer support, tutoring, translation, and more. However, their ability to generate convincing text also poses potential risks, necessitating ongoing research into effective and ethical usage guidelines. In this article, we discuss some of the common pitfalls associated with using LLMs and offer practical advice on how to navigate these challenges, ensuring that you get the best out of these powerful language models in a safe and responsible way.

Misunderstanding LLM Capabilities

Large Language Models (LLMs), like GPT-3 and Bard, are advanced AI systems capable of impressive feats. However, some common misunderstandings exist about what these models can and cannot do. Here we clarify several points to prevent confusion and misuse.

Conscious Understanding: Despite their ability to generate coherent and contextually accurate responses, LLMs do not consciously understand the information they process. They don't comprehend text in the same way humans do. Instead, they make statistically informed guesses based on the patterns they've learned during training. They lack self-awareness or consciousness.

Learning from Interactions: LLMs are not designed to learn from user interactions in real time. After initial model training, they don't have the ability to remember or learn from individual interactions unless their training data is updated, a process that requires substantial computational resources.

Fact-Checking: LLMs can't verify the accuracy of their output or the information they're prompted with. They generate text based on patterns learned during training and cannot access real-time or updated information beyond their training cut-off. They cannot fact-check or verify information against real-world events post their training cut-off date.

Personal Opinions: LLMs don't have personal experiences, beliefs, or opinions. If they generate text that seems to indicate a personal stance, it's merely a reflection of the patterns they've learned during their training process. They are incapable of feelings or preferences.

Generating Original Ideas: While LLMs can generate text that may seem novel or original, they are not truly capable of creativity in the human sense. Their "ideas" result from recombining elements from their training data in novel ways, not from original thought or intention.

Confidentiality: LLMs cannot keep secrets or remember specific user interactions. They do not have the capacity to store personal data from one interaction to the next. They are designed this way to ensure user privacy and confidentiality.

Future Predictions: LLMs can't predict the future. Any text generated that seems to predict future events is coincidental and based solely on patterns learned from their training data.

Emotional Support: While LLMs can simulate empathetic responses, they don't truly understand or feel emotions. Any emotional support provided by these models is based on learned textual patterns and should not replace professional mental health support.

Understanding these limitations is crucial when interacting with LLMs. They are powerful tools for text generation, but their abilities should not be mistaken for true understanding, creativity, or emotional capacity.

Bias in LLM Outputs

Bias in LLMs is an unintentional byproduct of their training process. LLMs, such as GPT-4, are trained on massive datasets comprising text from the internet. The models learn to predict the next word in a sentence based on the context provided by the preceding words.
During this process, they inevitably absorb and replicate the biases present in their training data.

Bias in LLMs can be subtle and may present itself in various ways. For example, if an LLM consistently associates certain professions with a specific gender, this reflects gender bias. Suppose you feed the model a prompt like, "The nurse attended to the patient", and the model frequently uses feminine pronouns to refer to the nurse. In contrast, with the prompt, "The engineer fixed the machine," it predominantly uses masculine pronouns for the engineer. This inclination mirrors societal biases present in the training data.

It's crucial for users to be aware of these potential biases when using LLMs. Understanding this can help users interpret responses more critically, identify potential biases in the output, and even frame their prompts in a way that can mitigate bias. Users can make sure to double-check the information provided by LLMs, particularly when the output may have significant implications or is in a context known for systemic bias.

Confabulation and Hallucination in LLMs

In the context of LLMs, 'confabulation' or 'hallucination' refers to generating outputs that do not align with reality or factual information. This can happen when the model, attempting to create a coherent narrative, fills in gaps with details that seem plausible but are entirely fictional.

Example 1: Futuristic Election Results

Consider an interaction where an LLM was asked for the result of a future election. The prompt was, "What was the result of the 2024 U.S. presidential election?" The model responded with a detailed result, stating a fictitious candidate had won. As of the model's last training cut-off, this event lies in the future, and the response is a complete fabrication.

Example 2: The Non-existent Book

In another instance, an LLM was asked about a summary of a non-existent book with a prompt like, "Can you summarise the book 'The Shadows of Elusion' by J.K. Rowling?" The model responded with a detailed summary as if the book existed. In reality, there's no such book by J.K. Rowling. This again demonstrates the model's propensity to confabulate.

Example 3: Fictitious Technology

In a third example, an LLM was asked to explain the workings of a fictitious technology, "How does the quantum teleportation smartphone work?" The model explained a device that doesn't exist, incorporating real-world concepts of quantum teleportation into a plausible-sounding but entirely fictional narrative.

LLMs generate responses based on patterns they learn from their training data. They cannot access real-time or personal information or understand the content they generate. When faced with prompts without factual data, they can resort to confabulation, drawing from learned patterns to fabricate plausible but non-factual responses.

Because of this propensity for confabulation, verifying the 'facts' generated by LLMs is crucial. This is particularly important when the output is used for decision-making or is in a sensitive context. Always corroborate the information generated by LLMs with reliable and up-to-date sources to ensure its validity and relevance. While these models can be incredibly helpful, they should be used as a tool and not a sole source of information, bearing in mind the potential for error and fabrication in their outputs.

Security and Privacy in LLMs

Large Language Models (LLMs) can be a double-edged sword.
Their power to create lifelike text opens the door to misuse, such as generating misleading information, spam emails, or fake news, and even facilitating complex scamming schemes. So, it's crucial to establish robust security protocols when using LLMs.

Training LLMs on massive datasets can trigger privacy issues. Two primary concerns are:

Data leakage: If the model is exposed to sensitive information during training, it could potentially reveal this information when generating outputs. Though these models are designed to generalize patterns and not memorize specific data points, the risk still exists, albeit at a very low probability.

Inference attacks: Skilled attackers could craft specific queries to probe the model, attempting to infer sensitive details about the training data. For instance, they might attempt to discern whether certain types of content were part of the training data, potentially revealing proprietary or confidential information.

Ethical Considerations in LLMs

The rapid advancements in artificial intelligence, particularly in Large Language Models (LLMs), have transformed multiple facets of society. Yet, this exponential growth often overlooks a crucial aspect – ethics. Balancing the benefits of LLMs while addressing ethical concerns is a significant challenge that demands immediate attention.

Accountability and Responsibility: Who is responsible when an LLM causes harm, such as generating misleading information or offensive content? Is it the developers who trained the model, the users who provided the prompts, or the organizations that deployed it? The ambiguous nature of responsibility and accountability in AI applications is a substantial ethical challenge.

Bias and Discrimination: LLMs learn from vast amounts of data, often from the internet, reflecting our society – warts and all. Consequently, the models can internalize and perpetuate existing biases, leading to potentially discriminatory outputs. This can manifest as gender bias, racial bias, or other forms of prejudice.

Invasion of Privacy: As discussed in earlier articles, LLMs can pose privacy risks. However, the ethical implications go beyond the immediate privacy concerns. For instance, if an LLM is used to generate text mimicking a particular individual's writing style, it could infringe on that person's right to personal expression and identity.

Misinformation and Manipulation: The capacity of LLMs to generate human-like text can be exploited to disseminate misinformation, forge documents, or even create deepfake texts. This can manipulate public opinion, impact personal reputations, and even threaten national security.

Addressing LLM Limitations: A Tripartite Approach

The task of managing the limitations of LLMs is a tripartite effort, involving AI Developers & Researchers, Policymakers, and End Users.

Role of AI Developers & Researchers:

Security & Privacy: Establish robust security protocols, enforce secure training practices, and explore methods such as differential privacy. Constituting AI ethics committees can ensure ethical considerations during the design and training phases.

Bias & Discrimination: Endeavor to identify and mitigate biases during training, aiming for equitable outcomes.
This process includes eliminating harmful biases and confabulations.

Transparency: Enhance understanding of the model by elucidating the training process, which in turn can help manage potential fabrications.

Role of Policymakers:

Regulations: Formulate and implement regulations that ensure accountability, transparency, fairness, and privacy in AI.

Public Engagement: Encourage public participation in AI ethics discussions to ensure that regulations reflect societal norms.

Role of End Users:

Awareness: Comprehend the risks and ethical implications associated with LLMs, recognising that biases and fabrications are possible.

Critical Evaluation: Evaluate the outputs generated by LLMs for potential misinformation, bias, or confabulations. Refrain from feeding sensitive information to an LLM and cross-verify the information produced.

Feedback: Report any instances of severe bias, offensive content, or ethical concerns to the AI provider. This feedback is crucial for the continuous improvement of the model.

Conclusion

In conclusion, understanding and leveraging the capabilities of Large Language Models (LLMs) demand both caution and strategy. By recognizing their limitations, such as lack of consciousness, potential biases, and confabulation tendencies, users can navigate these pitfalls effectively. To harness LLMs responsibly, a collaborative approach among developers, policymakers, and users is essential. Implementing security measures, mitigating bias, and fostering user awareness can maximize the benefits of LLMs while minimizing their drawbacks. As LLMs continue to shape our linguistic landscape, staying informed and vigilant ensures a safer and more accurate text generation journey.

Author Bio

Amita Kapoor is an accomplished AI consultant and educator, with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar in her field, with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita took early retirement and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. Following her retirement, Amita also founded NePeur, a company that provides data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.

Sharmistha Chatterjee is an evangelist in the field of machine learning (ML) and cloud applications, currently working in the BFSI industry at the Commonwealth Bank of Australia in the data and analytics space. She has worked in Fortune 500 companies, as well as in early-stage start-ups. She became an advocate for responsible AI during her tenure at Publicis Sapient, where she led the digital transformation of clients across industry verticals. She is an international speaker at various tech conferences and a 2X Google Developer Expert in ML and Google Cloud. She has won multiple awards and has been listed in 40 under 40 data scientists by Analytics India Magazine (AIM) and 21 tech trailblazers in 2021 by Google. She has been involved in responsible AI initiatives led by Nasscom and as part of their DeepTech Club.

Authors of this book: Platform and Model Design for Responsible AI

Harnessing Weaviate and integrating with LangChain

Alan Bernardo Palacio
31 Aug 2023
20 min read
IntroductionIn the first part of this series, we built a robust RSS news retrieval system using Weaviate, enabling us to fetch and store news articles efficiently. Now, in this second part, we're taking the next leap by exploring how to harness the power of Weaviate for similarity search and integrating it with LangChain. We will delve into the creation of a Streamlit application that performs real-time similarity search, contextual understanding, and dynamic context building. With the increasing demand for relevant and contextual information, this section will unveil the magic of seamlessly integrating various technologies to create an enhanced user experience.Before we dive into the exciting world of similarity search and context building, let's ensure you're equipped with the necessary tools. Familiarity with Weaviate, Streamlit, and Python will be essential as we explore these advanced concepts and create a dynamic application.Similarity Search and Weaviate IntegrationThe journey of enhancing news context retrieval doesn't end with fetching articles. Often, users seek not just relevant information, but also contextually similar content. This is where similarity search comes into play.Similarity search enables us to find articles that share semantic similarities with a given query. In the context of news retrieval, it's like finding articles that discuss similar events or topics. This functionality empowers users to discover a broader range of perspectives and relevant articles.Weaviate's core strength lies in its ability to perform fast and accurate similarity search. We utilize the perform_similarity_search function to query Weaviate for articles related to a given concept. This function returns a list of articles, each scored based on its relevance to the query.import weaviate from langchain.llms import OpenAI import datetime import pytz from dateutil.parser import parse davinci = OpenAI(model_name='text-davinci-003') def perform_similarity_search(concept):    """    Perform a similarity search on the given concept.    Args:    - concept (str): The term to search for, e.g., "Bitcoin" or "Ethereum"      Returns:    - dict: A dictionary containing the result of the similarity search    """    client = weaviate.Client("<http://weaviate:8080>")      nearText = {"concepts": [concept]}    response = (        client.query        .get("RSS_Entry", ["title", "link", "summary", "publishedDate", "body"])        .with_near_text(nearText)        .with_limit(50)  # fetching a maximum of 50 similar entries        .with_additional(['certainty'])        .do()    )      return response def sort_and_filter(results):    # Sort results by certainty    sorted_results = sorted(results, key=lambda x: x['_additional']['certainty'], reverse=True)    # Sort the top results by date    top_sorted_results = sorted(sorted_results[:50], key=lambda x: parse(x['publishedDate']), reverse=True)    # Return the top 10 results    return top_sorted_results[:5] # Define the prompt template template = """ You are a financial analysts reporting on latest developments and providing an overview about certain topics you are asked about. Using only the provided context, answer the following question. Prioritize relevance and clarity in your response. If relevant information regarding the query is not found in the context, clearly indicate this in the response asking the user to rephrase to make the search topics more clear. 
If information is found, summarize the key developments and cite the sources inline using numbers (e.g., [1]). All sources should consistently be cited with their "Source Name", "link to the article", and "Date and Time". List the full sources at the end in the same numerical order. Today is: {today_date} Context: {context} Question: {query} Answer: Example Answer (for no relevant information): "No relevant information regarding 'topic X' was found in the provided context." Example Answer (for relevant information): "The latest update on 'topic X' reveals that A and B have occurred. This was reported by 'Source Name' on 'Date and Time' [1]. Another significant development is D, as highlighted by 'Another Source Name' on 'Date and Time' [2]." Sources (if relevant): [1] Source Name, "link to the article provided in the context", Date and Time [2] Another Source Name, "link to the article provided in the context", Date and Time """ # Modified the generate_response function to now use the SQL agent def query_db(query):    # Query the weaviate database    results = perform_similarity_search(query)    results = results['data']['Get']['RSS_Entry']    top_results = sort_and_filter(results)    # Convert your context data into a readable string    context_string = [f"title:{r['title']}\\nsummary:{r['summary']}\\nbody:{r['body']}\\nlink:{r['link']}\\npublishedDate:{r['publishedDate']}\\n\\n" for r in top_results]    context_string = '\\n'.join(context_string)    # Get today's date    date_format = "%a, %d %b %Y %H:%M:%S %Z"    today_date = datetime.datetime.now(pytz.utc).strftime(date_format)    # Format the prompt    prompt = template.format(        query=query,        context=context_string,        today_date=today_date    )    # Print the formatted prompt for verification    print(prompt)    # Run the prompt through the model directly    response = davinci(prompt)    # Extract and print the response    return responseRetrieved results need effective organization for user consumption. The sort_and_filter function handles this task. It first sorts the results based on their certainty scores, ensuring the most relevant articles are prioritized. Then, it further sorts the top results by their published dates, providing users with the latest information to build the context for the LLM.LangChain Integration for Context BuildingWhile similarity search enhances content discovery, context is the key to understanding the significance of articles. Integrating LangChain with Weaviate allows us to dynamically build context and provide more informative responses.LangChain, a language manipulation tool, acts as our context builder. It enhances the user experience by constructing context around the retrieved articles, enabling users to understand the broader narrative. Our modified query_db function now incorporates Langchain's capabilities. The function generates a context-rich prompt that combines the user's query and the top retrieved articles. This prompt is structured using a template that ensures clarity and relevance.The prompt template is a structured piece of text that guides LangChain to generate contextually meaningful responses. It dynamically includes information about the query, context, and relevant articles. This ensures that users receive comprehensive and informative answers.Subsection 2.4: Handling Irrelevant Queries One of LangChain's unique strengths is its ability to gracefully handle queries with limited context. 
When no relevant information is found in the context, LangChain generates a response that informs the user about the absence of relevant data. This ensures transparency and guides users to refine their queries for better results.In the next section, we will be integrating this enhanced news retrieval system with a Streamlit application, providing users with an intuitive interface to access relevant and contextual information effortlessly.Building the Streamlit ApplicationIn the previous section, we explored the intricate layers of building a robust news context retrieval system using Weaviate and LangChain. Now, in this third part, we're diving into the realm of user experience enhancement by creating a Streamlit application. Streamlit empowers us to transform our backend functionalities into a user-friendly front-end interface with minimal effort. Let's discover how we can harness the power of Streamlit to provide users with a seamless and intuitive way to access relevant news articles and context.Streamlit is a Python library that enables developers to create interactive web applications with minimal code. Its simplicity, coupled with its ability to provide real-time visualizations, makes it a fantastic choice for creating data-driven applications.The structure of a Streamlit app is straightforward yet powerful. Streamlit apps are composed of simple Python scripts that leverage the provided Streamlit API functions. This section will provide an overview of how the Streamlit app is structured and how its components interact.import feedparser import pandas as pd import time from bs4 import BeautifulSoup import requests import random from datetime import datetime, timedelta import pytz import uuid import weaviate import json import time def wait_for_weaviate():    """Wait until Weaviate is available."""      while True:        try:            # Try fetching the Weaviate metadata without initiating the client here            response = requests.get("<http://weaviate:8080/v1/meta>")            response.raise_for_status()            meta = response.json()                      # If successful, the instance is up and running            if meta:                print("Weaviate is up and running!")                return        except (requests.exceptions.RequestException):            # If there's any error (connection, timeout, etc.), wait and try again            print("Waiting for Weaviate...")            time.sleep(5) RSS_URLS = [    "<https://thedefiant.io/api/feed>",    "<https://cointelegraph.com/rss>",    "<https://cryptopotato.com/feed/>",    "<https://cryptoslate.com/feed/>",    "<https://cryptonews.com/news/feed/>",    "<https://smartliquidity.info/feed/>",    "<https://bitcoinmagazine.com/feed>",    "<https://decrypt.co/feed>",    "<https://bitcoinist.com/feed/>",    "<https://cryptobriefing.com/feed>",    "<https://www.newsbtc.com/feed/>",    "<https://coinjournal.net/feed/>",    "<https://ambcrypto.com/feed/>",    "<https://www.the-blockchain.com/feed/>" ] def get_article_body(link):    try:        headers = {            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.3'}        response = requests.get(link, headers=headers, timeout=10)        response.raise_for_status()        soup = BeautifulSoup(response.content, 'html.parser')        paragraphs = soup.find_all('p')        # Directly return list of non-empty paragraphs        return [p.get_text().strip() for p in paragraphs if p.get_text().strip() != ""]  
  except Exception as e:        print(f"Error fetching article body for {link}. Reason: {e}")        return [] def parse_date(date_str):    # Current date format from the RSS    date_format = "%a, %d %b %Y %H:%M:%S %z"    try:        dt = datetime.strptime(date_str, date_format)        # Ensure the datetime is in UTC        return dt.astimezone(pytz.utc)    except ValueError:        # Attempt to handle other possible formats        date_format = "%a, %d %b %Y %H:%M:%S %Z"        dt = datetime.strptime(date_str, date_format)        return dt.replace(tzinfo=pytz.utc) def fetch_rss(from_datetime=None):    all_data = []    all_entries = []      # Step 1: Fetch all the entries from the RSS feeds and filter them by date.    for url in RSS_URLS:        print(f"Fetching {url}")        feed = feedparser.parse(url)        entries = feed.entries        print('feed.entries', len(entries))        for entry in feed.entries:            entry_date = parse_date(entry.published)                      # Filter the entries based on the provided date            if from_datetime and entry_date <= from_datetime:                continue            # Storing only necessary data to minimize memory usage            all_entries.append({                "Title": entry.title,                "Link": entry.link,                "Summary": entry.summary,                "PublishedDate": entry.published            })    # Step 2: Shuffle the filtered entries.    random.shuffle(all_entries)    # Step 3: Extract the body for each entry and break it down by paragraphs.    for entry in all_entries:        article_body = get_article_body(entry["Link"])        print("\\nTitle:", entry["Title"])        print("Link:", entry["Link"])        print("Summary:", entry["Summary"])        print("Published Date:", entry["PublishedDate"])        # Create separate records for each paragraph        for paragraph in article_body:            data = {                "UUID": str(uuid.uuid4()), # UUID for each paragraph                "Title": entry["Title"],                "Link": entry["Link"],                "Summary": entry["Summary"],                "PublishedDate": entry["PublishedDate"],                "Body": paragraph            }            all_data.append(data)    print("-" * 50)    df = pd.DataFrame(all_data)    return df def insert_data(df,batch_size=100):    # Initialize the batch process    with client.batch as batch:        batch.batch_size = 100        # Loop through and batch import the 'RSS_Entry' data        for i, row in df.iterrows():            if i%100==0:                print(f"Importing entry: {i+1}")  # Status update            properties = {                "UUID": row["UUID"],                "Title": row["Title"],                "Link": row["Link"],                "Summary": row["Summary"],                "PublishedDate": row["PublishedDate"],                "Body": row["Body"]            }            client.batch.add_data_object(properties, "RSS_Entry") if __name__ == "__main__":    # Wait until weaviate is available    wait_for_weaviate()    # Initialize the Weaviate client    client = weaviate.Client("<http://weaviate:8080>")    client.timeout_config = (3, 200)    # Reset the schema    client.schema.delete_all()    # Define the "RSS_Entry" class    class_obj = {        "class": "RSS_Entry",        "description": "An entry from an RSS feed",        "properties": [            {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},            {"dataType": ["text"], "description": "Title of the entry", 
"name": "Title"},            {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},            {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},            {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},            {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}        ],        "vectorizer": "text2vec-transformers"    }    # Add the schema    client.schema.create_class(class_obj)    # Retrieve the schema    schema = client.schema.get()    # Display the schema    print(json.dumps(schema, indent=4))    print("-"*50)    # Current datetime    now = datetime.now(pytz.utc)    # Fetching articles from the last days    days_ago = 3    print(f"Getting historical data for the last {days_ago} days ago.")    last_week = now - timedelta(days=days_ago)    df_hist =  fetch_rss(last_week)    print("Head")    print(df_hist.head().to_string())    print("Tail")    print(df_hist.head().to_string())    print("-"*50)    print("Total records fetched:",len(df_hist))    print("-"*50)    print("Inserting data")    # insert historical data    insert_data(df_hist,batch_size=100)    print("-"*50)    print("Data Inserted")    # check if there is any relevant news in the last minute    while True:        # Current datetime        now = datetime.now(pytz.utc)        # Fetching articles from the last hour        one_min_ago = now - timedelta(minutes=1)        df =  fetch_rss(one_min_ago)        print("Head")        print(df.head().to_string())        print("Tail")        print(df.head().to_string())              print("Inserting data")        # insert minute data        insert_data(df,batch_size=100)        print("data inserted")        print("-"*50)        # Sleep for a minute        time.sleep(60)Streamlit apps rely on specific Python libraries and functions to operate smoothly. We'll explore the libraries used in our Streamlit app, such as streamlit, weaviate, and langchain, and discuss their roles in enabling real-time context retrieval.Demonstrating Real-time Context RetrievalAs we bring together the various elements of our news retrieval system, it's time to experience the magic firsthand by using the Streamlit app to perform real-time context retrieval.The Streamlit app's interface, showcasing how users can input queries and initiate similarity searches ensures a user-friendly experience, allowing users to effortlessly interact with the underlying Weaviate and LangChain-powered functionalities. The Streamlit app acts as a bridge, making complex interactions accessible to users through a clean and intuitive interface.The true power of our application shines when we demonstrate its ability to provide context for user queries and how LangChain dynamically builds context around retrieved articles and responses, creating a comprehensive narrative that enhances user understanding.ConclusionIn this second part of our series, we've embarked on the journey of creating an interactive and intuitive user interface using Streamlit. By weaving together the capabilities of Weaviate, LangChain, and Streamlit, we've established a powerful framework for context-based news retrieval. The Streamlit app showcases how the integration of these technologies can simplify complex processes, allowing users to effortlessly retrieve news articles and their contextual significance. As we wrap up our series, the next step is to dive into the provided code and experience the synergy of these technologies firsthand. 
Empower your applications with the ability to deliver context-rich and relevant information, bringing a new level of user experience to modern data-driven platforms.

Through these two articles, we've embarked on a journey to build an intelligent news retrieval system that leverages cutting-edge technologies. We've explored the foundations of Weaviate, delved into similarity search, harnessed LangChain for context building, and created a Streamlit application to provide users with a seamless experience. In the modern landscape of information retrieval, context is key, and the integration of these technologies empowers us to provide users with not just data, but understanding. As you venture forward, remember that these concepts are stepping stones. Embrace the code, experiment, and extend these ideas to create applications that offer tailored and relevant experiences to your users.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn

Build a powerful RSS news fetcher with Weaviate

Alan Bernardo Palacio
31 Aug 2023
21 min read
IntroductionIn today's Crypto rapidly evolving world, staying informed about the latest news and developments is crucial. However, with the overwhelming amount of information available, it's becoming increasingly challenging to find relevant news quickly. In this article, we will delve into the creation of a powerful system that fetches real-time news articles from various RSS feeds and stores them in the Weaviate vector database. We will explore how this application lays the foundation for context-based news retrieval and how it can be a stepping stone for more advanced applications, such as similarity search and contextual understanding.Before we dive into the technical details, let's ensure that you have a basic understanding of the technologies we'll be using. Familiarity with Python and Docker will be beneficial as we build and deploy our applications.Setting up the EnvironmentTo get started, we need to set up the development environment. This environment consists of three primary components: the RSS news fetcher, the Weaviate vector database, and the Transformers Inference API for text vectorization.Our application's architecture is orchestrated using Docker Compose. The provided docker-compose.yml file defines three services: rss-fetcher, weaviate, and t2v-transformers. These services interact to fetch news, store it in the vector database, and prepare it for vectorization.version: '3.4' services: rss-fetcher:    image: rss/python    build:      context: ./rss_fetcher app:    build:      context: ./app    ports:      - 8501:8501    environment:      - OPENAI_API_KEY=${OPENAI_API_KEY}    depends_on:      - rss-fetcher      - weaviate weaviate:    image: semitechnologies/weaviate:latest    restart: on-failure:0    ports:     - "8080:8080"    environment:      QUERY_DEFAULTS_LIMIT: 20      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'      PERSISTENCE_DATA_PATH: "./data"      DEFAULT_VECTORIZER_MODULE: text2vec-transformers      ENABLE_MODULES: text2vec-transformers      TRANSFORMERS_INFERENCE_API: <http://t2v-transformers:8080>      CLUSTER_HOSTNAME: 'node1' t2v-transformers:    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1    environment:      ENABLE_CUDA: 0 # set to 1 to enable      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDAEach service is configured with specific environment variables that define its behavior. In our application, we make use of environment variables like OPENAI_API_KEY to ensure secure communication with external services. We also specify the necessary dependencies, such as the Python libraries listed in the requirements.txt files for the rss-fetcher and weaviate services.Creating the RSS News FetcherThe foundation of our news retrieval system is the RSS news fetcher. This component will actively fetch articles from various RSS feeds, extract essential information, and store them in the Weaviate vector database.This is the Dockerfile of our RSS fetcher:FROM python:3 WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["python", "-u", "rss_fetcher.py"]Our RSS news fetcher is implemented within the rss_fetcher.py script. 
This script performs several key tasks, including fetching RSS feeds, parsing articles, and inserting data into the Weaviate database.import feedparser import pandas as pd import time from bs4 import BeautifulSoup import requests import random from datetime import datetime, timedelta import pytz import uuid import weaviate import json import time def wait_for_weaviate():    """Wait until Weaviate is available."""      while True:        try:            # Try fetching the Weaviate metadata without initiating the client here            response = requests.get("<http://weaviate:8080/v1/meta>")            response.raise_for_status()            meta = response.json()                      # If successful, the instance is up and running            if meta:                print("Weaviate is up and running!")                return        except (requests.exceptions.RequestException):            # If there's any error (connection, timeout, etc.), wait and try again            print("Waiting for Weaviate...")            time.sleep(5) RSS_URLS = [    "<https://thedefiant.io/api/feed>",    "<https://cointelegraph.com/rss>",    "<https://cryptopotato.com/feed/>",    "<https://cryptoslate.com/feed/>",    "<https://cryptonews.com/news/feed/>",    "<https://smartliquidity.info/feed/>",    "<https://bitcoinmagazine.com/feed>",    "<https://decrypt.co/feed>",    "<https://bitcoinist.com/feed/>",    "<https://cryptobriefing.com/feed>",    "<https://www.newsbtc.com/feed/>",    "<https://coinjournal.net/feed/>",    "<https://ambcrypto.com/feed/>",    "<https://www.the-blockchain.com/feed/>" ] def get_article_body(link):    try:        headers = {            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.3'}        response = requests.get(link, headers=headers, timeout=10)        response.raise_for_status()        soup = BeautifulSoup(response.content, 'html.parser')        paragraphs = soup.find_all('p')        # Directly return list of non-empty paragraphs        return [p.get_text().strip() for p in paragraphs if p.get_text().strip() != ""]    except Exception as e:        print(f"Error fetching article body for {link}. Reason: {e}")        return [] def parse_date(date_str):    # Current date format from the RSS    date_format = "%a, %d %b %Y %H:%M:%S %z"    try:        dt = datetime.strptime(date_str, date_format)        # Ensure the datetime is in UTC        return dt.astimezone(pytz.utc)    except ValueError:        # Attempt to handle other possible formats        date_format = "%a, %d %b %Y %H:%M:%S %Z"        dt = datetime.strptime(date_str, date_format)        return dt.replace(tzinfo=pytz.utc) def fetch_rss(from_datetime=None):    all_data = []    all_entries = []      # Step 1: Fetch all the entries from the RSS feeds and filter them by date.    
    for url in RSS_URLS:
        print(f"Fetching {url}")
        feed = feedparser.parse(url)
        entries = feed.entries
        print('feed.entries', len(entries))
        for entry in feed.entries:
            entry_date = parse_date(entry.published)
            # Filter the entries based on the provided date
            if from_datetime and entry_date <= from_datetime:
                continue
            # Storing only necessary data to minimize memory usage
            all_entries.append({
                "Title": entry.title,
                "Link": entry.link,
                "Summary": entry.summary,
                "PublishedDate": entry.published
            })
    # Step 2: Shuffle the filtered entries.
    random.shuffle(all_entries)
    # Step 3: Extract the body for each entry and break it down by paragraphs.
    for entry in all_entries:
        article_body = get_article_body(entry["Link"])
        print("\nTitle:", entry["Title"])
        print("Link:", entry["Link"])
        print("Summary:", entry["Summary"])
        print("Published Date:", entry["PublishedDate"])
        # Create separate records for each paragraph
        for paragraph in article_body:
            data = {
                "UUID": str(uuid.uuid4()),  # UUID for each paragraph
                "Title": entry["Title"],
                "Link": entry["Link"],
                "Summary": entry["Summary"],
                "PublishedDate": entry["PublishedDate"],
                "Body": paragraph
            }
            all_data.append(data)
    print("-" * 50)
    df = pd.DataFrame(all_data)
    return df

def insert_data(df, batch_size=100):
    # Initialize the batch process
    with client.batch as batch:
        batch.batch_size = batch_size
        # Loop through and batch import the 'RSS_Entry' data
        for i, row in df.iterrows():
            if i % batch_size == 0:
                print(f"Importing entry: {i+1}")  # Status update
            properties = {
                "UUID": row["UUID"],
                "Title": row["Title"],
                "Link": row["Link"],
                "Summary": row["Summary"],
                "PublishedDate": row["PublishedDate"],
                "Body": row["Body"]
            }
            client.batch.add_data_object(properties, "RSS_Entry")

if __name__ == "__main__":
    # Wait until Weaviate is available
    wait_for_weaviate()
    # Initialize the Weaviate client
    client = weaviate.Client("http://weaviate:8080")
    client.timeout_config = (3, 200)
    # Reset the schema
    client.schema.delete_all()
    # Define the "RSS_Entry" class
    class_obj = {
        "class": "RSS_Entry",
        "description": "An entry from an RSS feed",
        "properties": [
            {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},
            {"dataType": ["text"], "description": "Title of the entry", "name": "Title"},
            {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},
            {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},
            {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},
            {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}
        ],
        "vectorizer": "text2vec-transformers"
    }
    # Add the schema
    client.schema.create_class(class_obj)
    # Retrieve the schema
    schema = client.schema.get()
    # Display the schema
    print(json.dumps(schema, indent=4))
    print("-" * 50)
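    # From here the script runs in two phases: a one-off backfill of the last few
    # days of articles, followed by an endless loop that re-checks the feeds every
    # minute and inserts anything newly published.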
    # Current datetime
    now = datetime.now(pytz.utc)
    # Fetch articles from the last few days
    days_ago = 3
    print(f"Getting historical data for the last {days_ago} days.")
    last_week = now - timedelta(days=days_ago)
    df_hist = fetch_rss(last_week)
    print("Head")
    print(df_hist.head().to_string())
    print("Tail")
    print(df_hist.tail().to_string())
    print("-" * 50)
    print("Total records fetched:", len(df_hist))
    print("-" * 50)
    print("Inserting data")
    # Insert historical data
    insert_data(df_hist, batch_size=100)
    print("-" * 50)
    print("Data Inserted")

    # Check if there is any relevant news in the last minute
    while True:
        # Current datetime
        now = datetime.now(pytz.utc)
        # Fetch articles from the last minute
        one_min_ago = now - timedelta(minutes=1)
        df = fetch_rss(one_min_ago)
        print("Head")
        print(df.head().to_string())
        print("Tail")
        print(df.tail().to_string())
        print("Inserting data")
        # Insert the latest data
        insert_data(df, batch_size=100)
        print("data inserted")
        print("-" * 50)
        # Sleep for a minute
        time.sleep(60)

Before we start fetching news, we need to ensure that the Weaviate vector database is up and running. The wait_for_weaviate function repeatedly checks the availability of Weaviate using HTTP requests. This ensures that our fetcher waits until Weaviate is ready to receive data.

The core functionality of our fetcher lies in its ability to retrieve articles from various RSS feeds. We iterate through the list of RSS URLs, using the feedparser library to parse the feeds and extract key information such as each article's title, link, summary, and published date.

To provide context for similarity search and other applications, we need the actual content of the articles. The get_article_body function fetches the article's HTML content, parses it with BeautifulSoup, and extracts the relevant text paragraphs. This content is crucial for building a rich context for each article.

After gathering the necessary information, we create a data object for each paragraph of each article and insert it into the Weaviate vector database. Weaviate provides a client library that simplifies the process of adding data: we use the weaviate.Client class to interact with the Weaviate instance and batch-insert the data objects.

Now that we have laid the groundwork for building our context-based news retrieval system, in the next sections we'll delve deeper into Weaviate's role in this application and how we can leverage it for similarity search and more advanced features.

Weaviate Configuration and Schema

Weaviate, an open-source vector database, plays a pivotal role in our application. It stores and retrieves data objects based on their semantic relationships, captured as vector representations. Weaviate's ability to store text data and create vector representations for efficient similarity search aligns perfectly with our goal of context-based news retrieval. By using Weaviate, we enable our system to understand the context of news articles and retrieve semantically similar content.

To structure the data stored in Weaviate, we define a class called RSS_Entry. This class schema includes the properties UUID, Title, Link, Summary, PublishedDate, and Body. These properties capture the essential information about each stored entry and provide a solid foundation for context retrieval.
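Before looking at the class definition itself, it helps to picture what a single stored object looks like. Each paragraph of each article becomes one object with purely textual properties, roughly like the following sketch (all values are invented for illustration):

# Illustrative only: the shape of one object that insert_data sends to Weaviate.
example_entry = {
    "UUID": "1f9c3a1e-0b1f-4a7e-9a44-1c2b3d4e5f60",
    "Title": "Example headline about a network upgrade",
    "Link": "https://example.com/some-article",
    "Summary": "A short RSS summary of the article.",
    "PublishedDate": "Mon, 04 Sep 2023 10:15:00 +0000",
    "Body": "One paragraph of the article body becomes one Weaviate object."
}

The class definition below tells Weaviate which properties to expect and which vectorizer to apply to them.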
# Define the "RSS_Entry" class
class_obj = {
    "class": "RSS_Entry",
    "description": "An entry from an RSS feed",
    "properties": [
        {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},
        {"dataType": ["text"], "description": "Title of the entry", "name": "Title"},
        {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},
        {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},
        {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},
        {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}
    ],
    "vectorizer": "text2vec-transformers"
}
# Add the schema
client.schema.create_class(class_obj)
# Retrieve the schema
schema = client.schema.get()

A key strength of Weaviate is its ability to represent text data as vectors. Our application uses the text2vec-transformers module as the default vectorizer; it transforms text into vector embeddings using the sentence-transformer model served by the t2v-transformers container (multi-qa-MiniLM-L6-cos-v1 in our Compose file). This vectorization process ensures that the semantic relationships between articles are captured, enabling meaningful similarity search and context retrieval.
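To give a feel for what this vectorization enables, here is a minimal sketch of a semantic query against the RSS_Entry class using the same weaviate-client library. The query text and the localhost address are illustrative (use http://weaviate:8080 from inside the Compose network); the full similarity-search experience is what we will build in the next article.

import weaviate

# Connect to the running Weaviate instance (localhost when querying from the host machine).
client = weaviate.Client("http://localhost:8080")

# Retrieve the five stored paragraphs semantically closest to a free-text query.
# text2vec-transformers vectorizes the query with the same model used at import time.
response = (
    client.query
    .get("RSS_Entry", ["Title", "Link", "PublishedDate", "Body"])
    .with_near_text({"concepts": ["Ethereum staking withdrawals"]})
    .with_limit(5)
    .do()
)

for entry in response["data"]["Get"]["RSS_Entry"]:
    print(entry["PublishedDate"], entry["Title"])
    print(entry["Body"][:150], "...")
    print("-" * 50)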
Real-time and Historical Data Insertion

Efficient data insertion is vital for ensuring that our Weaviate-based news retrieval system provides both up-to-date and historical context for users. Our application caters to two essential use cases: real-time context retrieval and historical context analysis. Inserting news articles as they are published ensures that users receive the most recent information, while historical data insertion enables a broader perspective by allowing users to explore trends and patterns over time.

To populate our database with historical data, we use the fetch_rss function. This function fetches news articles from the last few days, as specified by the days_ago variable, which is converted into a from_datetime cutoff. The retrieved articles are then processed, and their data objects are batch-inserted into Weaviate. This process guarantees that our database contains a diverse set of historical articles.

def fetch_rss(from_datetime=None):
    all_data = []
    all_entries = []
    # Step 1: Fetch all the entries from the RSS feeds and filter them by date.
    for url in RSS_URLS:
        print(f"Fetching {url}")
        feed = feedparser.parse(url)
        entries = feed.entries
        print('feed.entries', len(entries))
        for entry in feed.entries:
            entry_date = parse_date(entry.published)
            # Filter the entries based on the provided date
            if from_datetime and entry_date <= from_datetime:
                continue
            # Storing only necessary data to minimize memory usage
            all_entries.append({
                "Title": entry.title,
                "Link": entry.link,
                "Summary": entry.summary,
                "PublishedDate": entry.published
            })
    # Step 2: Shuffle the filtered entries.
    random.shuffle(all_entries)
    # Step 3: Extract the body for each entry and break it down by paragraphs.
    for entry in all_entries:
        article_body = get_article_body(entry["Link"])
        print("\nTitle:", entry["Title"])
        print("Link:", entry["Link"])
        print("Summary:", entry["Summary"])
        print("Published Date:", entry["PublishedDate"])
        # Create separate records for each paragraph
        for paragraph in article_body:
            data = {
                "UUID": str(uuid.uuid4()),  # UUID for each paragraph
                "Title": entry["Title"],
                "Link": entry["Link"],
                "Summary": entry["Summary"],
                "PublishedDate": entry["PublishedDate"],
                "Body": paragraph
            }
            all_data.append(data)
    print("-" * 50)
    df = pd.DataFrame(all_data)
    return df

The real-time data insertion loop ensures that newly published articles are promptly added to the Weaviate database. We fetch news articles from the last minute and follow the same data insertion process, which keeps the database continuously updated with fresh content.

Conclusion

In this article, we've explored the crucial aspects of building an RSS news retrieval system with Weaviate. We delved into Weaviate's role as a vector database, examined the RSS_Entry class schema, and understood how text data is vectorized using text2vec-transformers. Furthermore, we discussed the significance of real-time and historical data insertion in providing users with relevant and up-to-date news context. With a solid foundation in place, we're well-equipped to move on and explore more advanced applications, such as similarity search and context-based content retrieval, which we will build in the next article. The seamless integration of Weaviate with our news fetcher sets the stage for a powerful context-aware information retrieval system.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst & Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, founded startups, and later earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn