
How-To Tutorials - AI Tools

89 Articles

Generate Google Doc summaries using PaLM API and Google Apps Script

Aryan Irani
13 Sep 2023
8 min read
Introduction

In this article, we'll delve into the powerful synergy of the PaLM API and Google Apps Script, unveiling a seamless way to generate concise summaries for your Google Docs. Say goodbye to manual summarization and embrace efficiency as we guide you through the process of simplifying your document management tasks. Let's embark on this journey to streamline your Google Doc summaries and enhance your productivity.

Sample Google Doc

For this blog, we will be using a very simple Google Doc that contains a paragraph for which we want to generate a summary. If you want to work with the Google Doc, click here. Once you make a copy of the Google Doc, you have to change the API key in the Google Apps Script code.

Step 1: Get the API key

Currently, the PaLM API hasn't been released for public use, but to access it early you can apply for the waitlist by clicking here. If you want to know more about the process of applying for MakerSuite and the PaLM API, you can check the YouTube tutorial here. Once you have access, get the API key from the Get API key section of MakerSuite by following these steps:

1. Go to MakerSuite or click here.
2. On opening MakerSuite you will see something like this.
3. To get the API key, click on Get API key on the left side of the page.
4. On clicking Get API key, you will see a page where you can create your API key.
5. To create the API key, click on Create API key in new project.

On clicking Create API key, in a few seconds you will be able to copy the API key.

Step 2: Write the Automation Script

While you are in the Google Doc, let's open up the Script Editor to write some Google Apps Script. To open the Script Editor, follow these steps:

1. Click on Extensions and open the Script Editor.
2. This brings up the Script Editor as shown below.

Now that we have reached the Script Editor, let's code. With the Google Doc set up and the API key ready, let's write our Google Apps Script code to get the summary for the paragraph in the Google Doc.

function onOpen(){
  var ui = DocumentApp.getUi();
  ui.createMenu('Custom Menu')
    .addItem('Summarize Selected Paragraph', 'summarizeSelectedParagraph')
    .addToUi();
}

We start by creating our own custom menu, which we will use to summarize the selected paragraph and run the code. To do that, we open a new function called onOpen(). Inside it we create a menu using the createMenu() method, passing in the name of the menu. We then add a menu item with its label followed by the name of the function to run when the item is clicked.

function DocSummary(paragraph){
  var apiKey = "your_api_key";
  var apiUrl = "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText";

Next we open a new function, DocSummary(), inside which we declare the API key that we just copied. After declaring the API key, we declare the API endpoint that is provided in the PaLM API documentation; you can check out the documentation at the link given below. The paragraph to summarize will be passed into this function from the Google Doc.

Generative Language API | PaLM API | Generative AI for Developers
The PaLM API allows developers to build generative AI applications using the PaLM model. Large Language Models (LLMs)… (developers.generativeai.google)

  var url = apiUrl + "?key=" + apiKey
  var headers = {
    "Content-Type": "application/json"
  }
  var prompt = {
    'text': "Please generate a short summary for :\n" + paragraph
  }
  var requestBody = {
    "prompt": prompt
  }

Here we create a new variable called url, in which we combine the API URL and the API key, resulting in a complete URL that includes the API key as a parameter. The headers specify the type of data that will be sent in the request, which in this case is "application/json". Now we come to the most important part of the code, which is declaring the prompt. For this blog, we ask the model to summarize a paragraph, followed by the paragraph present in the Google Doc. All of this is stored in the prompt variable. With the prompt ready, we create a requestBody object that contains the prompt to be sent in the request to the API.

  var options = {
    "method": "POST",
    "headers": headers,
    "payload": JSON.stringify(requestBody)
  }

Now that we have everything ready, it's time to define the parameters for the HTTP request that will be sent to the PaLM API endpoint. We start by declaring the method parameter, which is set to POST, indicating that the request will be sending data to the API. The headers parameter contains the headers object that we declared a while back. Finally, the payload parameter specifies the data that will be sent in the request. These options are then passed as an argument to the UrlFetchApp.fetch function, which sends the request to the PaLM API endpoint and returns the response containing the AI-generated text.

  var response = UrlFetchApp.fetch(url, options);
  var data = JSON.parse(response.getContentText());
  return data.candidates[0].output;
}

In this case, we just have to pass the url and options variables to the UrlFetchApp.fetch function. Having sent a request to the PaLM API endpoint, we get a response back, which we then parse. The getContentText() function extracts the text content from the response object, and since the response is in JSON format, we use JSON.parse to convert the JSON string into an object. From the parsed data we take the first of the candidate drafts the model generates for us and return its output.

function summarizeSelectedParagraph(){
  var selection = DocumentApp.getActiveDocument().getSelection();
  var text = selection.getRangeElements()[0].getElement().getText();
  var summary = DocSummary(text);

With the summary function ready and good to go, we now write the function that will interact with the Google Doc. We want the summary to be generated for the paragraph that the user selects. To do that, we get the selected text from the Google Doc using the getSelection() function, then read its text using the getText() function. To generate the summary, we pass the text to the DocSummary() function.

  DocumentApp.getActiveDocument().getBody().appendParagraph("Summary");
  DocumentApp.getActiveDocument().getBody().appendParagraph(summary);
}

Now that we have the summary for the selected text, it's time to append it back into the Google Doc. To do that, we use the appendParagraph() function, passing in the summary variable. To separate the summary from the original paragraph, we append an additional line that says "Summary". Our code is complete and good to go.

Step 3: Check the output

It's time to check the output and see if the code works as expected. Save your code and run the onOpen() function. This creates the menu from which we can generate the summary for the selected paragraph. On running the code you should get an output like this in the Execution Log, and the custom menu is created in the Google Doc successfully. To generate the summary in the Google Doc, follow these steps:

1. Select the paragraph you want to generate the summary for.
2. Once you select the paragraph, click on the custom menu and choose Summarize Selected Paragraph.
3. On clicking the option, the code will generate a summary for the paragraph we selected.

Here you can see the summary for the paragraph has been generated in the Google Doc successfully.

Conclusion

In this blog, we walked through the process of accessing the PaLM API to integrate a PaLM-powered summarizer inside a Google Doc using Google Apps Script. The integration of the PaLM API and Google Apps Script empowers users to generate summaries of paragraphs in Google Docs effortlessly. You can get the code from the GitHub link given below.

Google-Apps-Script/BlogSummaryPaLM.js at master · aryanirani123/Google-Apps-Script
Collection of Google Apps Script Automation scripts written and compiled by Aryan Irani. … (github.com)

Author Bio

Aryan Irani is a Google Developer Expert for Google Workspace. He is a writer and content creator who has been working in the Google Workspace domain for three years. He has extensive experience in the area, having published 100 technical articles on Google Apps Script, Google Workspace Tools, and Google APIs.

Website
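A quick addendum for readers who want to exercise the same PaLM endpoint outside of Apps Script: the REST call above can be reproduced from Python. This is a minimal sketch, not part of the original article; it assumes you have the requests library installed and a valid MakerSuite API key, and it reuses the v1beta2 text-bison-001 endpoint, request body, and candidates[0].output response path shown in the Apps Script code.

import requests

API_KEY = "your_api_key"  # the same MakerSuite key used in the Apps Script example
API_URL = "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText"

def summarize(paragraph: str) -> str:
    # Build the same request body the Apps Script code sends
    body = {"prompt": {"text": "Please generate a short summary for :\n" + paragraph}}
    resp = requests.post(API_URL, params={"key": API_KEY}, json=body, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # As in the Apps Script version, take the first candidate draft
    return data["candidates"][0]["output"]

print(summarize("PaLM is a large language model from Google that can summarize text..."))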


Getting Started with Microsoft Fabric

Arshad Ali, Bradley Schacht
11 Sep 2023
7 min read
This article is an excerpt from the book Learn Microsoft Fabric, by Arshad Ali and Bradley Schacht — a step-by-step guide to harnessing the power of Microsoft Fabric in developing data analytics solutions for various use cases.

Introduction

In this article, you will learn about enabling Microsoft Fabric in an existing Power BI tenant, or creating a new Fabric tenant if you don't have one already. Next, you will create your first Fabric workspace, which you will use to carry out all subsequent chapters' exercises.

Enabling Microsoft Fabric

Microsoft Fabric shares the same tenant as Power BI. If you already have a Power BI or Microsoft Fabric tenant, you have the two options described next to enable Fabric in that tenant (more at https://learn.microsoft.com/en-us/fabric/admin/fabric-switch). For each of these options, depending on the configuration you select, Microsoft Fabric becomes available either for everyone in the tenant or for a selected group of users.

Note: If you are new to Power BI or your organization doesn't have a Power BI/Fabric tenant yet, you can set one up and use a Fabric trial by visiting https://aka.ms/try-fabric to sign up for a Power BI free license. Afterward, you can start the Fabric trial, as described later in this section when discussing trial capacity. The Fabric trial includes access to the Fabric product experiences and the resources to create and host Fabric items. As of this writing, the Fabric trial license allows you to work with Fabric free for 60 days; at that point, you will need to provision Fabric capacity to continue using Microsoft Fabric.

Enable Fabric at the tenant level: If you have admin privileges, you can access the Admin center from the Settings menu in the upper-right corner of the Power BI service. From here, you enable Fabric on the Tenant settings page. When you enable Microsoft Fabric using the tenant setting, users can create Fabric items in that tenant. To do so, navigate to the Tenant settings page in the tenant's Admin portal, expand the Users can create Fabric items setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.1 – Microsoft Fabric - tenant settings

Enable Fabric at the capacity level: While it is recommended to enable Microsoft Fabric for the entire organization at the tenant level, there are times when you would like it enabled only for a certain group of people, at the capacity level. For that, in the tenant's Admin portal, navigate to the Capacity settings page, identify and select the capacity for which you want Microsoft Fabric to be enabled, and then click on the Delegate tenant settings tab at the top. Then, under the Microsoft Fabric section of this page, expand the Users can create Fabric items setting, toggle the switch to enable or disable it, and then hit Apply.

Figure 2.2 – Microsoft Fabric - capacity settings

Both of these scenarios assume you already have paid capacity available. If you don't have it yet, you can use the Fabric trial (more at https://learn.microsoft.com/en-us/fabric/get-started/fabric-trial) to create Fabric items for a certain duration if you want to learn or test the functionality of Microsoft Fabric. For that, open the Fabric home page (https://app.fabric.microsoft.com/home) and select the Account Manager. In the Account Manager, click on Start Trial and follow the wizard instructions to enable the Fabric trial with trial capacity.

Note: To let you learn and try out the different capabilities in Fabric, Microsoft provides free trial capacity. With this trial capacity, you get full access to all the Fabric workloads and features, including the ability to create Fabric items and collaborate with others, as well as OneLake storage of up to 1 TB. However, trial capacity is intended for trial and testing only, not for production usage.

Checking your access to Microsoft Fabric

To validate whether Fabric is enabled and you have access to it in your organization's tenant, sign in to Power BI and look for the Power BI icon at the bottom left of the screen. If you see the Power BI icon, select it to see the experiences available within Fabric.

Figure 2.3 – Microsoft Fabric - workloads switcher

If the icon is present, you can click the Microsoft Fabric link at the top of the screen (as shown in Figure 2.3) to switch to the Fabric experience, or click on the individual experience you want to switch to.

Figure 2.4 – Microsoft Fabric - home page

However, if the icon isn't present, Fabric is not available to you. In that case, follow the steps mentioned in the previous section (or work with your Power BI or Fabric admin) to enable it.

Creating your first Fabric-enabled workspace

Once you have confirmed that Fabric is enabled in your tenant and you have access to it, the next step is to create your Fabric workspace. You can think of a Fabric workspace as a logical container that will hold all your items, such as lakehouses, warehouses, notebooks, and pipelines. Follow these steps to create your first Fabric workspace:

1. Sign in to Power BI (https://app.powerbi.com/).
2. Select Workspaces | + New workspace.

Figure 2.5 – Create a new workspace

3. Fill out the Create a workspace form as follows:
o Name: Enter Learn Microsoft Fabric and some characters for uniqueness.
o Description: Optionally, enter a description for the workspace.

Figure 2.6 – Create new workspace - details

o Advanced: Select Fabric capacity under License mode and then choose a capacity you have access to. If you don't have one, you can start a trial license, as described earlier, and use it here.
4. Select Apply. The workspace will be created and opened.
5. You can click on Workspaces again and then search for your workspace by typing its name in the search box. You can also pin the selected workspace so that it always appears at the top.

Figure 2.7 – Search for a workspace

6. Clicking on the name of the workspace opens the workspace, and its link becomes available in the left-side navigation bar, allowing you to switch from one item to another quickly. Since we haven't created anything yet, there is nothing here; you can click on + New to start creating Fabric items.

Figure 2.8 – Switch to a workspace

With a Microsoft Fabric workspace set up, let's review the different workloads available.

Conclusion

In this article, we covered the basics of Microsoft Fabric in Power BI. You can enable Fabric at the tenant or capacity level, with a trial option available for newcomers. To check your access, look for the Power BI icon: if present, you're ready to use Fabric; if not, follow the setup steps. Create a Fabric workspace to manage items like lakehouses and pipelines. This article offers a quick guide to kickstart your journey with Microsoft Fabric in Power BI.

Author Bio

Arshad Ali is a Principal Program Manager on the Microsoft Fabric product team based in Redmond, WA. As part of his role at Microsoft, he works with strategic customers, partners, and ISVs to help them adopt Microsoft Fabric in solving their complex data analytics problems and driving business insights, and helps shape the future of Microsoft Fabric.

Bradley Schacht is a Principal Program Manager on the Microsoft Fabric product team based in Jacksonville, FL. As part of his role at Microsoft, Bradley works directly with customers to solve some of their most complex data warehousing problems and helps shape the future of the Microsoft Fabric cloud service.
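An addendum not covered in the book excerpt: the walkthrough above is entirely portal-based, but if you later want to confirm programmatically that a workspace such as Learn Microsoft Fabric exists, one option is the Power BI REST API's workspace listing endpoint. The sketch below is a hedged illustration; it assumes you have already obtained an Azure AD access token with Power BI API permissions (token acquisition is omitted) and that the standard https://api.powerbi.com/v1.0/myorg/groups endpoint is available to your account.

import requests

ACCESS_TOKEN = "your_azure_ad_access_token"  # assumption: acquired separately, e.g. via MSAL

def list_workspaces():
    # List the workspaces (groups) the signed-in identity can access
    resp = requests.get(
        "https://api.powerbi.com/v1.0/myorg/groups",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [ws["name"] for ws in resp.json().get("value", [])]

# Check whether the workspace created above shows up for this identity
print(any(name.startswith("Learn Microsoft Fabric") for name in list_workspaces()))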


Getting Started with Med-PaLM 2

07 Sep 2023
5 min read
Introduction

Med-PaLM 2 is a large language model (LLM) from Google Research, designed for the medical domain. It is trained on a massive dataset of text and code, including medical journals, textbooks, and clinical trials. Med-PaLM 2 can answer questions about a wide range of medical topics, including diseases, treatments, and procedures. It can also generate text, translate languages, and write different kinds of creative content.

Use Cases

Med-PaLM 2 can be used for a variety of purposes in the healthcare industry, including:

Medical research: Med-PaLM 2 can be used to help researchers find and analyze medical data. It can also be used to generate hypotheses and test new ideas.
Clinical decision support: Med-PaLM 2 can be used to help doctors diagnose diseases and make treatment decisions. It can also be used to provide patients with information about their condition and treatment options.
Health education: Med-PaLM 2 can be used to create educational materials for patients and healthcare professionals. It can also be used to answer patients' questions about their health.
Drug discovery: Med-PaLM 2 can be used to help researchers identify new drug targets and develop new drugs.
Personalized medicine: Med-PaLM 2 can be used to help doctors personalize treatment for individual patients, by taking into account the patient's medical history, genetic makeup, and other factors.

How to Get Started

Med-PaLM 2 is currently available to a limited number of Google Cloud customers. To get started, you can visit the Google Cloud website (https://cloud.google.com/) and sign up for a free trial. Once you have a Google Cloud account, you can request access to Med-PaLM 2. Here are the steps to get started with Med-PaLM:

1. Check if Med-PaLM is available in your country. Med-PaLM is currently only available in the following countries: United States, Canada, United Kingdom, Australia, New Zealand, Singapore, India, Japan, and South Korea. You can check the Med-PaLM website (https://sites.research.google/med-palm/) for the latest list of supported countries.
2. Create a Google Cloud Platform (GCP) account. Med-PaLM is a cloud-based service, so you will need a GCP account in order to use it. You can create one by going to the GCP website (https://cloud.google.com/) and clicking on the "Create Account" button.
3. Enable the Med-PaLM API. Once you have created a GCP account, you will need to enable the Med-PaLM API. You can do this by going to the API Library (https://console.cloud.google.com/apis/library), searching for "Med-PaLM", and clicking the "Enable" button.
4. Create a Med-PaLM service account. A service account is a special type of account that can be used to access GCP resources. You will need one in order to use Med-PaLM. You can create it by going to the IAM & Admin page (https://console.cloud.google.com/iam-admin/) and clicking on the "Create Service Account" button.
5. Download the Med-PaLM credentials. Once you have created a service account, you will need to download its credentials. The credentials are a JSON file that contains your service account's email address and private key. You can download them by clicking on the "Download JSON" button.
6. Set up the Med-PaLM client library. Client libraries are available for a variety of programming languages. You will need to install the client library for the language that you are using. You can find the client libraries on the Med-PaLM website (https://sites.research.google/med-palm/).
7. Initialize the Med-PaLM client. Once you have installed the client library, you can initialize the Med-PaLM client. The client needs your service account's email address and private key in order to authenticate with Med-PaLM. You can initialize the client using the following code:

import medpalm

client = medpalm.Client(
    email="your_service_account_email_address",
    key_file="your_service_account_private_key.json"
)

8. Start using Med-PaLM! Once you have initialized the Med-PaLM client, you can start using it to access Med-PaLM's capabilities. For example, you can use Med-PaLM to answer medical questions, generate text, and translate languages.

Key Features

Med-PaLM 2 has a number of key features that make it a valuable tool for the healthcare industry:

Accuracy: Med-PaLM 2 is highly accurate in answering medical questions. It has been shown to achieve an accuracy of 85% on a variety of medical question answering datasets.
Expertise: Med-PaLM 2 is trained on a massive dataset of medical text and code. This gives it a deep understanding of medical concepts and terminology.
Versatility: Med-PaLM 2 can be used for a variety of purposes in the healthcare industry. It can answer questions, generate text, translate languages, and write different kinds of creative content.
Scalability: Med-PaLM 2 is scalable and can be used to process large amounts of data. This makes it a valuable tool for research and clinical applications.

Conclusion

Med-PaLM 2 is a powerful LLM that has the potential to revolutionize the healthcare industry. It can be used to improve medical research, clinical decision support, health education, drug discovery, and personalized medicine. Med-PaLM 2 is still under development, but it has already demonstrated the potential to make a significant impact on healthcare.
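A practical note on step 5's credentials file: the medpalm client shown in step 7 is illustrative, and the article does not pin down a public package name. On Google Cloud, a downloaded service-account JSON key is typically loaded with the standard google-auth library, roughly as sketched below (an assumption on my part, not from the article; the key file name and scope are placeholders).

from google.oauth2 import service_account

# Assumption: "medpalm-key.json" is the JSON key downloaded in step 5
credentials = service_account.Credentials.from_service_account_file(
    "medpalm-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

print(credentials.service_account_email)  # the service account's email address

# These credentials can then be passed to whichever Google Cloud client
# library exposes the Med-PaLM API once your project has been granted access.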

Introduction to Gen AI Studio

Anubhav Singh
07 Sep 2023
6 min read
In this article, we'll explore the basics of Generative AI Studio and how to run a language model within this suite with a practical example.

Generative AI Studio is the all-encompassing offering of generative AI-based services on Google Cloud. It includes models of different types, allowing users to generate content that may be text, image, or audio. On Generative AI Studio, or Gen AI Studio, users can rapidly prototype and test different types of prompts associated with the different types of models to figure out which parameters and settings work best for their use cases. Then, they can easily shift the tested configurations to the code bases of their solutions. Model Garden, on the other hand, provides a collection of foundation and customized generative AI models which can be used directly as models in code or as APIs. The foundation models are based on models that have been trained by Google themselves, whereas the fine-tuned/task-specific models include models that have been developed and trained by third parties.

Gen AI Studio

Packaged within Vertex AI, Generative AI Studio on Google Cloud Platform provides low-code solutions for developing and testing invocations of Google's AI models that can then be used within a customer's solutions. As of August 2023, the following solutions are part of Generative AI Studio:

Language: Models used to generate text-based responses. The models may be generating answers to questions, performing classification, recognizing sentiment, or anything that involves text understanding.
Vision: Models used to generate images/visual content with different types of drawing styles.
Speech: Models that perform either speech-to-text conversion or text-to-speech conversion.

Let's explore each of these in detail. The language models in Gen AI Studio are based on the PaLM 2 for Text models and currently come in the form of either "text-bison" or "chat-bison". The first type is the base model, which allows performing any kind of task related to text understanding and generation. "Chat-bison" models, on the other hand, are focused on providing a conversational interface for interacting with the model, so they are more suitable for tasks that require a conversation between the model user and the model. There is another form of the PaLM 2 models available as "code-bison", which powers the Codey product suite and deals with programming languages instead of human languages.

Let's take a look at how we can use a language model in Gen AI Studio. Follow the steps below:

1. First, head over to https://console.cloud.google.com/vertex-ai/generative in your browser with a billing-enabled Google Cloud account. You will be able to see the Generative AI Studio dashboard.
2. Next, click "Open" in the card titled "Language".
3. Then, click on "Text Prompt" to open the prompt builder interface. The interface should look similar to the image below; however, being an actively developed product, it may change in several ways in the future.
4. Now, let us write a prompt. For our example, we'll instruct the model to fact-check whatever is passed to it. Here's a sample prompt:

You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User:

5. Now, write the user's part as well and hit the Submit button. The last line of the prompt would now be: User: I am eating an apple.
6. Observe the response. Then, change the user's part to "I am an apple" and "I am a human". Observe the response in each case. The following output table is expected:

Once we're satisfied with the model responses based on our prompt, we can shift the model invocation to code. In our example, we'll do it on Google Colaboratory. Follow the steps below:

1. Open Google Colaboratory by visiting https://colab.research.google.com/
2. In the first cell, we'll install the required libraries for using Gen AI Studio models:

%%capture
!pip install "shapely<2.0.0"
!pip install google-cloud-aiplatform --upgrade

3. Next, we'll authenticate the Colab notebook to be able to access the Google Cloud resources available to the currently logged-in user:

from google.colab import auth as google_auth
google_auth.authenticate_user()

4. Then we import the required libraries:

import vertexai
from vertexai.language_models import TextGenerationModel

5. Now, we instantiate the Vertex AI client to work with the project. Take note to replace PROJECT_ID with your own project's ID on Google Cloud:

vertexai.init(project=PROJECT_ID, location="us-central1")

6. Let us now set the configuration that the model will use while answering our prompts and initialize the model client:

parameters = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 0,
    "top_p": 0.8,
    "top_k": 40
}
model = TextGenerationModel.from_pretrained("text-bison@001")

7. Now, we can call the model and observe the response by printing it:

response = model.predict(
    """You're a Fact Checker Bot. Whatever the user says, fact check it and say any of the following:
1. "This is a fact" if the statement by the user is a true fact.
2. "This is not a fact" if the user's statement is not classifiable as a fact.
3. "This is a myth" if the user's statement is a false fact.
User: I am a human""",
    **parameters
)
print(f"Response from Model: {response.text}")

You can similarly work with the other models available in Gen AI Studio. In this notebook, we've provided an example each of Language, Vision, and Speech model usage: GenAIStudio&ModelGarden.ipynb

Author Bio

Anubhav Singh, co-founder of Dynopii and a Google Developer Expert in Google Cloud, has been a developer since the pre-Bootstrap era and has extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." He co-organizes the TFUG Kolkata community and formerly led the team at GDG Cloud Kolkata. Anubhav is often found discussing System Architecture, Machine Learning, and Web technologies.
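A note on the chat-bison variant mentioned in this article: the worked example uses text-bison, but the Vertex AI SDK also exposes a conversational interface for chat-bison. The sketch below is not from the article; it assumes the same vertexai.init() call as in step 5 has already run, that chat-bison@001 is available in your project, and it simply reuses the fact-checker instructions as the chat context.

import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="your-project-id", location="us-central1")  # placeholder project ID

chat_model = ChatModel.from_pretrained("chat-bison@001")
# Reuse the fact-checker instructions from the prompt above as the conversation context
chat = chat_model.start_chat(
    context=(
        "You're a Fact Checker Bot. Reply with 'This is a fact', "
        "'This is not a fact', or 'This is a myth'."
    ),
)
response = chat.send_message("I am a human", temperature=0, max_output_tokens=256)
print(response.text)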


Harnessing Weaviate and integrating with LangChain

Alan Bernardo Palacio
31 Aug 2023
20 min read
IntroductionIn the first part of this series, we built a robust RSS news retrieval system using Weaviate, enabling us to fetch and store news articles efficiently. Now, in this second part, we're taking the next leap by exploring how to harness the power of Weaviate for similarity search and integrating it with LangChain. We will delve into the creation of a Streamlit application that performs real-time similarity search, contextual understanding, and dynamic context building. With the increasing demand for relevant and contextual information, this section will unveil the magic of seamlessly integrating various technologies to create an enhanced user experience.Before we dive into the exciting world of similarity search and context building, let's ensure you're equipped with the necessary tools. Familiarity with Weaviate, Streamlit, and Python will be essential as we explore these advanced concepts and create a dynamic application.Similarity Search and Weaviate IntegrationThe journey of enhancing news context retrieval doesn't end with fetching articles. Often, users seek not just relevant information, but also contextually similar content. This is where similarity search comes into play.Similarity search enables us to find articles that share semantic similarities with a given query. In the context of news retrieval, it's like finding articles that discuss similar events or topics. This functionality empowers users to discover a broader range of perspectives and relevant articles.Weaviate's core strength lies in its ability to perform fast and accurate similarity search. We utilize the perform_similarity_search function to query Weaviate for articles related to a given concept. This function returns a list of articles, each scored based on its relevance to the query.import weaviate from langchain.llms import OpenAI import datetime import pytz from dateutil.parser import parse davinci = OpenAI(model_name='text-davinci-003') def perform_similarity_search(concept):    """    Perform a similarity search on the given concept.    Args:    - concept (str): The term to search for, e.g., "Bitcoin" or "Ethereum"      Returns:    - dict: A dictionary containing the result of the similarity search    """    client = weaviate.Client("<http://weaviate:8080>")      nearText = {"concepts": [concept]}    response = (        client.query        .get("RSS_Entry", ["title", "link", "summary", "publishedDate", "body"])        .with_near_text(nearText)        .with_limit(50)  # fetching a maximum of 50 similar entries        .with_additional(['certainty'])        .do()    )      return response def sort_and_filter(results):    # Sort results by certainty    sorted_results = sorted(results, key=lambda x: x['_additional']['certainty'], reverse=True)    # Sort the top results by date    top_sorted_results = sorted(sorted_results[:50], key=lambda x: parse(x['publishedDate']), reverse=True)    # Return the top 10 results    return top_sorted_results[:5] # Define the prompt template template = """ You are a financial analysts reporting on latest developments and providing an overview about certain topics you are asked about. Using only the provided context, answer the following question. Prioritize relevance and clarity in your response. If relevant information regarding the query is not found in the context, clearly indicate this in the response asking the user to rephrase to make the search topics more clear. 
If information is found, summarize the key developments and cite the sources inline using numbers (e.g., [1]). All sources should consistently be cited with their "Source Name", "link to the article", and "Date and Time". List the full sources at the end in the same numerical order. Today is: {today_date} Context: {context} Question: {query} Answer: Example Answer (for no relevant information): "No relevant information regarding 'topic X' was found in the provided context." Example Answer (for relevant information): "The latest update on 'topic X' reveals that A and B have occurred. This was reported by 'Source Name' on 'Date and Time' [1]. Another significant development is D, as highlighted by 'Another Source Name' on 'Date and Time' [2]." Sources (if relevant): [1] Source Name, "link to the article provided in the context", Date and Time [2] Another Source Name, "link to the article provided in the context", Date and Time """ # Modified the generate_response function to now use the SQL agent def query_db(query):    # Query the weaviate database    results = perform_similarity_search(query)    results = results['data']['Get']['RSS_Entry']    top_results = sort_and_filter(results)    # Convert your context data into a readable string    context_string = [f"title:{r['title']}\\nsummary:{r['summary']}\\nbody:{r['body']}\\nlink:{r['link']}\\npublishedDate:{r['publishedDate']}\\n\\n" for r in top_results]    context_string = '\\n'.join(context_string)    # Get today's date    date_format = "%a, %d %b %Y %H:%M:%S %Z"    today_date = datetime.datetime.now(pytz.utc).strftime(date_format)    # Format the prompt    prompt = template.format(        query=query,        context=context_string,        today_date=today_date    )    # Print the formatted prompt for verification    print(prompt)    # Run the prompt through the model directly    response = davinci(prompt)    # Extract and print the response    return responseRetrieved results need effective organization for user consumption. The sort_and_filter function handles this task. It first sorts the results based on their certainty scores, ensuring the most relevant articles are prioritized. Then, it further sorts the top results by their published dates, providing users with the latest information to build the context for the LLM.LangChain Integration for Context BuildingWhile similarity search enhances content discovery, context is the key to understanding the significance of articles. Integrating LangChain with Weaviate allows us to dynamically build context and provide more informative responses.LangChain, a language manipulation tool, acts as our context builder. It enhances the user experience by constructing context around the retrieved articles, enabling users to understand the broader narrative. Our modified query_db function now incorporates Langchain's capabilities. The function generates a context-rich prompt that combines the user's query and the top retrieved articles. This prompt is structured using a template that ensures clarity and relevance.The prompt template is a structured piece of text that guides LangChain to generate contextually meaningful responses. It dynamically includes information about the query, context, and relevant articles. This ensures that users receive comprehensive and informative answers.Subsection 2.4: Handling Irrelevant Queries One of LangChain's unique strengths is its ability to gracefully handle queries with limited context. 
When no relevant information is found in the context, LangChain generates a response that informs the user about the absence of relevant data. This ensures transparency and guides users to refine their queries for better results.In the next section, we will be integrating this enhanced news retrieval system with a Streamlit application, providing users with an intuitive interface to access relevant and contextual information effortlessly.Building the Streamlit ApplicationIn the previous section, we explored the intricate layers of building a robust news context retrieval system using Weaviate and LangChain. Now, in this third part, we're diving into the realm of user experience enhancement by creating a Streamlit application. Streamlit empowers us to transform our backend functionalities into a user-friendly front-end interface with minimal effort. Let's discover how we can harness the power of Streamlit to provide users with a seamless and intuitive way to access relevant news articles and context.Streamlit is a Python library that enables developers to create interactive web applications with minimal code. Its simplicity, coupled with its ability to provide real-time visualizations, makes it a fantastic choice for creating data-driven applications.The structure of a Streamlit app is straightforward yet powerful. Streamlit apps are composed of simple Python scripts that leverage the provided Streamlit API functions. This section will provide an overview of how the Streamlit app is structured and how its components interact.import feedparser import pandas as pd import time from bs4 import BeautifulSoup import requests import random from datetime import datetime, timedelta import pytz import uuid import weaviate import json import time def wait_for_weaviate():    """Wait until Weaviate is available."""      while True:        try:            # Try fetching the Weaviate metadata without initiating the client here            response = requests.get("<http://weaviate:8080/v1/meta>")            response.raise_for_status()            meta = response.json()                      # If successful, the instance is up and running            if meta:                print("Weaviate is up and running!")                return        except (requests.exceptions.RequestException):            # If there's any error (connection, timeout, etc.), wait and try again            print("Waiting for Weaviate...")            time.sleep(5) RSS_URLS = [    "<https://thedefiant.io/api/feed>",    "<https://cointelegraph.com/rss>",    "<https://cryptopotato.com/feed/>",    "<https://cryptoslate.com/feed/>",    "<https://cryptonews.com/news/feed/>",    "<https://smartliquidity.info/feed/>",    "<https://bitcoinmagazine.com/feed>",    "<https://decrypt.co/feed>",    "<https://bitcoinist.com/feed/>",    "<https://cryptobriefing.com/feed>",    "<https://www.newsbtc.com/feed/>",    "<https://coinjournal.net/feed/>",    "<https://ambcrypto.com/feed/>",    "<https://www.the-blockchain.com/feed/>" ] def get_article_body(link):    try:        headers = {            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.3'}        response = requests.get(link, headers=headers, timeout=10)        response.raise_for_status()        soup = BeautifulSoup(response.content, 'html.parser')        paragraphs = soup.find_all('p')        # Directly return list of non-empty paragraphs        return [p.get_text().strip() for p in paragraphs if p.get_text().strip() != ""]  
  except Exception as e:        print(f"Error fetching article body for {link}. Reason: {e}")        return [] def parse_date(date_str):    # Current date format from the RSS    date_format = "%a, %d %b %Y %H:%M:%S %z"    try:        dt = datetime.strptime(date_str, date_format)        # Ensure the datetime is in UTC        return dt.astimezone(pytz.utc)    except ValueError:        # Attempt to handle other possible formats        date_format = "%a, %d %b %Y %H:%M:%S %Z"        dt = datetime.strptime(date_str, date_format)        return dt.replace(tzinfo=pytz.utc) def fetch_rss(from_datetime=None):    all_data = []    all_entries = []      # Step 1: Fetch all the entries from the RSS feeds and filter them by date.    for url in RSS_URLS:        print(f"Fetching {url}")        feed = feedparser.parse(url)        entries = feed.entries        print('feed.entries', len(entries))        for entry in feed.entries:            entry_date = parse_date(entry.published)                      # Filter the entries based on the provided date            if from_datetime and entry_date <= from_datetime:                continue            # Storing only necessary data to minimize memory usage            all_entries.append({                "Title": entry.title,                "Link": entry.link,                "Summary": entry.summary,                "PublishedDate": entry.published            })    # Step 2: Shuffle the filtered entries.    random.shuffle(all_entries)    # Step 3: Extract the body for each entry and break it down by paragraphs.    for entry in all_entries:        article_body = get_article_body(entry["Link"])        print("\\nTitle:", entry["Title"])        print("Link:", entry["Link"])        print("Summary:", entry["Summary"])        print("Published Date:", entry["PublishedDate"])        # Create separate records for each paragraph        for paragraph in article_body:            data = {                "UUID": str(uuid.uuid4()), # UUID for each paragraph                "Title": entry["Title"],                "Link": entry["Link"],                "Summary": entry["Summary"],                "PublishedDate": entry["PublishedDate"],                "Body": paragraph            }            all_data.append(data)    print("-" * 50)    df = pd.DataFrame(all_data)    return df def insert_data(df,batch_size=100):    # Initialize the batch process    with client.batch as batch:        batch.batch_size = 100        # Loop through and batch import the 'RSS_Entry' data        for i, row in df.iterrows():            if i%100==0:                print(f"Importing entry: {i+1}")  # Status update            properties = {                "UUID": row["UUID"],                "Title": row["Title"],                "Link": row["Link"],                "Summary": row["Summary"],                "PublishedDate": row["PublishedDate"],                "Body": row["Body"]            }            client.batch.add_data_object(properties, "RSS_Entry") if __name__ == "__main__":    # Wait until weaviate is available    wait_for_weaviate()    # Initialize the Weaviate client    client = weaviate.Client("<http://weaviate:8080>")    client.timeout_config = (3, 200)    # Reset the schema    client.schema.delete_all()    # Define the "RSS_Entry" class    class_obj = {        "class": "RSS_Entry",        "description": "An entry from an RSS feed",        "properties": [            {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},            {"dataType": ["text"], "description": "Title of the entry", 
"name": "Title"},            {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},            {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},            {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},            {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}        ],        "vectorizer": "text2vec-transformers"    }    # Add the schema    client.schema.create_class(class_obj)    # Retrieve the schema    schema = client.schema.get()    # Display the schema    print(json.dumps(schema, indent=4))    print("-"*50)    # Current datetime    now = datetime.now(pytz.utc)    # Fetching articles from the last days    days_ago = 3    print(f"Getting historical data for the last {days_ago} days ago.")    last_week = now - timedelta(days=days_ago)    df_hist =  fetch_rss(last_week)    print("Head")    print(df_hist.head().to_string())    print("Tail")    print(df_hist.head().to_string())    print("-"*50)    print("Total records fetched:",len(df_hist))    print("-"*50)    print("Inserting data")    # insert historical data    insert_data(df_hist,batch_size=100)    print("-"*50)    print("Data Inserted")    # check if there is any relevant news in the last minute    while True:        # Current datetime        now = datetime.now(pytz.utc)        # Fetching articles from the last hour        one_min_ago = now - timedelta(minutes=1)        df =  fetch_rss(one_min_ago)        print("Head")        print(df.head().to_string())        print("Tail")        print(df.head().to_string())              print("Inserting data")        # insert minute data        insert_data(df,batch_size=100)        print("data inserted")        print("-"*50)        # Sleep for a minute        time.sleep(60)Streamlit apps rely on specific Python libraries and functions to operate smoothly. We'll explore the libraries used in our Streamlit app, such as streamlit, weaviate, and langchain, and discuss their roles in enabling real-time context retrieval.Demonstrating Real-time Context RetrievalAs we bring together the various elements of our news retrieval system, it's time to experience the magic firsthand by using the Streamlit app to perform real-time context retrieval.The Streamlit app's interface, showcasing how users can input queries and initiate similarity searches ensures a user-friendly experience, allowing users to effortlessly interact with the underlying Weaviate and LangChain-powered functionalities. The Streamlit app acts as a bridge, making complex interactions accessible to users through a clean and intuitive interface.The true power of our application shines when we demonstrate its ability to provide context for user queries and how LangChain dynamically builds context around retrieved articles and responses, creating a comprehensive narrative that enhances user understanding.ConclusionIn this second part of our series, we've embarked on the journey of creating an interactive and intuitive user interface using Streamlit. By weaving together the capabilities of Weaviate, LangChain, and Streamlit, we've established a powerful framework for context-based news retrieval. The Streamlit app showcases how the integration of these technologies can simplify complex processes, allowing users to effortlessly retrieve news articles and their contextual significance. As we wrap up our series, the next step is to dive into the provided code and experience the synergy of these technologies firsthand. 
Empower your applications with the ability to deliver context-rich and relevant information, bringing a new level of user experience to modern data-driven platforms.

Through these two articles, we've embarked on a journey to build an intelligent news retrieval system that leverages cutting-edge technologies. We've explored the foundations of Weaviate, delved into similarity search, harnessed LangChain for context building, and created a Streamlit application to provide users with a seamless experience. In the modern landscape of information retrieval, context is key, and the integration of these technologies empowers us to provide users with not just data, but understanding. As you venture forward, remember that these concepts are stepping stones. Embrace the code, experiment, and extend these ideas to create applications that offer tailored and relevant experiences to your users.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young, and Globant, and now holds a data engineer position at Ebiquity Media helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn
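A note on the Streamlit front end described above: the listing in this article reproduces the RSS fetcher script, while the Streamlit interface itself is described in prose. As a rough sketch of what such an interface could look like — assuming the query_db helper defined earlier in this article is importable (the module name below is hypothetical) and that Streamlit is installed — something along these lines would cover the described flow:

import streamlit as st

# query_db is the LangChain-backed helper defined earlier in this article;
# "query_helpers" is a hypothetical module name for wherever you place that code.
from query_helpers import query_db

st.title("Crypto News Context Search")

query = st.text_input("Ask about a topic, e.g. 'Bitcoin ETF'")

if st.button("Search") and query:
    with st.spinner("Searching Weaviate and building context..."):
        # Similarity search + prompt construction + LLM response, as described above
        answer = query_db(query)
    st.markdown(answer)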


Build a powerful RSS news fetcher with Weaviate

Alan Bernardo Palacio
31 Aug 2023
21 min read
IntroductionIn today's Crypto rapidly evolving world, staying informed about the latest news and developments is crucial. However, with the overwhelming amount of information available, it's becoming increasingly challenging to find relevant news quickly. In this article, we will delve into the creation of a powerful system that fetches real-time news articles from various RSS feeds and stores them in the Weaviate vector database. We will explore how this application lays the foundation for context-based news retrieval and how it can be a stepping stone for more advanced applications, such as similarity search and contextual understanding.Before we dive into the technical details, let's ensure that you have a basic understanding of the technologies we'll be using. Familiarity with Python and Docker will be beneficial as we build and deploy our applications.Setting up the EnvironmentTo get started, we need to set up the development environment. This environment consists of three primary components: the RSS news fetcher, the Weaviate vector database, and the Transformers Inference API for text vectorization.Our application's architecture is orchestrated using Docker Compose. The provided docker-compose.yml file defines three services: rss-fetcher, weaviate, and t2v-transformers. These services interact to fetch news, store it in the vector database, and prepare it for vectorization.version: '3.4' services: rss-fetcher:    image: rss/python    build:      context: ./rss_fetcher app:    build:      context: ./app    ports:      - 8501:8501    environment:      - OPENAI_API_KEY=${OPENAI_API_KEY}    depends_on:      - rss-fetcher      - weaviate weaviate:    image: semitechnologies/weaviate:latest    restart: on-failure:0    ports:     - "8080:8080"    environment:      QUERY_DEFAULTS_LIMIT: 20      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'      PERSISTENCE_DATA_PATH: "./data"      DEFAULT_VECTORIZER_MODULE: text2vec-transformers      ENABLE_MODULES: text2vec-transformers      TRANSFORMERS_INFERENCE_API: <http://t2v-transformers:8080>      CLUSTER_HOSTNAME: 'node1' t2v-transformers:    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1    environment:      ENABLE_CUDA: 0 # set to 1 to enable      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDAEach service is configured with specific environment variables that define its behavior. In our application, we make use of environment variables like OPENAI_API_KEY to ensure secure communication with external services. We also specify the necessary dependencies, such as the Python libraries listed in the requirements.txt files for the rss-fetcher and weaviate services.Creating the RSS News FetcherThe foundation of our news retrieval system is the RSS news fetcher. This component will actively fetch articles from various RSS feeds, extract essential information, and store them in the Weaviate vector database.This is the Dockerfile of our RSS fetcher:FROM python:3 WORKDIR /app COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["python", "-u", "rss_fetcher.py"]Our RSS news fetcher is implemented within the rss_fetcher.py script. 
This script performs several key tasks, including fetching RSS feeds, parsing articles, and inserting data into the Weaviate database.import feedparser import pandas as pd import time from bs4 import BeautifulSoup import requests import random from datetime import datetime, timedelta import pytz import uuid import weaviate import json import time def wait_for_weaviate():    """Wait until Weaviate is available."""      while True:        try:            # Try fetching the Weaviate metadata without initiating the client here            response = requests.get("<http://weaviate:8080/v1/meta>")            response.raise_for_status()            meta = response.json()                      # If successful, the instance is up and running            if meta:                print("Weaviate is up and running!")                return        except (requests.exceptions.RequestException):            # If there's any error (connection, timeout, etc.), wait and try again            print("Waiting for Weaviate...")            time.sleep(5) RSS_URLS = [    "<https://thedefiant.io/api/feed>",    "<https://cointelegraph.com/rss>",    "<https://cryptopotato.com/feed/>",    "<https://cryptoslate.com/feed/>",    "<https://cryptonews.com/news/feed/>",    "<https://smartliquidity.info/feed/>",    "<https://bitcoinmagazine.com/feed>",    "<https://decrypt.co/feed>",    "<https://bitcoinist.com/feed/>",    "<https://cryptobriefing.com/feed>",    "<https://www.newsbtc.com/feed/>",    "<https://coinjournal.net/feed/>",    "<https://ambcrypto.com/feed/>",    "<https://www.the-blockchain.com/feed/>" ] def get_article_body(link):    try:        headers = {            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.3'}        response = requests.get(link, headers=headers, timeout=10)        response.raise_for_status()        soup = BeautifulSoup(response.content, 'html.parser')        paragraphs = soup.find_all('p')        # Directly return list of non-empty paragraphs        return [p.get_text().strip() for p in paragraphs if p.get_text().strip() != ""]    except Exception as e:        print(f"Error fetching article body for {link}. Reason: {e}")        return [] def parse_date(date_str):    # Current date format from the RSS    date_format = "%a, %d %b %Y %H:%M:%S %z"    try:        dt = datetime.strptime(date_str, date_format)        # Ensure the datetime is in UTC        return dt.astimezone(pytz.utc)    except ValueError:        # Attempt to handle other possible formats        date_format = "%a, %d %b %Y %H:%M:%S %Z"        dt = datetime.strptime(date_str, date_format)        return dt.replace(tzinfo=pytz.utc) def fetch_rss(from_datetime=None):    all_data = []    all_entries = []      # Step 1: Fetch all the entries from the RSS feeds and filter them by date.    
    for url in RSS_URLS:
        print(f"Fetching {url}")
        feed = feedparser.parse(url)
        entries = feed.entries
        print('feed.entries', len(entries))
        for entry in feed.entries:
            entry_date = parse_date(entry.published)
            # Filter the entries based on the provided date
            if from_datetime and entry_date <= from_datetime:
                continue
            # Storing only necessary data to minimize memory usage
            all_entries.append({
                "Title": entry.title,
                "Link": entry.link,
                "Summary": entry.summary,
                "PublishedDate": entry.published
            })

    # Step 2: Shuffle the filtered entries.
    random.shuffle(all_entries)

    # Step 3: Extract the body for each entry and break it down by paragraphs.
    for entry in all_entries:
        article_body = get_article_body(entry["Link"])
        print("\nTitle:", entry["Title"])
        print("Link:", entry["Link"])
        print("Summary:", entry["Summary"])
        print("Published Date:", entry["PublishedDate"])
        # Create separate records for each paragraph
        for paragraph in article_body:
            data = {
                "UUID": str(uuid.uuid4()),  # UUID for each paragraph
                "Title": entry["Title"],
                "Link": entry["Link"],
                "Summary": entry["Summary"],
                "PublishedDate": entry["PublishedDate"],
                "Body": paragraph
            }
            all_data.append(data)

    print("-" * 50)
    df = pd.DataFrame(all_data)
    return df


def insert_data(df, batch_size=100):
    # Initialize the batch process
    with client.batch as batch:
        batch.batch_size = batch_size
        # Loop through and batch import the 'RSS_Entry' data
        for i, row in df.iterrows():
            if i % batch_size == 0:
                print(f"Importing entry: {i+1}")  # Status update
            properties = {
                "UUID": row["UUID"],
                "Title": row["Title"],
                "Link": row["Link"],
                "Summary": row["Summary"],
                "PublishedDate": row["PublishedDate"],
                "Body": row["Body"]
            }
            client.batch.add_data_object(properties, "RSS_Entry")


if __name__ == "__main__":
    # Wait until Weaviate is available
    wait_for_weaviate()

    # Initialize the Weaviate client
    client = weaviate.Client("http://weaviate:8080")
    client.timeout_config = (3, 200)

    # Reset the schema
    client.schema.delete_all()

    # Define the "RSS_Entry" class
    class_obj = {
        "class": "RSS_Entry",
        "description": "An entry from an RSS feed",
        "properties": [
            {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},
            {"dataType": ["text"], "description": "Title of the entry", "name": "Title"},
            {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},
            {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},
            {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},
            {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}
        ],
        "vectorizer": "text2vec-transformers"
    }

    # Add the schema
    client.schema.create_class(class_obj)

    # Retrieve and display the schema
    schema = client.schema.get()
    print(json.dumps(schema, indent=4))
    print("-" * 50)

    # Current datetime
    now = datetime.now(pytz.utc)

    # Fetch articles from the last few days
    days_ago = 3
    print(f"Getting historical data for the last {days_ago} days.")
    last_week = now - timedelta(days=days_ago)
    df_hist = fetch_rss(last_week)
    print("Head")
    print(df_hist.head().to_string())
    print("Tail")
    print(df_hist.tail().to_string())
    print("-" * 50)
    print("Total records fetched:", len(df_hist))
    print("-" * 50)

    print("Inserting data")
    # Insert historical data
    insert_data(df_hist, batch_size=100)
    print("-" * 50)
    print("Data Inserted")

    # Check if there is any relevant news in the last minute
    while True:
        # Current datetime
        now = datetime.now(pytz.utc)
        # Fetch articles from the last minute
        one_min_ago = now - timedelta(minutes=1)
        df = fetch_rss(one_min_ago)
        print("Head")
        print(df.head().to_string())
        print("Tail")
        print(df.tail().to_string())

        print("Inserting data")
        # Insert the last minute of data
        insert_data(df, batch_size=100)
        print("data inserted")
        print("-" * 50)

        # Sleep for a minute
        time.sleep(60)

Before we start fetching news, we need to ensure that the Weaviate vector database is up and running. The wait_for_weaviate function repeatedly checks the availability of Weaviate using HTTP requests. This ensures that our fetcher waits until Weaviate is ready to receive data.

The core functionality of our fetcher lies in its ability to retrieve articles from various RSS feeds. We iterate through the list of RSS URLs, using the feedparser library to parse the feeds and extract key information such as each article's title, link, summary, and published date.

To provide context for similarity search and other applications, we need the actual content of the articles. The get_article_body function fetches the article's HTML content, parses it using BeautifulSoup, and extracts the relevant text paragraphs. This content is crucial for creating a rich context for each article.

After gathering the necessary information, we create data objects for each article and insert them into the Weaviate vector database. Weaviate provides a client library that simplifies the process of adding data. We use the weaviate.Client class to interact with the Weaviate instance and batch-insert the articles' data objects.

Now that we have laid the groundwork for building our context-based news retrieval system, in the next sections we'll delve deeper into Weaviate's role in this application and how we can leverage it for similarity search and more advanced features.

Weaviate Configuration and Schema

Weaviate, an open-source vector database, plays a pivotal role in our application. It stores and retrieves data based on their semantic relationships and vector representations. Weaviate's ability to store text data and create vector representations for efficient similarity search aligns perfectly with our goal of context-based news retrieval. By utilizing Weaviate, we enable our system to understand the context of news articles and retrieve semantically similar content.

To structure the data stored in Weaviate, we define a class called RSS_Entry. This class schema includes properties like UUID, Title, Link, Summary, PublishedDate, and Body. These properties capture essential information about each news article and provide a solid foundation for context retrieval.
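Two helper functions referenced above, wait_for_weaviate and get_article_body, are not shown in the listing. Here is a minimal sketch of what they might look like; the readiness URL, retry interval, request timeouts, and the minimum paragraph length used to filter out menu text are illustrative assumptions rather than the article's actual implementation:

import time
import requests
from bs4 import BeautifulSoup

# Assumed host and readiness endpoint for the Weaviate container (not part of the original listing)
WEAVIATE_READY_URL = "http://weaviate:8080/v1/.well-known/ready"

def wait_for_weaviate(interval_seconds=5):
    """Block until the Weaviate instance answers its readiness probe."""
    while True:
        try:
            response = requests.get(WEAVIATE_READY_URL, timeout=5)
            if response.status_code == 200:
                print("Weaviate is ready.")
                return
        except requests.exceptions.RequestException:
            pass  # Weaviate is not reachable yet
        print("Waiting for Weaviate...")
        time.sleep(interval_seconds)

def get_article_body(url):
    """Fetch an article page and return its text content as a list of paragraphs."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
    except requests.exceptions.RequestException:
        return []  # Skip articles that cannot be fetched
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    # Drop very short fragments (menus, captions, etc.); the 40-character cutoff is arbitrary
    return [p for p in paragraphs if len(p) > 40]

With those helpers in place, the RSS_Entry class definition itself looks like this: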
# Define the "RSS_Entry" class
class_obj = {
    "class": "RSS_Entry",
    "description": "An entry from an RSS feed",
    "properties": [
        {"dataType": ["text"], "description": "UUID of the entry", "name": "UUID"},
        {"dataType": ["text"], "description": "Title of the entry", "name": "Title"},
        {"dataType": ["text"], "description": "Link of the entry", "name": "Link"},
        {"dataType": ["text"], "description": "Summary of the entry", "name": "Summary"},
        {"dataType": ["text"], "description": "Published Date of the entry", "name": "PublishedDate"},
        {"dataType": ["text"], "description": "Body of the entry", "name": "Body"}
    ],
    "vectorizer": "text2vec-transformers"
}

# Add the schema
client.schema.create_class(class_obj)

# Retrieve the schema
schema = client.schema.get()

The uniqueness of Weaviate lies in its ability to represent text data as vectors. Our application leverages the text2vec-transformers module as the default vectorizer. This module transforms text into vector embeddings using advanced language models. This vectorization process ensures that the semantic relationships between articles are captured, enabling meaningful similarity search and context retrieval.

Real-time and Historical Data Insertion

Efficient data insertion is vital for ensuring that our Weaviate-based news retrieval system provides up-to-date and historical context for users. Our application caters to two essential use cases: real-time context retrieval and historical context analysis. The ability to insert real-time news articles ensures that users receive the most recent information. Additionally, historical data insertion enables a broader perspective by allowing users to explore trends and patterns over time.

To populate our database with historical data, we utilize the fetch_rss function. This function fetches news articles from the last few days, as specified by the days_ago parameter. The retrieved articles are then processed, and data objects are batch-inserted into Weaviate. This process guarantees that our database contains a diverse set of historical articles.

def fetch_rss(from_datetime=None):
    all_data = []
    all_entries = []

    # Step 1: Fetch all the entries from the RSS feeds and filter them by date.
    for url in RSS_URLS:
        print(f"Fetching {url}")
        feed = feedparser.parse(url)
        entries = feed.entries
        print('feed.entries', len(entries))
        for entry in feed.entries:
            entry_date = parse_date(entry.published)
            # Filter the entries based on the provided date
            if from_datetime and entry_date <= from_datetime:
                continue
            # Storing only necessary data to minimize memory usage
            all_entries.append({
                "Title": entry.title,
                "Link": entry.link,
                "Summary": entry.summary,
                "PublishedDate": entry.published
            })

    # Step 2: Shuffle the filtered entries.
    random.shuffle(all_entries)

    # Step 3: Extract the body for each entry and break it down by paragraphs.
    for entry in all_entries:
        article_body = get_article_body(entry["Link"])
        print("\nTitle:", entry["Title"])
        print("Link:", entry["Link"])
        print("Summary:", entry["Summary"])
        print("Published Date:", entry["PublishedDate"])
        # Create separate records for each paragraph
        for paragraph in article_body:
            data = {
                "UUID": str(uuid.uuid4()),  # UUID for each paragraph
                "Title": entry["Title"],
                "Link": entry["Link"],
                "Summary": entry["Summary"],
                "PublishedDate": entry["PublishedDate"],
                "Body": paragraph
            }
            all_data.append(data)

    print("-" * 50)
    df = pd.DataFrame(all_data)
    return df

The real-time data insertion loop ensures that newly published articles are promptly added to the Weaviate database. We fetch news articles from the last minute and follow the same data insertion process. This loop ensures that the database is continuously updated with fresh content.

Conclusion

In this article, we've explored crucial aspects of building an RSS news retrieval system with Weaviate. We delved into Weaviate's role as a vector database, examined the RSS_Entry class schema, and understood how text data is vectorized using text2vec-transformers. Furthermore, we discussed the significance of real-time and historical data insertion in providing users with relevant and up-to-date news context. With a solid foundation in place, we're well-equipped to move forward and explore more advanced applications, such as similarity search and context-based content retrieval, which is what we will be building in the next article. The seamless integration of Weaviate with our news fetcher sets the stage for a powerful context-aware information retrieval system.

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company to create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as the founder of startups, and later on earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn

Generative Fill with Adobe Firefly (Part II)

Joseph Labrecque
24 Aug 2023
9 min read
Adobe Firefly Overview

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ.

Image 1: Adobe Firefly

For more information about the usage of Firefly to generate images, text effects, and more… have a look at the previous articles in this series:

      Animating Adobe Firefly Content with Adobe Animate
      Exploring Text to Image with Adobe Firefly
      Generating Text Effects with Adobe Firefly
      Adobe Firefly Feature Deep Dive
      Generative Fill with Adobe Firefly (Part I)

This is the conclusion of a two-part article. You can catch up by reading Generative Fill with Adobe Firefly (Part I). In this article, we’ll continue our exploration of Firefly with the Generative fill module by looking at how to use the Insert and Replace features… and more.

Generative Fill – Part I Recap

In part I of our Firefly Generative fill exploration, we uploaded a photograph of a cat, Poe, to the AI and began working with the various tools to remove the background and replace it with prompt-based generative AI content.

Image 2: The original photograph of Poe

Note that the original photograph includes a set of electric outlets exposed within the wall. When we remove the background, Firefly recognizes that these objects are distinct from the general background and so retains them.

Image 3: A set of backgrounds is generated for us to choose from

You can select any of the four variations that were generated from the set of preview thumbnails beneath the photograph. Again, if you’d like to view these processes in detail – check out Generative Fill with Adobe Firefly (Part I).

Insert and Replace with Generative Fill

We covered generating a background for our image in part I of this article. Now we will focus on other aspects of Firefly Generative fill, including the Remove and Insert tools.

Consider the image above and note that the original photograph included a set of electric outlets exposed within the wall. When we removed the background in part I, Firefly recognized that they were distinct from the general background and so retained them. The AI has taken them into account when generating the new background… but we should remove them. This is where the Remove tool comes into play.

Image 4: The Remove tool

Switching to the Remove tool will allow you to brush over an area of the photograph you’d like to remove. It fills in the removed area with pixels generated by the AI to create seamless removal.

1. Select the Remove tool now. Note that when switching between the Insert and Remove tools, you will often encounter a save prompt as seen below. If there are no changes to save, this prompt will not appear!

Image 5: When you switch tools… you may be asked to save your work

2. Simply click the Save button to continue – as choosing the Cancel button will halt the tool selection.

3. With the Remove tool selected, you can adjust the Brush Settings from the toolbar below the image, at the bottom of the screen.

Image 6: The Brush Settings overlay

4. Zoom in closer to the wall outlet and brush over the area by clicking and dragging with your mouse. The size of your brush, depending upon brush settings, will appear as a circular outline. You can change the size of the brush by tapping the [ or ] keys on your keyboard.

Image 7: Brushing over the wall outlet with the Remove tool

5.
Once you are happy with the selection you’ve made, click the Remove button within the toolbar at the bottom of the screen.

Image 8: The Remove button appears within the toolbar

6. The Firefly AI uses Generative fill to replace the brushed-over area with new content based upon the surrounding pixels. A set of four variations appears below the photograph. Click on each one to preview – as they can vary quite a bit.

Image 9: Selecting a fill variant

7. Click the Keep button in the toolbar to save your selection and continue editing. Remember – if you attempt to switch tools before saving… Firefly will prompt you to save your edits via a small overlay prompt.

The outlet has now been removed and the wall is all patched up.

Aside from the removal of objects through Generative fill, we can also perform insertions based on text prompts. Let’s add some additional elements to our photograph using these methods.

1. Select the Insert tool from the left-hand toolbar.

2. Use it in a similar way as we did the Remove tool to brush in a selection of the image. In this case, we’ll add a crown to Poe’s head – so brush in an area that contains the top of his head and some space above it. Try and visualize a crown shape as you do this.

3. In the prompt input that appears beneath the photograph, type in a descriptive text prompt similar to the following: “regal crown with many jewels”

Image 10: A selection is made, and a text prompt inserted

4. Click the Generate button to have the Firefly AI perform a Generative fill insertion based upon our text prompt as part of the selected area.

Image 11: Poe is a regal cat

5. A crown is generated in accordance with our text prompt and the surrounding area. A set of four variations to choose from appears as well. Note how integrated they appear against the original photographic content.

6. Click the Keep button to commit and save your crown selection.

7. Let’s add a scepter as well. Brush the general form of a scepter across Poe’s body extending from his paws to his shoulder.

8. Type in the text prompt: “royal scepter”

Image 12: Brushing in a scepter shape

9. Click the Generate button to have the Firefly AI perform a Generative fill insertion based upon our text prompt as part of the selected area.

Image 13: Poe now holds a regal scepter in addition to his crown

10. Remember to choose a scepter variant and click the Keep button to commit and save your scepter selection.

Okay! That should be enough regalia to satisfy Poe.
Let’s download our creation for distribution or use in other software.

Downloading your Image

Click the Download button in the upper right of the screen to begin the download process for your image.

Image 14: The Download button

As Firefly begins preparing the image for download, a small overlay dialog appears.

Image 15: Content credentials are applied to the image as it is downloaded

Firefly applies metadata to any generated image in the form of content credentials and the image download process begins. Once the image is downloaded, it can be viewed and shared just like any other image file.

Image 16: The final image from our exploration of Generative fill

Along with content credentials, a small badge is placed upon the lower right of the image which visually identifies the image as having been produced with Adobe Firefly.

That concludes our set of articles on using Generative fill to remove and insert objects into your images using the Adobe Firefly AI. We have a number of additional articles on Firefly procedures on the way… including Generative recolor for vector artwork!

Author Bio

Joseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by Design.

Joseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023

Generative Fill with Adobe Firefly (Part I)

Joseph Labrecque
24 Aug 2023
8 min read
Adobe Firefly AI Overview

Adobe Firefly is a new set of generative AI tools that can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ.

Image 1: Adobe Firefly

For more information about the usage of Firefly to generate images, text effects, and more… have a look at the previous articles in this series:

  Animating Adobe Firefly Content with Adobe Animate
  Exploring Text to Image with Adobe Firefly
  Generating Text Effects with Adobe Firefly
  Adobe Firefly Feature Deep Dive

In the next two articles, we’ll continue our exploration of Firefly with the Generative fill module. We’ll begin with an overview of accessing Generative fill from a generated image and then explore how to use the module on our own personal images.

Recall from a previous article, Exploring Text to Image with Adobe Firefly, that when you hover your mouse cursor over a generated image – overlay controls will appear.

Image 2: Generative fill overlay control from Text to image

One of the controls in the upper right of the image frame will invoke the Generative fill module and pass the generated image into that view.

Image 3: The generated image is sent to the Generative fill module

Within the Generative fill module, you can use any of the tools and workflows that are available when invoking Generative fill from the Firefly website. The only difference is that you are passing in a generated image rather than uploading an image from your local hard drive. Keep this in mind as we continue to explore the basics of Generative fill in Firefly – as we’ll begin the process from scratch.

Generative Fill

When you first enter the Firefly web experience, you will be presented with the various workflows available. These appear as UI cards and present a sample image, the name of the procedure, a procedure description, and either a button to begin the process or a label stating that it is “in exploration”. Those which are in exploration are not yet available to general users. We want to locate the Generative fill module and click the Generate button to enter the experience.

Image 4: The Generative fill module card

From there, you’ll be taken to a view that prompts you to upload an image into the module. Firefly also presents a set of sample images you can load into the experience.

Image 5: The Generative fill getting started prompt

Clicking the Upload image button summons a file browser for you to locate the file you want to use Generative fill on. In my example, I’ll be using a photograph of my cat, Poe. You can download the photograph of Poe [[ NOTE – LINK TO FILE Poe.jpg ]] to work with as well.

Image 6: The photograph of Poe, a cat

Once the image file has been uploaded into Firefly, you will be taken to the Generative fill user experience and the photograph will be visible. Note that this is exactly the same experience as when entering Generative fill from a prompt-generated image as we saw above. The only real difference is how we get to this point.

Image 7: The photograph is loaded into Generative fill

You will note that there are two sets of tools available within the experience. One set is along the left side of the screen and includes the Insert, Remove, and Pan tools.

Image 8: Insert, Remove, and Pan

Switching between the Insert and Remove tools changes the function of the current process. The Pan tool allows you to pan the image around the view. Along the bottom of the screen is the second set of tools – which are focused on selections.
This set contains the Add and Subtract tools, access to Brush Settings, a Background removal process, and a selection Invert toggle.   Image 9: Add, Subtract, Brush Settings, Background removal, and selection Invert Let’s perform some Generative fill work on the photograph of Poe.  In the larger overlay along the bottom of the view, locate and click the Background option. This is an automated process that will detect and remove the background from the image loaded into Firefly.   Image 10: The background is removed from the selected photograph 2. A prompt input appears directly beneath the photograph. Type in the following prompt: “a quiet jungle at night with lots of mist and moonlight”  Image 11: Entering a prompt into the prompt input control 3. If desired, you can view and adjust the settings for the generative AI by clicking the Settings icon in the prompt input control. This summons the Settings overlay.  Image 12: The generative AI Settings overlay Within the Settings overlay, you will find there are three items that can be adjusted to influence the AI:  Match shape: You have two choices here – freeform or conform.  Preserve content: A slider that can be set to include more of the original content or produce new content. Guidance strength: A slider that can be set to provide more strength to the original image or the given prompt. I suggest leaving these at the default setting for now. 4. Click the Settings icon again to dismiss the overlay. 5. Click the Generate button to generate a background based upon the entered prompt. A new background is generated from our prompt, and it now appears as though Poe is visiting a lush jungle at night.   Image 13: Poe enjoying the jungle at night Note that the original photograph included a set of electric outlets exposed within the wall. When we removed the background, Firefly recognized that they were distinct from the general background and so retained them. The AI has taken them into account when generating the new background and has interestingly propped them up with a couple of sticks. It also has gone through and rendered a realistic shadow cast by Poe.  Before moving on, click the Cancel button to bring the transparent background back. Clicking the Keep button will commit the changes – and we do not want that as we wish to continue exploring other options. Clear out the prompt you previously wrote within the prompt input control so that there is no longer any prompt present.   Image 14: Click the Generate button with no prompt present 3. Click the Generate button without a text prompt in place. The photograph receives a different background from the one generated with a text prompt. When clicking the Generate button with no text prompt, you are basically allowing the Firefly AI to make all the decisions based solely on the visual properties of the image.   Image 15: A set of backgrounds is generated based on the remaining pixels present You can select any of the four variations that were generated from the set of preview thumbnails beneath the photograph. If you’d like Firefly to generate more variations – click the More button. Select the one you like best and click the Keep button. Okay! That’s pretty good but we are not done with Generative fill yet. We haven’t even touched the Insert and Remove functions… and there are Brush Settings to manipulate… and much more. In the next article, we’ll explore the remaining Generative fill tools and options to further manipulate the photograph of Poe.  
Author BioJoseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by DesignJoseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.Author of the book: Mastering Adobe Animate 2023

Adobe Firefly Feature Deep Dive

Joseph Labrecque
23 Aug 2023
9 min read
Adobe FireflyAdobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ.  Image 1: Adobe FireflyFor more information about Firefly, have a look at the previous articles in this series:       Animating Adobe Firefly Content with Adobe Animate       Exploring Text to Image with Adobe Firefly       Generating Text Effects with Adobe FireflyIn this article, we’ll be exploring some of the more detailed features of Firefly in general. While we will be doing so from the perspective of the text-to-image module, much of what we cover will be applicable to other modules and procedures as well.Before moving on to the visual controls and options… let’s consider accessibility. Here is what Adobe has to say about accessibility within Firefly:Firefly is committed to providing accessible and inclusive features to all individuals, including users working with assistive devices such as speech recognition software and screen readers. Firefly is continuously enhanced to strive to meet the needs of all types of users, including individuals with visual, hearing, cognitive, motor, or other impairments, and is designed to conform to worldwide accessibility standards. -- AdobeYou can use the following keyboard shortcuts across the Firefly interface to navigate and control the software in a non-visual way:       Tab: navigates between user interface controls.       Space/Enter: activates buttons.       Enter: activates links.       Arrow Keys: navigates between options.       Space: selects options.As with most accessibility concerns and practices, these additional controls within Firefly can benefit those users who are not otherwise impaired as well – similar to sight-enabled users making use of captions when watching video-based content.For our exploration of the various additional controls and options within Firefly, we’ll start off with a generated set of images based on a prompt. To review how to achieve this, have a look at the article “Exploring Text to Image with Adobe Firefly”.Choose one of the generated images to work with and hover your mouse across the image to reveal a set of controls.Image 2: Image Overlay OptionsWe will explore each of these options one by one as we continue along with this article.Rating and Feedback OptionsAdobe is very open to feedback with Firefly. One reason is to get general user feedback to improve the experience of using the product… and the other is to influence the generative models so that users receive the output that is expected.Giving a simple thumbs-up or thumbs-down is the most basic level of feedback and is meant to rate the results of your prompt.Image 3: Rating the generated resultsOnce you provide a thumbs-up or thumbs-down… the overlay changes to request additional feedback. You don’t necessarily need to provide more feedback – but clicking on the Feedback button will allow you to go more in-depth in terms of why you provided the initial rating.Image 4: Additional feedback promptClicking the Feedback button will summon a much larger overlay where you can make choices via a checkbox as to why you rated the results the way you did. You also have the option to put a little note in here as well.Image 5: Additional feedback formClicking the Submit Feedback button or the Cancel button will close the overlay and bring you back to the experience.Additionally, there is an option to Report the image to Adobe. 
This is always a negative action – meaning that you find the results offensive or inappropriate in some way.Image 6: Report promptClicking on the Report option will summon a similar form to that of additional feedback, but the options will, of course, be different.Image 7: Report feedback formHere, you can report via a checkbox and add an optional note as part of the report. Adobe has committed to making sure that violence and things like copyrighted or trademarked characters and such are not generated by Firefly.For instance, if you use a prompt such as “Micky Mouse murdering a construction worker with a chainsaw”… you will receive a message like the following:Image 8: Firefly will not render trademarked characters or violenceWith Adobe is being massively careful in filtering certain words right now… I do hope in the future that users will be able to selectively choose exclusions in place of a general list of censored terms as exists now. While the prompt above is meant to be absurd – there are legitimate artistic reasons for many of the word categories which are currently banned.General Image ControlsThe controls in this section include some of the most used in Firefly at the moment – including the ability to download your generated image.Image 9: Image optionsWe have the following controls exposed, from left to right they are named:       Options       Download       FavoriteOptionsStarting at the left-hand side of this group of controls, we begin with an ellipse that represents Options which, when clicked, will summon a small overlay with additional choices.Image 10: Expanded optionsThe menu that appears includes the following items:1.     Submit to Firefly gallery2.     Use as a reference image3.     Copy to the clipboardLet’s examine each of these in detail.You may have noticed that the main navigation of the Firefly website includes a number of options: Home, Gallery, Favorites, About, and FAQ. The Gallery section contains generated images that users have submitted to be featured on this page.Clicking the Submit to Firefly gallery option will summon a submission overlay through which you can request that your image is included in the Gallery.Image 11: Firefly Gallery submissionSimply read over the details and click Continue or Cancel to return.The second item, Use as reference image, brings up a small overlay that includes the selected image to use as a reference along with a strength slider.Image 12: Reference image sliderMoving the slider to the left will favor the reference image and moving it to the right will favor the raw prompt instead. You must click the Generate button after adjusting the slider to see its effect.The final option is Copy to clipboard – which does exactly as you’d expect. Note that Content Credentials are applied in this case just the same as they are when downloading an image. You can read more about this feature in the Firefly FAQ.DownloadBack up to the set of three controls, the middle option allows you to initiate a Download of the selected image. As Firefly begins preparing the image for download, a small overlay dialog appears.Image 13: Download applies content credentials – similar to the Copy to clipboard optionFirefly applies metadata to any generated image in the form of content credentials and the image download process begins. We’ve covered exactly what this means in previous articles. 
The image is then downloaded to your local file system.FavoriteClicking the Favorite control will add the generated image to your Firefly Favorites so that you can return to the generated set of images for further manipulation or to download later on.Image 14: Adding a favoriteThe Favorite control works as a toggle. Once you declare a favorite, the heart icon will appear filled and the control will allow you to un-favorite the selected image instead.That covers the main set of controls which overlay the right of your image – but there is a smaller set of controls on the left that we must explore as well.Additional Manipulation OptionsThe alternative set of controls numbers only two – but they are both very powerful. To the left is the Show similar control and to the right is Generative fill.Image 15: Show similar and Generative fill controlsClicking upon the Show similar control will retain the particular, chosen image while regenerating the other three to be more in conformity with the image specified.Image 16: Show similar will refresh the other three imagesAs you can see when comparing the sets of images in the figures above and below… you can have great influence over your set of generated images through this control.Image 17: The original image stays the sameThe final control we will examine in this article is Generative fill. It is located right next to the Show similar control.The generative fill view presents us with a separate view and a number of all-new tools for making selections in order to add or remove content from our images.Image 18: Generative fill brings you to a different view altogetherGenerative fill is actually its own proper procedure in Adobe Firefly… and we’ll explore how to use this feature in full - in the next article! Author BioJoseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by DesignJoseph is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.Author of the book: Mastering Adobe Animate 2023 

Generative Recolor with Adobe Firefly

Joseph Labrecque
23 Aug 2023
10 min read
Adobe Firefly OverviewAdobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ:Image 1: Adobe FireflyFor more information about the usage of Firefly to generate images, text effects, and more… have a look at the previous articles in this series:     Animating Adobe Firefly Content with Adobe Animate       Exploring Text to Image with Adobe Firefly      Generating Text Effects with Adobe Firefly       Adobe Firefly Feature Deep Dive       Generative Fill with Adobe Firefly (Part I)      Generative Fill with Adobe Firefly (Part II)This current Firefly article will focus on a unique use of AI prompts via the Generative recolor module.Generative Recolor and SVGWhile most procedures in Firefly are focused on generating imagery through text prompts, the service also includes modules that use prompt-driven AI a bit differently. The subject of this article, Generative recolor, is a perfect example of this.Generative recolor works with vector artwork in the form of SVG files. If you are unfamiliar with SVG, it stands for Scalable Vector Graphics and is an XML… so uses text-based nodes similar to HTML:Image 2: An SVG file is composed of vector information defining points, paths, and colorsAs the name indicates, we are dealing with vector graphics here and not photographic pixel-based bitmap images. Vectors are often used for artwork, logos, and such – as they can be infinitely scaled and easily recolored.One of the best ways of generating SVG files is by designing them in a vector-based design tool like Adobe Illustrator. Once you have finished designing your artwork, you’ll save it as SVG for use in Firefly:Image 3: Cat artwork designed in Adobe IllustratorTo convert your Illustrator artwork to SVG, perform the following steps:1.     Choose File > Save As to open the save as dialog.2.     Choose SVG (svg) for the file format:Image 3: Selecting SVG (svg) as the file format3.     Browse to the location on your computer you would like to save the file to.4.     Click the Save button.You now have an SVG file ready to recolor within Firefly. If you desire, you can download the provided cat.svg file that we will work on in this article. Recolor Vector Artwork with Generative RecolorGenerative recolor, like all Firefly modules, can be found directly at https://firefly.adobe.com/ so long as you are logged in with your Adobe ID.From the main Firefly page, you will find a number of modules for different AI-driven tasks:Image 4: Locate the Generative recolor Firefly moduleLet’s explore Generative recolor in Firefly:1.     You’ll want to locate the module named Generative recolor.2.     Click the Generate button to get started.You are taken to an intermediate view where you are able to upload your chosen SVG file for the purposes of vector recolor based upon a descriptive text prompt:Image 5: The Upload SVG button prompt appears, along with sample files3.     Click the Upload SVG button and choose cat.svg from your file system. Of course, you can use any SVG file you want if you have another in mind. 
If you do not have an SVG file you’d like to use, you can click on any of the samples presented below the Upload SVG button to load one up into the module.The SVG is uploaded and a control appears which displays a preview of your file along with a text input where you can write a short text prompt describing the color palette you’d like to generate:Image 6: The Generative recolor input requests a text prompt4.     Think of some descriptive words for an interesting color palette and type them into the text input. I’ll input the following simple prompt for this demonstration: “northern lights”.5.     Click the Generate button when ready.You are taken into the primary Generative recolor user interface experience and a set of four color variants is immediately available for preview:Image 7: The Firefly Generative recolor user interfaceThe interface appears similar to what you might have seen in other Firefly modules – but there are some key differences here, since we are dealing with recoloring vector artwork.The larger, left-most area contains a set of four recolor variants to choose from. Below this is the prompt input area which displays the current text prompt and a Refresh button that allows the generation of additional variants when the prompt is updated. To the right of this area are presented various additional options within a clean user interface that scrolls vertically. Let’s explore these from top to bottom.The first thing you’ll see is a thumbnail of your original artwork with the ability to replace the present SVG with a new file:Image 8: You can replace your artwork by uploading a new SVG fileDirectly below this, you will find a set of sample prompts that can be applied to your artwork:Image 9: Sample prompts can provide immediate resultsClicking upon any of these thumbnails will immediately overwrite the existing prompt and cause a refresh – generating a new set of four recolor variants.Next, is a dropdown selection which allows the choice of color harmony:Image 10: A number of color harmonies are availableChoosing to align the recolor prompt with a color harmony will impact which colors are being used based off a combination of the raw prompt – guided by harmonization rules. An indicator will be added along with the text prompt.For more information about color and color harmonies, check out Understanding color: A visual guide – from Adobe.Below is a set of eighteen color swatches to choose from:Image 11: Color chips can add bias to your text promptClicking on any of these swatches will add that color to the options below your text prompt to help guide the recolor process. You can select one or many of these swatches to use.Finally, at the very bottom of this area is a toggle switch that allows you to either preserve black and white colors in your artwork or to recolor them just like any other color:Image 12: You can choose to preserve black and white during a recolor session or notThat is everything along the right-hand side of the interface. We’ll return to this area shortly – but for now… let’s see the options that appear when hovering the mouse cursor over any of the four recolor variants:Image 13: The Generative recolor overlayHovering over a recolor variant will reveal a number of options:       Prominent colors: Displays the colors used in this recolor variant.       Shuffle colors: Will use the same colors… but distribute them differently across the vector artwork.       Options: Copy to clipboard is the only option that is available via this menu.       
Download: Enables the download of this particular recolor variant.       Rate this result: Provide a positive or negative rating of this result.We’ll make use of the Download option in a bit – but first… let’s make use of some of the choices present in the right side panel to modify and guide our recolor.Modifying the PromptYou can always change the text prompt however you wish and click the Refresh button to generate a different set of variants. Let’s instead keep this same text prompt but see how various choices can impact how it affects the recolor results:Image 14: A modified prompt box with options addedFocus again on the right side of the user interface and make the following selections:1.     Select a color harmony: Complementary2.     Choose a couple of colors to weight the prompt: Green and Blue violet3.     Disable the Preserve black and white toggle4.     Click the Refresh button to see the results of these optionsA new set of four recolor variants is produced. This set of variants is guided by the extra choices we made and is vastly different from the original set which was recolored solely based upon the text prompt:Image 15: A new set of recolor variations is generatedPlay with the various options on your own to see what kind of variations you can achieve in the artwork.Downloading your Recolored ArtworkOnce you are happy with one of the generated recolored variants, you’ll want to download it for use elsewhere. Click the Download button in the upper right of the selected variant to begin the download process for your recolored SVG file.The recolored SVG file is immediately downloaded to your computer. Note that unlike other content generated with Firefly, files created with Generative recolor do not contain a Firefly watermark or badge:Image 17: The resulting recolored SVG fileThat’s all there is to it! You can continue creating more recolor variants and freely download any that you find particularly interesting.Before we conclude… note that another good use for Generative recolor – similar to most applications of AI – is for ideation. If you are stuck with a creative block when trying to decide on a color palette for something you are designing… Firefly can help kick-start that process for you.Author BioJoseph is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by DesignJoseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. 
He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.Author of the book: Mastering Adobe Animate 2023 

Adobe Firefly Integrations in Illustrator and Photoshop

Joseph Labrecque
23 Aug 2023
12 min read
Adobe Firefly OverviewAdobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ:Image 1: Adobe FireflyFor more information around the usage of Firefly to generate images, text effects, and more… have a look at the previous articles in this series:       Animating Adobe Firefly Content with Adobe Animate       Exploring Text to Image with Adobe Firefly       Generating Text Effects with Adobe Firefly       Adobe Firefly Feature Deep Dive      Generative Fill with Adobe Firefly (Part I)      Generative Fill with Adobe Firefly (Part II)       Generative Recolor with Adobe Firefly       Adobe Firefly and Express (beta) IntegrationThis current Firefly article will focus on Firefly integrations within the release version of Adobe Illustrator and the public beta version of Adobe Photoshop.Firefly in Adobe IllustratorVersion 27.7 is the most current release of Illustrator at the writing of this article and this version contains Firefly integrations in the form of Generative Recolor (Beta).To access this, design any vector artwork within Illustrator or open existing artwork to get started. I’m using the cat.ai file that was used to generate the cat.svg file used in the Generative Recolor with Adobe Firefly article:Image 2: The cat vector artwork with original colors1.     Select the artwork you would like to recolor. Artwork must be selected for this to work.2.     Look to the Properties panel and locate the Quick Actions at the bottom of the panel. Click the Recolor quick action:Image 3: Choosing the Recolor Quick action3.     By default, the Recolor overlay will open with the Recolor tab active. Switch to the Generative Recolor (Beta) tab to activate it instead:Image 4: The Generative Recolor (Beta) view4.     You are invited to enter a prompt. I’ve written “northern lights green and vivid neon” as my prompt that describes colors I’d like to see. There are also sample prompts you can click on below the prompt input box.5.     Click the Generate button once a prompt has been entered:Image 5: Selecting a Recolor variantA set of recolor variants is presented within the overlay. Clicking on any of these will recolor your existing artwork according to the variant look:Image 6: Adding a specific color swatchIf you would like to provide even more guidance, you can modify the prompt and even add specific color swatches you’d like to see included in the recolored artwork.That’s it for Illustrator – very straightforward and easy to use!Firefly in Adobe Photoshop (beta)Generative Fill through Firefly is also making its way into Photoshop. While within Illustrator – we have Firefly as part of the current version of the software, albeit with a beta label on the feature, with Photoshop things are a bit different:Image 7: Generative Fill is only available in the Photoshop public betaTo make use of Firefly within Photoshop, the current release version will not cut it. You will need to install the public beta from the Creative Cloud Desktop application in order to access these features.With that in mind, let’s use Generative Fill in the Photoshop public beta to expand a photograph beyond its bounds and add in additional objects.1.     First, open a photograph in the Photoshop public beta. I’m using the Poe.jpg photograph that we previously used in the articles Generative Fill with Adobe Firefly (Parts I & II):Image 8: The original photograph in Photoshop2.     
With the photograph open, we’ll add some extra space to the canvas to generate additional content and expand the image beyond its bounds. Summon the Canvas Size dialog by choosing Image > Canvas Size… from the application menu.3.     Change both the width and height values to 200 Percent:Image 9: Expanding the size of the canvas4.     Click the OK button to close the dialog and apply the change.The original canvas is expanded to 200 percent of its original size while the image itself remains exactly the same:Image 10: The photograph with an expanded canvasGenerative Fill, when used in this manner to expand an image, works best by selecting portions to expand bit by bit rather than all the expanded areas at once. It is also beneficial to select parts of the original image you want to expand from. This feeds and directs the Firefly AI.5.     Using the Rectangular Marquee tool, make such a selection across either the top, bottom, left, or right portions of the document:Image 11: Making a selection for Generative Fill6.     With a selection established, click Generative Fill within the contextual toolbar:Image 12: Leaving the prompt blank allows Photoshop to make all the decisions7.     The contextual toolbar will now display a text input where you can enter a prompt to guide the process. However, in this case, we want to simply expand the image based upon the original pixels selected – so we will leave this blank with no prompt whatsoever. Click Generate to continue.8.     The AI processes the image and displays a set of variants to choose from within the Properties panel. Click the one that conforms closest to the imagery you are looking to produce and that is what will be used upon the canvas:Image 13: Choosing a Generative Fill variantNote that if you look to the Layers panel, you will find a new layer type has been created and added to the document layer stack:Image 14: Generative Layers are a new layer type in PhotoshopThe Generative Layer retains both the given prompt and variants so that you can continue to make changes and adjustments as needed – even following this specific point in time.The resulting expansion of the original image as performed by Generative Fill can be very convincing! As mentioned before, this often works best by performing fills in a piece-by-piece patchwork manner:Image 15: The photograph with a variant applied across the selectionContinue selecting portions of the image using the Rectangular Marquee tool (or any selection tools, really) and generate new content the same way we have done so already – without supplying any text prompt to the AI:Image 16: The photograph with all expanded areas filled via generative AIEventually, you will complete the expansion of the original image and produce a very convincing deception.Of course, you can also guide the AI with actual text prompts. Let’s add in an object to the image as a demonstration.1.     Using the Lasso tool (or again… any selection tool), make a selection across the image in the form of what might hold a standing lamp of some sort:Image 17: Making an additional selection2.     With a selection established, click Generative Fill within the contextual toolbar.3.     Type in a prompt that describes the object you want to generate. I will use the prompt “tall rustic wooden and metal lamp”.4.     Click the Generate button to process the Generative Fill request:Image 18: A lamp is generated from our selection and text promptA set of generated lamp variants are established within the Properties panel. 
Choose the one you like the most and it will be applied within the image.You will want to be careful with how many Generative Layers are produced as you work on any single document. Keep an eye on the Layers panel as you work:Image 19: Each Generative Fill process produces a new layerEach time you use Generative Fill within Photoshop, a new Generative Layer is produced.Depending upon the resources and capabilities of your computer… this might become burdensome as everything becomes more and more complex. You can always flatten your layers to a single pixel layer if this occurs to free up additional resources.That concludes our overview of Generative Fill in the Photoshop public beta!Ethical Concerns with Generative AII want to make one additional note before concluding this series and that has to do with the ethics of generative AI. This concern goes beyond Adobe Firefly specifically – as it could be argued that Firefly is the least problematic and most ethical implementation of generative AI that is available today.See https://firefly.adobe.com/faq for additional details on steps Adobe has taken to ensure responsible AI through their use of Adobe Stock content to train their models, through the use of Content Credentials, and more...Like all our AI capabilities, Firefly is developed and deployed around our AI ethics principles of accountability, responsibility, and transparency.Data collection: We train our model by collecting diverse image datasets, which have been curated and preprocessed to mitigate against harmful or biased content. We also recognize and respect artists’ ownership and intellectual property rights. This helps us build datasets that are diverse, ethical, and respectful toward our customers and our community.Addressing bias and testing for safety and harm: It’s important to us to create a model that respects our customers and aligns with our company values. In addition to training on inclusive datasets, we continually test our model to mitigate against perpetuating harmful stereotypes. We use a range of techniques, including ongoing automated testing and human evaluation.Regular updates and improvements: This is an ongoing process. We will regularly update Firefly to improve its performance and mitigate harmful bias in its output. We also provide feedback mechanisms for our users to report potentially biased outputs or provide suggestions into our testing and development processes. We are committed to working together with our customers to continue to make our model better.-- AdobeI have had discussions with a number of fellow educators about the ethical use of generative AI and Firefly in general. 
Here are some paraphrased takeaways to consider as we conclude this article series:      “We must train the new generations in the respect and proper use of images or all kinds of creative work.”      “I don't think Ai can capture that sensitive world that we carry as human beings.”      “As dire as some aspects of all of this are, I see opportunities.”      “Thousands of working artists had their life's work unknowingly used to create these images.”       “Professionals will be challenged, truly, by all of this, but somewhere in that process I believe we will find our space.”      “AI data expropriations are a form of digital colonialism.”      “For many students, the notion of developing genuine skill seems pointless now.”     “Even for masters of the craft, it’s dispiriting to see someone type 10 words and get something akin to what took them 10 years.”I’ve been using generative AI for a few years now and can appreciate and understand the concerns expressed above - but also recognize that this technology is not going away. We must do what we can to address the ethical concerns brought up here and make sure to use our awareness of these problematic issues to further guide the direction of these technologies as we rapidly advance forward. These are very challenging times, right now. Author BioJoseph Labrecque is a Teaching Assistant Professor, Instructor of Technology, University of Colorado Boulder / Adobe Education Leader / Partner by DesignJoseph Labrecque is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers which include LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC; a digital media production studio and distribution vehicle for a variety of creative works.Joseph is an Adobe Education Leader and member of Adobe Partners by Design. He holds a bachelor’s degree in communication from Worcester State University and a master’s degree in digital media studies from the University of Denver.Author of the book: Mastering Adobe Animate 2023 
Read more

article-image-getting-started-with-aws-codewhisperer
Rohan Chikorde
23 Aug 2023
11 min read
Save for later

Getting Started with AWS CodeWhisperer

Rohan Chikorde
23 Aug 2023
11 min read
IntroductionEfficiently writing secure, high-quality code within tight deadlines remains a constant challenge in today's fast-paced software development landscape. Developers often face repetitive tasks, code snippet searches, and the need to adhere to best practices across various programming languages and frameworks. However, AWS CodeWhisperer, an innovative AI-powered coding companion, aims to transform the way developers work. In this blog, we will explore the extensive features, benefits, and setup process of AWS CodeWhisperer, providing detailed insights and examples for technical professionals.At its core, CodeWhisperer leverages machine learning and natural language processing to deliver real-time code suggestions and streamline the development workflow. Seamlessly integrated with popular IDEs such as Visual Studio Code, IntelliJ IDEA, and AWS Cloud9, CodeWhisperer enables developers to remain focused and productive within their preferred coding environment. By eliminating the need to switch between tools and external resources, CodeWhisperer accelerates coding tasks and enhances overall productivity.A standout feature of CodeWhisperer is its ability to generate code from natural language comments. Developers can now write plain English comments describing a specific task, and CodeWhisperer automatically analyses the comment, identifies relevant cloud services and libraries, and generates code snippets directly within the IDE. This not only saves time but also allows developers to concentrate on solving business problems rather than getting entangled in mundane coding tasks.In addition to code generation, CodeWhisperer offers advanced features such as real-time code completion, intelligent refactoring suggestions, and error detection. By analyzing code patterns, industry best practices, and a vast code repository, CodeWhisperer provides contextually relevant and intelligent suggestions. Its versatility extends to multiple programming languages, including Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, Shell scripting, SQL, and Scala, making it a valuable tool for developers across various language stacks.AWS CodeWhisperer addresses the need for developer productivity tools by streamlining the coding process and enhancing efficiency. With its AI-driven capabilities, CodeWhisperer empowers developers to write clean, efficient, and high-quality code. By supporting a wide range of programming languages and integrating with popular IDEs, CodeWhisperer caters to diverse development scenarios and enables developers to unlock their full potential. Embrace the power of AWS CodeWhisperer and experience a new level of productivity and coding efficiency in your development journey.Key Features and Benefits of CodeWhisperer A. Real-time code suggestions and completionCodeWhisperer provides developers with real-time code suggestions and completion, significantly enhancing their coding experience. As developers write code, CodeWhisperer's AI-powered engine analyzes the context and provides intelligent suggestions for function names, variable declarations, method invocations, and more. This feature helps developers write code faster, with fewer errors, and improves overall code quality. By eliminating the need to constantly refer to documentation or search for code examples, CodeWhisperer streamlines the coding process and boosts productivity.B. 
Intelligent code generation from natural language commentsOne of the standout features of CodeWhisperer is its ability to generate code snippets from natural language comments. Developers can simply write plain English comments describing a specific task, and CodeWhisperer automatically understands the intent and generates the corresponding code. This powerful capability saves developers time and effort, as they can focus on articulating their requirements in natural language rather than diving into the details of code implementation. With CodeWhisperer, developers can easily translate their high-level concepts into working code, making the development process more intuitive and efficient.C. Streamlining routine or time-consuming tasksCodeWhisperer excels at automating routine or time-consuming tasks that developers often encounter during the development process. From file manipulation and data processing to API integrations and unit test creation, CodeWhisperer provides ready-to-use code snippets that accelerate these tasks. By leveraging CodeWhisperer's automated code generation capabilities, developers can focus on higher-level problem-solving and innovation, rather than getting caught up in repetitive coding tasks. This streamlining of routine tasks allows developers to work more efficiently and deliver results faster.D. Leveraging AWS APIs and best practicesAs an AWS service, CodeWhisperer is specifically designed to assist developers in leveraging the power of AWS services and best practices. It provides code recommendations tailored to AWS application programming interfaces (APIs), allowing developers to efficiently interact with services such as Amazon EC2, Lambda, and Amazon S3. CodeWhisperer ensures that developers follow AWS best practices by providing code snippets that adhere to security measures, performance optimizations, and scalability considerations. By integrating AWS expertise directly into the coding process, CodeWhisperer empowers developers to build robust and reliable applications on the AWS platform.E. Enhanced security scanning and vulnerability detectionSecurity is a top priority in software development, and CodeWhisperer offers enhanced security scanning and vulnerability detection capabilities. It automatically scans both generated and developer-written code to identify potential security vulnerabilities. By leveraging industry-standard security guidelines and knowledge, CodeWhisperer helps developers identify and remediate security issues early in the development process. This proactive approach to security ensures that code is written with security in mind, reducing the risk of vulnerabilities and strengthening the overall security posture of applications.F. Responsible AI practices to address bias and open-source usageAWS CodeWhisperer is committed to responsible AI practices and addresses potential bias and open-source usage concerns. The AI models behind CodeWhisperer are trained on vast amounts of publicly available code, ensuring accuracy and relevance in code recommendations. However, CodeWhisperer goes beyond accuracy and actively filters out biased or unfair code recommendations, promoting inclusive coding practices. Additionally, it provides reference tracking to identify code recommendations that resemble specific open source training data, allowing developers to make informed decisions and attribute sources appropriately. 
By focusing on responsible AI practices, CodeWhisperer ensures that developers can trust the code suggestions and recommendations it provides.Setting up CodeWhisperer for individual developersIf you are an individual developer who has acquired CodeWhisperer independently and will be using AWS Builder ID for login, follow these steps to access CodeWhisperer from your JetBrains IDE:1.      Ensure that the AWS Toolkit for JetBrains is installed. If it is not already installed, you can install it from the JetBrains plugin marketplace.2.      In your JetBrains IDE, navigate to the edge of the window and click on the AWS Toolkit icon. This will open the AWS Toolkit for the JetBrains panel:3. Within the AWS Toolkit for JetBrains panel, click on the Developer Tools tab. This will open the Developer Tools Explorer.4. In the Developer Tools Explorer, locate the CodeWhisperer section and expand it. Then, select the "Start" option:5. A pop-up window titled "CodeWhisperer: Add a Connection to AWS" will appear. In this window, choose the "Use a personal email to sign up" option to sign in with your AWS Builder ID.6. Once you have entered your personal email associated with your AWS Builder ID, click on the "Connect" button to establish the connection and access CodeWhisperer within your JetBrains IDE:7.      A pop-up titled "Sign in with AWS Builder ID" will appear. Select the "Open and Copy Code" option.8.      A new browser tab will open, displaying the "Authorize request" window. The copied code should already be in your clipboard. Paste the code into the appropriate field and click "Next."9.      Another browser tab will open, directing you to the "Create AWS Builder ID" page. Enter your email address and click "Next." A field for your name will appear. Enter your name and click "Next." AWS will send a confirmation code to the email address you provided.10.   On the email verification screen, enter the code and click "Verify." On the "Choose your password" screen, enter a password, confirm it, and click "Create AWS Builder ID." A new browser tab will open, asking for your permission to allow JetBrains to access your data. Click "Allow."11.   Another browser tab will open, asking if you want to grant access to the AWS Toolkit for JetBrains to access your data. If you agree, click "Allow."12.   Return to your JetBrains IDE to continue the setup process. CodeWhisperer in ActionExample Use Case: Automating Unit Test Generation with CodeWhisperer in Python (Credits: aws-solutions-library-samples):One of the powerful use cases of CodeWhisperer is its ability to automate the generation of unit test code. By leveraging natural language comments, CodeWhisperer can recommend unit test code that aligns with your implementation code. This feature significantly simplifies the process of writing repetitive unit test code and improves overall code coverage.To demonstrate this capability, let's walk through an example using Python in Visual Studio Code:        Begin by opening an empty directory in your Visual Studio Code IDE.        (Optional) In the terminal, create a new Python virtual environment:python3 -m venv .venvsource .venv/bin/activate        Set up your Python environment and ensure that the necessary dependencies are installed.pip install pytest pytest-cov               Create a new file in your preferred Python editor or IDE and name it "calculator.py".       
Add the following comment at the beginning of the file to indicate your intention to create a simple calculator class:

# example Python class for a simple calculator

Once you've added the comment, press the "Enter" key to proceed. CodeWhisperer will analyze your comment and start generating code suggestions based on the desired functionality. To accept the suggested code, simply press the "Tab" key in your editor or IDE.

Picture Credit: aws-solutions-library-samples

In case CodeWhisperer does not provide automatic suggestions, you can manually trigger it to generate recommendations using the following keyboard shortcuts:

For Windows/Linux users, press "Alt + C".
For macOS users, press "Option + C".

If you want to view additional suggestions, you can navigate through them by pressing the Right arrow key. To return to previous suggestions, press the Left arrow key. If you wish to reject a recommendation, you can either press the ESC key or use the backspace/delete key.

To continue building the calculator class, keep pressing the Enter key and accepting CodeWhisperer's suggestions, whether they are provided automatically or triggered manually. CodeWhisperer will propose basic functions for the calculator class, including add(), subtract(), multiply(), and divide(). In addition to these fundamental operations, it can also suggest more advanced functions like square(), cube(), and square_root(). A sketch of what such a generated class and its unit tests might look like is included at the end of this article.

By following these steps, you can leverage CodeWhisperer to enhance your coding workflow and efficiently develop the calculator class, benefiting from a range of pre-generated functions tailored to your specific needs.

Conclusion

AWS CodeWhisperer is a groundbreaking tool that has the potential to revolutionize the way developers work. By harnessing the power of AI, CodeWhisperer provides real-time code suggestions and automates repetitive tasks, enabling developers to focus on solving core business problems. With seamless integration into popular IDEs and support for multiple programming languages, CodeWhisperer offers a comprehensive solution for developers across different domains. By leveraging CodeWhisperer's advanced features, developers can enhance their productivity, reduce errors, and ensure the delivery of high-quality code. As CodeWhisperer continues to evolve, it holds the promise of driving accelerated software development and fostering innovation in the developer community.

Author Bio

Rohan Chikorde is an accomplished AI Architect professional with a post-graduate degree in Machine Learning and Artificial Intelligence. With almost a decade of experience, he has successfully developed deep learning and machine learning models for various business applications. Rohan's expertise spans multiple domains, and he excels in programming languages such as R and Python, as well as analytics techniques like regression analysis and data mining. In addition to his technical prowess, he is an effective communicator, mentor, and team leader. Rohan's passion lies in machine learning, deep learning, and computer vision.

LinkedIn
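To make the calculator walkthrough above more concrete, here is a rough sketch of the kind of class and pytest unit tests that this flow can produce. The exact code CodeWhisperer suggests varies from session to session, so the methods and tests below are illustrative only; in particular, the divide-by-zero guard is an assumption rather than guaranteed output.

# calculator.py -- illustrative of what CodeWhisperer might generate
# example Python class for a simple calculator
class Calculator:
    def add(self, a, b):
        return a + b

    def subtract(self, a, b):
        return a - b

    def multiply(self, a, b):
        return a * b

    def divide(self, a, b):
        # guard against division by zero (an assumption; actual suggestions vary)
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b

    def square(self, a):
        return a ** 2


# test_calculator.py -- the kind of unit tests CodeWhisperer can suggest
import pytest
from calculator import Calculator

def test_add():
    assert Calculator().add(2, 3) == 5

def test_divide():
    assert Calculator().divide(10, 4) == 2.5

def test_divide_by_zero():
    with pytest.raises(ValueError):
        Calculator().divide(1, 0)

Running pytest --cov=calculator with the pytest and pytest-cov packages installed earlier reports both the test results and the coverage achieved against the generated class.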
Read more

article-image-co-pilot-microsoft-fabric-for-power-bi
Sagar Lad
23 Aug 2023
8 min read
Save for later

Co-Pilot & Microsoft Fabric for Power BI

Sagar Lad
23 Aug 2023
8 min read
IntroductionMicrosoft's data platform solution for the modern era is called Fabric. Microsoft's three primary data analytics tools:  Power BI, Azure Data Factory, and Azure Synapse all covered under Fabric. Advanced artificial intelligence capabilities built on machine learning and natural language processing (NLP) are made available to Power BI customers through Copilot. In this article, we will deep dive into how co-pilot and Microsoft Fabric will transform the way we develop and work with Power BI.Co-Pilot and Fabric with Power BIThe urgent requirement for businesses to turn their data into value is something that both Microsoft Fabric and Copilot aspire to address. Big Data continues to fall short of its initial promises even after years have passed. Every year, businesses generate more data, yet a recent IBM study found that 90% of this data is never successfully exploited for any kind of strategic purpose. So, more data does not mean more value or business insight. Data fragmentation and poor data quality are the key obstacles to releasing the value of data. These problems are what Microsoft hopes to address with Microsoft Fabric, a human-centric, end-to-end analytics product that brings together all of an organization's data and analytics in one place. Copilot has now been integrated into Power BI. Large multi-modal artificial intelligence models based on natural language processing have gained attention since the publication of ChatGPT. Beyond casuistry, Microsoft Fabric and Copilot share a trait in that they each want to transform the Power BI user interface.●       Microsoft Fabric and Power BIMicrosoft Fabric is just Synapse and Power BI together. By combining the benefits of the Power BI SaaS platform with the various Synapse workload types, Microsoft Fabric creates an environment that is more cohesive, integrated, and easier to use for all of the associated profiles. However, Power BI Premium users will get access to new opportunities for data science, data engineering, etc. Power BI will continue to function as it does right now. Data analysts and Power BI developers are not required to begin using Synapse Data Warehouse if they do not want to. Microsoft wants to combine all of its data offerings into one, called Fabric, just like it did with Office 365:Image 1: Microsoft Fabric (Source: Microsoft)Let’s understand in detail how Microsoft Fabric will make life easier for Power BI developers.1.     Data IngestionThere are various methods by which we can connect to data sources in Fabric in order to consume data. For example, utilising Spark notebooks or pipelines, for instance. This may be unknown to the Power BI realm, though.                                                       Image 2: Data Transformation in Power BI Instead, we can ingest the data using dataflows gen2, which will save it on OneLake in the proper format.2.     Ad Hoc Query One or more dataflows successfully published and refreshed will show in the workspace along with a number of other artifacts. The SQL Endpoint artifact is one of them. We can begin creating on-demand SQL queries and saving them as views after you open them. As an alternative, we can also create visual queries which will enable us to familiarise ourselves with the data flow diagram view. Above all, however, is the fact that this interface shares many characteristics with Power BI Data Marts, making it a familiar environment for those familiar with Power BI:   Image 3: Power BI - One Data Lake Hub3.     
Data ModellingWith the introduction of web modelling for Power BI, we can introduce new metrics and start establishing linkages between different tables right away in this interface. The default workspace where the default dataset is kept will automatically contain the data model. The new storage option Direct Lake is advantageous for the datasets created in this manner via the cloud interface. By having just one copy of data in OneLake, this storage style prevents data duplication and unnecessary data refreshes.●       Co-Pilot and Power BI Copilot, a new artificial intelligence framework for Power BI is an offering from Microsoft. CoPilot is Power BI's expensive multimodal artificial intelligence model that is built on natural language processing. It might be compared to the ChatGPT of Power BI. Users will be able to ask inquiries about data, generate graphics, and DAX measures by providing a brief description of what they need thanks to the addition of Copilot to Power BI. For instance, it demonstrates how a brief statement of the user's preferences for the report:"Add a table of the top 500 MNC IT Companies by total sales to my model”.The DAX code required to generate measures and tables is generated automatically by the algorithm.Copilot enables:●       Power BI reports can be created and customized to provide insights.●       Create and improve DAX computations.●       Inquire about your data.●       Publish narrative summaries.●       Ease of Use●       Faster Time to Market Key Features of the Power BI Copilot are as follows: ●       Automated report generationCopilot can create well-designed dashboards, data narratives, and interactive components automatically, saving time and effort compared to manually creating reports.●       Conversational language interfaceWe can use everyday language to express data requests and inquiries, making it simpler to connect with your data and gain insights. ●        Real-time analyticsCopilot's real-time analytics capabilities can be used by Power BI customers to view data and react swiftly to shifts and trends. Let’s look at the step-by-step process on how to use Copilot for Power BI:Step 1: Open Power BI and go to the Copilot tab screen,Step 2:  Type a query pertaining to the data for example to produce a financial report or pick from a list of suggestions that Copilot has automatically prepared for you.Step 3: Copilot sorts through and analyses data to provide the information.Step 4: Copilot compiles a visually stunning report, successfully converting complex data into easily comprehended, practical information.Step 5: Investigate data even more by posing queries, writing summaries to present to stakeholders, and more. There are also a few limitations to using the Copilot features with Power BI: ●       Reliability for the recommendationsAll programming languages that are available in public sources have been taught to Copilot, ensuring the quality of its proposals. The quantity of the training dataset that is accessible for that language, however, may have an impact on the quality of the suggestions. 
APL, Erlang, and other specialized programming languages' suggestions won't be as useful as those for more widely used ones like Python, Java, etc.●       Privacy and security issuesThere are worries that the model, which was trained on publicly accessible code, can unintentionally recommend code fragments that have security flaws or were intended to be private.●       Dependence on comments and namingThe user is responsible for accuracy because the AI can provide suggestions that are more accurate when given specific comments and descriptive variable names.●       Lack of original solutions and inability to creatively solve problems. Unlike a human developer, the tool is unable to do either. It can only make code suggestions based on the training data.●       Inefficient codebaseThe tool is not designed for going through and comprehending big codebases. It works best when recommending code for straightforward tasks.ConclusionThe combination of Microsoft Copilot and Fabric with Power BI has the ability to completely alter the data modelling field. It blends sophisticated generative AI with data to speed up the discovery and sharing of insights by everyone. By enabling both data engineers and non-technical people to examine data using AI models, it is transforming Power BI into a human-centered analytics platform. Author Bio: Sagar Lad is a Cloud Data Solution Architect with a leading organization and has deep expertise in designing and building Enterprise-grade Intelligent Azure Data and Analytics Solutions. He is a published author, content writer, Microsoft Certified Trainer, and C# Corner MVP.Medium , Amazon , LinkedIn   
Read more
article-image-getting-started-with-azure-speech-service
M.T. White
22 Aug 2023
10 min read
Save for later

Getting Started with Azure Speech Service

M.T. White
22 Aug 2023
10 min read
IntroductionCommanding machines to your bidding was once sci-fi. Being able to command a machine to do something with mere words graced the pages of many sci-fi comics and novels.  It wasn’t until recently that science fiction became science fact.  With the rise of devices such as Amazon’s Alexa and Apple’s Siri, being able to vocally control a device has become a staple of the 21st century. So, how does one integrate voice control in an app?  There are many ways to accomplish that.  However, one of the easiest ways is to use an Azure AI tool called Speech Service.  This tutorial is going to be a crash course on how to integrate Azure’s Speech Service into a standard C# app.  To explore this AI tool, we’re going to use it to create a simple profanity filter to demonstrate the Speech Service. What is Azure Speech Service?There are many ways to create a speech-to-text app.  One could create one from scratch, use a library, or use a cloud service.  Arguably the easiest way to create a speech-to-text app is with a cloud service such as the Azure speech service.  This service is an Azure AI tool that will analyze speech that is picked up by a microphone and converts it to a text string in the cloud.  The resulting string will then be sent back to the app that made the request.  In other words, the Speech-to-Text service that Azure offers is an AI developer tool that allows engineers to quickly convert speech to a text string. It is important to understand the Speech Service is a developer’s tool.  Since the rise of systems like ChatGPT what is considered an AI tool has been ambiguous at best.  When one thinks of modern AI tools they think of tools where you can provide a prompt and get a response.  However, when a developer thinks of a tool, they usually think of a tool that can help them get a job done quickly and efficiently.  As such, the Azure Speech Service is an AI tool that can help developers integrate speech-to-text features into their applications with minimal setup. The Azuer Speech service is a very powerful tool that can be integrated into almost anything.  For example, you can create a profanity filter with minimal code, make a voice request to LLM like ChatGPT or do any number of things.  Now, it is important to remember that Azure Speech Service is an AI tool that is meant for engineers.  Unlike tools like ChatGPT or LLMs in general, you will have to understand the basics of code to use it successfully.  With that, what do you need to get started with the Speech Service?What do you need to build to use Azure Speech Service?Setting up an app that can utilize the Azure service is relatively minimal.  All you will need is the following:    An Azure account.    Visual Studios (preferably the latest version)    Internet connectivity    Microsoft.CognitiveServices.Speech Nuget packageThis project is going to be a console-based application, so you won’t need to worry about anything fancy like creating a Graphical User Interface (GUI). When all that is installed and ready to go the next thing you will want to do is set up a simple speech-to-text service in Azure. Setup Azure Speech ServiceAfter you have your environment set up, you’re going to want to set up your service.  Setting up the Speech-to-Text service is quick and easy as there is very little that needs to be done on the Azure side.  All one has to do is set the service up in perform the following steps,1.     Login into Azure and search for Speech Services.2.     
Click the Create button in Figure 1 and fill out the wizard that appears:

Figure 1. Create Button

3.     Fill out the wizard to match Figure 2. You can name the instance anything you want and set the resource group to anything you want. As far as the pricing tier goes, you will usually be able to use the service for free for a time; however, after the trial period ends you will eventually have to pay for the service. Once you have the wizard filled out, click Review + Create:

Figure 2. Speech Service

4.     Keep following the wizard until you see the screen in Figure 3. On this screen, you will want to click the Manage keys link that is circled in red:

Figure 3. Instance Service

This is where you get the keys necessary to use the AI tool. Clicking the link is not strictly necessary, as the keys are also at the bottom of the page; however, the link will bring you directly to them. At this point, the service is set up. You will need to capture the key info, which can be viewed in Figure 4:

Figure 4. Key Information

To capture the key data, simply click the Show Keys button, which will unmask KEY 1 and KEY 2. Each instance you create will generate a new set of keys. As a safety note, you should never share your keys with anyone, as they'll be able to use your service and rack up your bill, among other cyber-security concerns. As such, you will want to unmask the keys and grab KEY 1, and copy the region as well.

C# Code

Now comes the fun part of the project: creating the app. The app will be relatively simple. The only hard part will be installing the NuGet package for the Speech service. To do this, simply add the NuGet package found in Figure 5:

Figure 5. NuGet Package

Once that package is installed you can start to implement the code. To start off, we're simply going to make an app that dictates back what we say to it. To do this, input the following code:

// See https://aka.ms/new-console-template for more information
using Microsoft.CognitiveServices.Speech;

await translateSpeech();

static async Task translateSpeech()
{
    string key = "<Your Key>";
    string region = "<Your Region>";
    var config = SpeechConfig.FromSubscription(key, region);

    using (var recognizer = new SpeechRecognizer(config))
    {
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}

When you run this program it will open up a prompt. You will be able to speak into the computer mic and whatever you say will be displayed. For example, run the program and say "Hello World". After the service has finished translating your speech you should see the following display on the command prompt:

Figure 6. Output From App

Now, this isn't the full project. This is just a simple app that dictates what we say to the computer. What we're aiming for in this tutorial is a simple profanity filter. For that, we need to add another function to the project to help filter the returned string. It is important to remember that what is returned is a text string. The text string is just like any other text string that one would use in C#.
As such, we can modify the program to the following to filter profanity:

// See https://aka.ms/new-console-template for more information
using Microsoft.CognitiveServices.Speech;

await translateSpeech();

static async Task translateSpeech()
{
    string key = "<Your Key>";
    string region = "<Your Region>";
    var config = SpeechConfig.FromSubscription(key, region);

    using (var recognizer = new SpeechRecognizer(config))
    {
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
        VetSpeech(result.Text);
    }
}

static void VetSpeech(String input)
{
    Console.WriteLine("checking phrase: " + input);
    String[] badWords = { "Crap", "crap", "Dang", "dang", "Shoot", "shoot" };

    foreach (String word in badWords)
    {
        if (input.Contains(word))
        {
            Console.WriteLine("flagged");
        }
    }
}

Now, in the VetSpeech function, we have an array of "bad" words. In short, if the returned string contains a variation of these words, the program will display "flagged". As such, if we were to say "Crap Computer" when the program is run, we can expect to see the following output in the prompt:

Figure 7. Profanity Output

As can be seen, the program flagged the phrase because the word Crap was in it.

Exercises

This tutorial was a basic rundown of the Speech Service in Azure. This is probably one of the simplest services to use, but it is still very powerful. Now that you have a basic idea of how the service works and how to write C# code for it, create a ChatGPT developer token and pass the returned string to ChatGPT. When done correctly, this project will allow you to verbally interact with ChatGPT; that is, you should be able to verbally ask ChatGPT a question and get a response.

Conclusion

The Azure Speech Service is an AI tool. Unlike many other AI tools such as ChatGPT, this tool is meant for developers to build applications with. Also, unlike many other Azure services, it is a very easy-to-use system with minimal setup. As can be seen from the tutorial, the hardest part was writing the code that utilized the service, and even that was not particularly difficult. The best part is that the code provided in this tutorial is the basic code you will need to interact with the service, meaning that all you have to do now is modify it to fit your project's needs. Overall, the power of the Speech Service is limited only by your imagination. This tool would be excellent for integrating verbal interaction with other tools like ChatGPT, creating voice-controlled robots, or anything else. It is a relatively cheap and powerful tool that can be leveraged for many things.

Author Bio

M.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics, and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T. is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.

Author of the book: Mastering PLC Programming
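As a follow-up to the tutorial above: the same dictate-and-filter idea can also be sketched in Python using Azure's Python Speech SDK. This is a minimal sketch, assuming the azure-cognitiveservices-speech package is installed (pip install azure-cognitiveservices-speech); the key, region, and word list below are placeholders you would replace with your own values, and the word check is a simplified stand-in for the C# VetSpeech logic.

import azure.cognitiveservices.speech as speechsdk

KEY = "<Your Key>"        # placeholder - use KEY 1 from your Speech resource
REGION = "<Your Region>"  # placeholder - the region shown next to your keys

BAD_WORDS = {"crap", "dang", "shoot"}  # same idea as the C# badWords array

def vet_speech(phrase: str) -> None:
    print("checking phrase:", phrase)
    # case-insensitive check against the word list
    if any(word in phrase.lower() for word in BAD_WORDS):
        print("flagged")

def translate_speech() -> None:
    config = speechsdk.SpeechConfig(subscription=KEY, region=REGION)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    result = recognizer.recognize_once()  # listens once on the default microphone
    print(result.text)
    vet_speech(result.text)

if __name__ == "__main__":
    translate_speech()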
Read more

article-image-getting-started-with-automl
M.T. White
22 Aug 2023
7 min read
Save for later

Getting Started with AutoML

M.T. White
22 Aug 2023
7 min read
IntroductionTools like ChatGPT have been making headlines as of late.  ChatGPT and other LLMs have been transforming the way people study, work, and for the most part, do anything.  However, ChatGPT and other LLMs are for everyday users.  In short, ChatGPT and other similar systems can help engineers and data scientists, but they are not designed to be engineering or analytics tools.  Though ChatGPT and other LLMs are not designed to be machine-learning tools, there is a tool that can assist engineers and data scientists.  Enter the world of AutoML for Azure.  This article is going to explore AutoML and how it can be used by engineers and data scientists to create machine learning models. What is AutoML?AutoML is an Azure tool that builds the optimal model for a given data set. In many senses, AutoML can be thought of as a ChatGPT-like system for engineers.  AutoML is a tool that allows engineers to quickly produce optimal machine-learning models with little to no technical input.  In short, ChatGPT and other similar systems are tools that can answer general questions about anything, but AutoML is specifically designed to produce machine-learning models. How AutoML works?Though AutoML is a tool designed to produce machine learning models it doesn’t actually use AI or machine learning in the process.  The key to AutoML is parallel pipelines.  A pipeline can be thought of as the logic in a machine-learning model.  For example, the pipeline logic will include things such as cleaning data, splitting data, using a model for the system, and so on.When a person utilizes AutoML it will create a series of parallel pipelines with different algorithms and parameters.  When a model “fits” the data the best it will cease, and that pipeline will be chosen.  Essentially, AutoML in Azure is a quick and easy way for engineers to cut out all the skilled and time-consuming development that can easily hinder non-experienced data scientists or engineers.  To demonstrate how AutoML in Azure works let’s build a model using the tool.What do you need to know?Azure’s AutoML takes a little bit of technical knowledge to get up and running, especially if you’re using a custom dataset.  For the most part, you’re going to need to know approximately what type of analysis you’re going to perform.  You’re also going to need to know how to create a dataset.  This may seem like a daunting task but it is relatively easy. SetupTo use AutoML in Azure you’ll need to setup a few things.  The first thing to set up an ML workspace.  This is done by simply logging into Azure and searching for ML like in Figure 1:Figure 1From there, click on Azure Machine Learning and you should be redirected to the following page.  Once on the Azure Machine Learning page click on the Create button and New Workspace:Figure 2Once there, fill out the form, all you need to do is select a resource group and give the workspace a name.  You can use any name you want, but for this tutorial, the name Article 1 will be used.  You’ll be prompted to click create, once you click that button Azure will start to deploy the workspace.  The workspace deployment may take a few minutes to complete.  Once done click Go to resource. Once you click Go to resource click on Launch studio like in Figure 3.Figure 3At this point, the workspace has been generated and we can move to the next step in the process, using AutoML to create a new model.Now, that the workspace has been created, click Launch Studio you should be met with Figure 4.  
The page in Figure 4 is Azure Machine Learning Studio. From here you can navigate to AutoML by clicking the link on the left sidebar:Figure 4Once you click the AutoML you should be redirected to the page in Figure 5:Figure 5Once you see something akin to Figure 5 click on the New Automated ML Job button which should redirect you to a screen that prompts you to select a dataset.  This step is one of the more in-depth compared to the rest of the process.  During this step, you will need to select your dataset.  You can opt to use a predefined dataset that Azure provides for test purposes.  However, for a real-world application, you’ll probably want to opt for a custom dataset that was engineered for your task.  Azure will allow you to either use a pre-built dataset or your own.  For this tutorial we’re going to use a custom dataset that is the following:HoursStory Points161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315161315121511134228281830191032114117129251924172315To use this dataset simply copy and paste into a CSV file.  To use it select the data from a file option and follow the wizard.  Note, that for custom datasets you’ll need at least 50 data points. Continue to follow the wizard and give the experiment a name, for example, E1.  You will also have to select a Target Column.  For this tutorial select Story Points.  If you do not already have a compute instance available, click the New button at the bottom and follow the wizard to set one up.  Once that step is complete you should be directed to a page like in Figure 6:Figure 6This is where you select the general type of analysis to be done on the dataset.  For this tutorial select Regression and click the Next button in Figure 6 then click Finish.  This will start the process which will take several minutes to complete.   The whole process can take up to about 20 or so minutes depending on which compute instance you use.  Once done you will be able to see the metrics by clicking on the Models tab.  This will show all the models that were tried out.  From here you can explore the model and the associated statistics. SummaryIn all, Azure’s AutoML is an AI tool that helps engineers quickly produce an optimal model.  Though not the same, this tool can be used by engineers the same way ChatGPT and similar systems can be used by everyday users.  The main drawback to AutoML is that unlike ChatGPT a user will need a rough idea as to what they’re doing.  However, once a person has a rough idea of the basic types of machine-learning analysis they should be able to use this tool to great effect. Author BioM.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics, and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T. is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.Author of the book: Mastering PLC Programming 
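As a code-based follow-up to the wizard walkthrough above: the same regression experiment can also be submitted from Python. The sketch below shows one way to do that, assuming the azure-ai-ml (SDK v2) and azure-identity packages; the subscription, resource group, workspace, compute target, and data path are placeholders, and the training data is assumed to have been prepared as an MLTable asset containing the same Hours and Story Points columns used in the wizard.

from azure.ai.ml import MLClient, Input, automl
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace created earlier (all names below are placeholders)
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# AutoML tabular jobs take their training data as an MLTable asset
training_data = Input(type=AssetTypes.MLTABLE, path="./story-points-data")

# Configure a regression job targeting the Story Points column, as in the wizard
regression_job = automl.regression(
    compute="<compute-instance-or-cluster>",
    experiment_name="E1",
    training_data=training_data,
    target_column_name="Story Points",
    primary_metric="normalized_root_mean_squared_error",
)
regression_job.set_limits(timeout_minutes=30, max_trials=20)

# Submit the job; AutoML then runs its parallel pipelines and keeps the best model
returned_job = ml_client.jobs.create_or_update(regression_job)
print("Submitted AutoML job:", returned_job.name)

Once the job completes, the tried models and their metrics appear under the experiment in Azure Machine Learning Studio, just as they do when the job is started from the AutoML wizard.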
Read more