
How-To Tutorials

7017 Articles

Get Started with Fabric: Create Your Workspace & Reports

Arshad Ali, Bradley Schacht
14 Mar 2024
9 min read
This article is an excerpt from the book Learn Microsoft Fabric by Arshad Ali and Bradley Schacht. Harness the power of Microsoft Fabric to develop data analytics solutions for various use cases, guided by step-by-step instructions.

Introduction

Embark on a journey to harness the full potential of Microsoft Fabric within Power BI. This article serves as your comprehensive guide, walking you through the essential steps to create your first Fabric workspace seamlessly. From understanding the fundamentals to practical implementation, we'll equip you with the knowledge and tools needed to optimize your data management and reporting processes. Get ready to elevate your Power BI experience and unlock new possibilities with Fabric-enabled workspaces.

Creating your first Fabric-enabled workspace

Once you have confirmed that Fabric is enabled in your tenant and you have access to it, the next step is to create your Fabric workspace. You can think of a Fabric workspace as a logical container that will hold items such as lakehouses, warehouses, notebooks, and pipelines. Follow these steps to create your first Fabric workspace:

1. Sign into Power BI (https://app.powerbi.com/).
2. Select Workspaces | + New workspace.

Figure 2.5 – Creating a new workspace

3. Fill out the Create a workspace form, as follows:
- Name: Enter Learn Microsoft Fabric and some characters for uniqueness.
- Description: Optionally, enter a description for the workspace.

Figure 2.6 – Create a workspace – details

- Advanced: Select Fabric capacity under License mode and then choose a capacity you have access to. If you don't have one, you can start a trial license, as described earlier, and use it here.

4. Select Apply. The workspace will be created and opened.
5. You can click on Workspaces again and then search for your workspace by typing its name in the search box. You can also pin the selected workspace so that it always appears at the top.

Figure 2.7 – Searching for a workspace

6. Clicking on the name of the workspace will open that workspace. A link to it will become available in the left-hand navigation bar, allowing you to switch from one item to another quickly. Since we haven't created anything yet, there is nothing here. You can click on + New to start creating Fabric items.

Figure 2.8 – Switching to a workspace
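The portal flow above is all you need, but workspaces can also be created programmatically. The following is a minimal sketch, not from the book: it assumes the public Fabric REST API's create-workspace endpoint and a valid Microsoft Entra access token with the appropriate scopes, and the workspace name is illustrative.

```python
import requests

# Assumption: `token` holds a valid Microsoft Entra access token
# authorized for the Fabric REST API.
token = "<your-access-token>"

resp = requests.post(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "displayName": "Learn Microsoft Fabric 001",  # illustrative name
        "description": "Workspace for the book exercises",
    },
)
resp.raise_for_status()
print(resp.json())  # the response includes the new workspace's id
```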
With a Microsoft Fabric workspace set up, let's review the different workloads that are available.

Copilot in Power BI

Power BI has several key components, including data transformation and data modeling, culminating in a visual report that end users will consume. The Copilot experience is centered around the visual storytelling and reporting aspects of Power BI. This materializes in three ways: report page creation, narrative generation, and improving Q&A. Let's look at each of these Copilot capabilities.

Creating reports with the Power BI Copilot

The most common use for Copilot with Power BI is likely to be creating reports. Two features come together to build reports: the first analyzes the dataset to suggest content for your report by using table relationships and column names, while the second helps you create intuitive reports quickly. Figure 11.30 shows an example where Copilot has suggested several report pages, each with a short description of what would be displayed.

Figure 11.30 – The Power BI Copilot page suggestions

If you like the page suggestions, simply click on the Create button and the report page will appear. While a suggested set of report content is a good starting point, analysts often have a specific need to meet. You can also have Copilot create a report from criteria you provide as prompts. These can be as simple as "create a page that shows customer analysis" or more specific, such as "create a page to show the impact of each sales territory on profit and quantity sold."

Figure 11.31 – Sales impact report created by Copilot

Once the report page is generated, Copilot cannot update it, but you can interact with and modify the report as necessary. This is a great way to reduce the time it takes to get started building reports. Note that, in addition to not being able to modify reports, Copilot will not let you specify particular visual types, apply filters, or change the report layout; all of these can be changed manually after the initial report generation. For example, users should not expect Copilot to filter results to a specific time period based on their prompt.

Next, let's look at the smart narrative.

Creating a narrative using Copilot

Visuals are a wonderful way to tell a story and give users the ability to explore data on their own. However, sometimes a narrative that summarizes what is being displayed in a report can be useful. It can not only tell a story but also provide additional context and information for users. To get started, open a report and add a narrative visualization to the report as shown in Figure 11.32. You will see two options; click on Copilot. Choose the type of summary you wish to produce and, optionally, select specific pages or visuals to include in the summary. Then click on Create.

Figure 11.32 – The report narrative generated by Copilot

After the narrative is generated, always review it for accuracy and adjust the prompt, if necessary, to produce more accurate results. In addition to summaries, you can ask Copilot to highlight key information, customize the order in which the data is described to help convey importance, specify particular data points to include in the summary, and even generate impact analysis showing how different factors affect metrics on the report. Report, page, and visual narratives are a great way to guide users through a report, especially if there isn't a subject matter expert there to explain all the data.

Finally, let's look at using Copilot to improve the Q&A visual.

Generating synonyms with Copilot

The Q&A visual has been dazzling users for years at this point. It is impressive to build a model, walk into the room, and tell users that they can use natural language to query their data without needing to build any visuals. This may not be as impressive as the Copilot functionality that we have today, but it is still a very useful tool in your Power BI visualization toolbelt. One piece of information important to the success of Q&A is something called a synonym. Synonyms are end-user-specific ways to reference data.
For example, a table in the data model may be called Dim Person, but you know that some report consumers always refer to these records as "users." You would therefore create a synonym that tells Q&A that when someone asks about users, they are really talking about persons. This can also be done at the column level: a synonym for "postal code" could be "zip code," while a synonym for an "item" could be "product" or "finished good."

Q&A itself may not use Copilot, but Power BI Desktop can leverage Copilot to generate synonyms. This can be done when creating a new Q&A visual by clicking on Add synonyms from the ribbon, with the label Improve Q&A with synonyms from Copilot. Synonyms can also be generated from the Q&A settings menu by adding Copilot as a source in the Suggestion settings list. The more synonyms that can be used to describe your data, the more likely you are to produce quality Q&A results. It is important to double-check the synonyms generated by Copilot to ensure they line up with your specific business terminology.

With these Copilot experiences for Power BI, you will be able to generate report ideas, report pages and visuals, summaries, and narratives, and improve Q&A.

Conclusion

In conclusion, by mastering the creation of Fabric workspaces in Power BI, you've laid a solid foundation for efficient data management and reporting. With Fabric's capabilities at your fingertips, you're equipped to streamline workflows, generate insightful reports, and enhance collaboration within your organization. Keep exploring the diverse functionalities of Fabric to continuously refine your Power BI experience and stay ahead in the realm of data analytics.

Author bio

Arshad Ali is a principal product manager at Microsoft, working on the Microsoft Fabric product team in Redmond, WA. He focuses on Spark Runtime, which empowers both data engineering and data science experiences. In his previous role, he helped strategic customers and partners adopt Azure Synapse and Microsoft Fabric. Arshad has more than 20 years of industry experience and has been with Microsoft for over 16 years. He is the co-author of the book Big Data Analytics with Azure HDInsight and the author of over 200 technical articles and blogs on data and analytics. Arshad holds an MBA from the Foster School of Business at the University of Washington and an MCA from India.

Bradley Schacht is a principal program manager on the Microsoft Fabric product team based in Saint Augustine, Florida. Bradley is a former consultant and trainer and has co-authored five books on SQL Server and Power BI. As a member of the Microsoft Fabric product team, Bradley works directly with customers to solve some of their most complex data problems and helps shape the future of Microsoft Fabric. Bradley gives back to the community by speaking at events such as the PASS Summit, SQL Saturday, Code Camp, and user groups across the country, including locally at the Jacksonville SQL Server User Group (JSSUG). He is a contributor on SQLServerCentral and blogs on his personal site, BradleySchacht.com.


Enhancing Image Search with Vector Similarity

Bahaaldine Azarmi, Jeff Vestal
12 Mar 2024
12 min read
This article is an excerpt from the book Vector Search for Practitioners with Elastic by Bahaaldine Azarmi and Jeff Vestal. Optimize your search capabilities in Elastic by operationalizing and fine-tuning vector search, and enhance your search relevance while improving overall search performance.

Introduction

Vector similarity search plays a crucial role in image search. After images are transformed into vectors, a search query (also represented as a vector) is compared against the database of image vectors to find the most similar matches. This process is known as k-Nearest Neighbor (kNN) search, where "k" represents the number of similar items to retrieve.

Several algorithms can be used for kNN search, including brute-force search and more efficient methods such as the Hierarchical Navigable Small World (HNSW) algorithm (see Chapter 7, Next Generation of Observability Powered by Vectors, for a more in-depth discussion of HNSW). Brute-force search involves comparing the query vector with every vector in the database, which can be computationally expensive for large databases. HNSW, on the other hand, is an optimized algorithm that can quickly find the nearest neighbors in a large-scale database, making it particularly useful for vector similarity search in image search systems.

The tangible benefits of image search are observed across industries. Its flexibility and adaptability make it a tool of choice for enhancing user experiences, ensuring digital security, or even revolutionizing digital content interactions.

Image search in practice

Applications of image search are varied and far-reaching. In e-commerce, for example, reverse image search allows customers to upload a photo of a product and find similar items for sale. In the field of digital forensics, image search can be used to find visually similar images across a database to detect illicit content. It is also used in the realm of social media for face recognition, image tagging, and content recommendation.

As we continue to generate and share more visual content, the need for effective and efficient image search technology will only grow. The combination of artificial intelligence, machine learning, and vector similarity search provides a powerful toolkit to meet this demand, powering a new generation of image search capabilities that can analyze and understand visual content.

Traditionally, image search engines use text-based metadata associated with images, such as the image's filename, alt text, and surrounding text context, to understand the content of an image. This approach, however, is limited by the accuracy and completeness of the metadata, and it fails to analyze the actual visual content of the image itself. Over time, with advancements in artificial intelligence and machine learning, more sophisticated methods of image search have been developed that can analyze the visual content of images directly. This technique, known as content-based image retrieval (CBIR), involves extracting feature vectors from images and using these vectors to find visually similar images.

Feature vectors are a numerical representation of an image's visual content. They are generated by applying a feature extraction algorithm to the image. The specifics of the feature extraction process can vary, but in general, it involves analyzing the image's colors, textures, and shapes. In recent years, convolutional neural networks (CNNs) have become a popular tool for feature extraction due to their ability to capture complex patterns in image data.
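To make the brute-force approach concrete, here is a minimal NumPy sketch (not from the book) that scores a query vector against every indexed vector with cosine similarity and returns the k best matches, which is exactly the exhaustive work that HNSW avoids. The random arrays stand in for real image vectors.

```python
import numpy as np

def knn_cosine(query, database, k=5):
    # Normalize rows so that a dot product equals cosine similarity
    db_norm = database / np.linalg.norm(database, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    scores = db_norm @ q_norm               # one similarity score per image
    return np.argsort(scores)[::-1][:k]     # indices of the k most similar images

# Stand-in data: 10,000 indexed image vectors of 512 dimensions each
database = np.random.rand(10_000, 512)
query = np.random.rand(512)
print(knn_cosine(query, database, k=3))
```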
Once feature vectors have been extracted from a set of images, they can be indexed in a database. When a new query image is submitted, its feature vector is compared to the indexed vectors, and the images with the most similar vectors are returned as the search results. The similarity between vectors is typically measured using distance metrics such as Euclidean distance or cosine similarity.

Despite the impressive capabilities of CBIR systems, there are several challenges in implementing them. For instance, interpreting and understanding the semantic meaning of images is a complex task due to the subjective nature of visual perception. Furthermore, the high dimensionality of image data can make the search process computationally expensive, particularly for large databases. To address these challenges, approximate nearest neighbor (ANN) search algorithms, such as the HNSW graph, are often used to optimize the search process. These algorithms sacrifice a small amount of accuracy for a significant increase in search speed, making them a practical choice for large-scale image search applications.

With the advent of Elasticsearch's dense vector field type, it is now possible to index and search high-dimensional vectors directly within an Elasticsearch cluster. This functionality, combined with an appropriate feature extraction model, provides a powerful toolset for building efficient and scalable image search systems. In the following sections, we will delve into the details of image feature extraction, vector indexing, and search techniques. We will also demonstrate how to implement an image search system using Elasticsearch and a pre-trained CNN model for feature extraction. The overarching goal is to provide a comprehensive guide for building and optimizing image search systems using state-of-the-art technology.

Vector search with images

Vector search is a transformative feature of Elasticsearch and other vector stores that enables searches within complex data types such as images. Through this approach, images are converted into vectors that can be indexed, searched, and compared against each other, revolutionizing the way we can retrieve and analyze image data. This inherent characteristic of producing embeddings applies to other media types as well. This section provides an in-depth overview of the vector search process with images, including image vectorization, vector indexing in Elasticsearch, kNN search, vector similarity metrics, and fine-tuning the kNN algorithm.

Image vectorization

The first phase of the vector search process involves transforming the image data into a vector, a process known as image vectorization. Deep learning models, specifically CNNs, are typically employed for this task. CNNs are designed to understand and capture the intricate features of an image, such as color distribution, shapes, textures, and patterns. By processing an image through layers of convolutional, pooling, and fully connected nodes, a CNN can represent an image as a high-dimensional vector that encapsulates the key features of the image, serving as its numerical representation.

The output layer of a pre-trained CNN (often referred to as an embedding or feature vector) is often used for this purpose. Each dimension in this vector represents some learned feature from the image. For instance, one dimension might correspond to the presence of a particular color or texture pattern. The values in the vector quantify the extent to which these features are present in the image.
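As an illustration of using a pre-trained CNN's output as an embedding, the sketch below (not from the book, which uses a CLIP model instead) strips the classification head off a torchvision ResNet-50 and keeps the pooled 2,048-dimensional features; the file name is a placeholder.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pre-trained ResNet-50 and replace its classification head with an
# identity, so the forward pass returns the pooled 2,048-dim features
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Use the preprocessing that matches the pre-trained weights
preprocess = models.ResNet50_Weights.DEFAULT.transforms()

with torch.no_grad():
    img = Image.open("example.jpg").convert("RGB")   # placeholder file name
    vector = backbone(preprocess(img).unsqueeze(0)).squeeze(0)

print(vector.shape)  # torch.Size([2048])
```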
Figure 1 – Layers of a CNN model

As seen in the preceding diagram, these are the layers of a CNN model:

1. Accepts raw pixel values of the image as input.
2. Each layer extracts specific features such as edges, corners, textures, and so on.
3. Introduces non-linearity, learns from errors, and approximates more complex functions.
4. Reduces the dimensions of feature maps through down-sampling to decrease the computational complexity.
5. Consists of the weights and biases from the previous layers for the classification process to take place.
6. Outputs a probability distribution over classes.

Indexing image vectors in Elasticsearch

Once the image vectors have been obtained, the next step is to index them in Elasticsearch for future searching. Elasticsearch provides a special field type, the dense_vector field, to handle the storage of these high-dimensional vectors. A dense_vector field is defined as an array of numeric values, typically floating-point numbers, with a specified number of dimensions (dims). The maximum number of dimensions allowed for indexed vectors is currently 2,048, though this may be increased in the future. It's essential to note that each dense_vector field is single-valued, meaning that it is not possible to store multiple values in one such field.

In the context of image search, each image (now represented as a vector) is indexed into an Elasticsearch document. This can be one vector per document or multiple vectors per document. The vector representing the image is stored in a dense_vector field within the document, and other relevant information or metadata about the image can be stored in other fields within the same document.
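Before documents can be indexed, the target index needs a mapping that declares the vector field. A minimal sketch follows; the index name, field names, and similarity choice are illustrative assumptions rather than the book's exact setup, `es` is assumed to be an Elasticsearch 8.x Python client, and 512 dims matches the output size of the clip-ViT-B-32 model family used below.

```python
# Illustrative mapping for an image-search index
es.indices.create(
    index="images",
    mappings={
        "properties": {
            "filename": {"type": "keyword"},
            "image_vector": {
                "type": "dense_vector",
                "dims": 512,            # must match the embedding model's output size
                "index": True,          # build the ANN structure for kNN search
                "similarity": "cosine",
            },
        }
    },
)
```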
The full example code can be found in the Jupyter notebook available in the chapter 5 folder of this book's GitHub repository at https://github.com/PacktPublishing/VectorSearch-for-Practitioners-with-Elastic/tree/main/chapter5, but we'll discuss the relevant parts here.

First, we initialize a pre-trained model using the SentenceTransformer library. The clip-ViT-B-32-multilingual-v1 model is discussed in detail later in this chapter:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('clip-ViT-B-32-multilingual-v1')
```

Next, we prepare the image transformation function:

```python
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    lambda image: image.convert("RGB"),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
```

transforms.Compose() combines the following transformations:

- transforms.Resize(224): Resizes the shorter side of the image to 224 pixels while maintaining the aspect ratio.
- transforms.CenterCrop(224): Crops the center of the image so that the resultant image has dimensions of 224x224 pixels.
- lambda image: image.convert("RGB"): Converts the image to the RGB format. This is useful for grayscale images or images with an alpha channel, as deep learning models typically expect RGB inputs.
- transforms.ToTensor(): Converts the image (in the PIL image format) into a PyTorch tensor, changing the data from integers in the range [0, 255] to floats in the range [0.0, 1.0].
- transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)): Normalizes the tensor image with a given mean and standard deviation for each channel. In this case, the mean and standard deviation for all three channels (R, G, B) are 0.5, which transforms the data range from [0.0, 1.0] to [-1.0, 1.0].

We can use the following code to apply the transform to an image file and then generate an image vector using the model (see the Python notebook for this chapter to run against actual image files):

```python
from PIL import Image

img = Image.open("image_file.jpg")
image = transform(img).unsqueeze(0)
image_vector = model.encode(image)
```

The vector and other associated data can then be indexed into Elasticsearch for use with kNN search:

```python
# Create a document payload; the complete indexing loop is in the
# chapter 5 folder of this book's GitHub repository
document = {
    '_index': index_name,
    '_source': {
        "filename": filename,
        "image_vector": vector
    }
}
```

With vectors generated and indexed into Elasticsearch, we can move on to searching for similar images.

k-Nearest Neighbor (kNN) search

With the vectors now indexed in Elasticsearch, the next step is to make use of kNN search. You can refer back to Chapter 2, Getting Started with Vector Search in Elastic, for a full discussion of kNN and HNSW search. As with text-based vector search, when performing vector search with images we first need to convert our query image to a vector, using the same process we used to convert images to vectors at index time. We then include that vector in the query_vector parameter of the knn search function:

```python
knn = {
    "field": "image_vector",
    "query_vector": search_image_vector[0],
    "k": 1,
    "num_candidates": 10
}
```

Here, we specify the following:

- field: The field in the index that contains the vector representations of the images we are searching against
- query_vector: The vector representation of our query image
- k: We want only the single closest image
- num_candidates: The number of approximate nearest neighbor candidates to consider on each shard
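To run the query end to end with the Python client, a sketch like the following would work, assuming the illustrative `images` index from earlier and an Elasticsearch 8.x client `es`:

```python
# Execute the kNN search and print each hit's similarity score and filename
response = es.search(index="images", knn=knn, source=["filename"])
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["filename"])
```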
With an understanding of how to convert an image to a vector representation and perform an approximate nearest neighbor search, let's discuss some of the challenges.

Challenges and limitations with image search

While vector search with images offers powerful capabilities for image retrieval, it also comes with certain challenges and limitations. One of the main challenges is the high dimensionality of image vectors, which can lead to computational inefficiencies and difficulties in visualizing and interpreting the data. Additionally, while pre-trained models for feature extraction can capture a wide range of features, they may not always align with the specific features that are relevant to a particular use case, which can lead to suboptimal search results. One potential solution, not limited to image search, is to use transfer learning to fine-tune the feature extraction model on a specific task, although this requires additional data and computational resources.

Conclusion

In conclusion, vector similarity search revolutionizes image retrieval by harnessing advanced algorithms and machine learning. From e-commerce to digital forensics, its impact is profound, enhancing user experiences and content discovery. Leveraging techniques like k-Nearest Neighbor search and Elasticsearch's dense vector field, image search becomes more efficient and scalable. Despite challenges such as high dimensionality and feature alignment, ongoing advancements promise even greater insights into visual data. As technology evolves, so does our ability to navigate and understand the vast landscape of images, ensuring a future of enhanced digital interactions and insights.

Author Bio

Bahaaldine Azarmi, Global VP Customer Engineering at Elastic, guides companies as they leverage data architecture, distributed systems, machine learning, and generative AI. He leads the customer engineering team, focusing on cloud consumption, and is passionate about sharing knowledge to build and inspire a community skilled in AI.

Jeff Vestal has a rich background spanning over a decade in financial trading firms and extensive experience with Elasticsearch. He offers a unique blend of operational acumen, engineering skills, and machine learning expertise. As a Principal Customer Enterprise Architect, he excels at crafting innovative solutions, leveraging Elasticsearch's advanced search capabilities, machine learning features, and generative AI integrations, adeptly guiding users to transform complex data challenges into actionable insights.


Using ChatGPT for Customer Service

Amita Kapoor
07 Mar 2024
10 min read
Introduction

Customer service bots of old can often feel robotic, rigid, and painfully predictable. Enter ChatGPT: the fresher, more dynamic contender in the bot arena. ChatGPT isn't just another bot. It's been meticulously trained on a vast sea of text and code, equipping it to grapple with questions that would stump its predecessors. And it's not limited to customer queries; this versatile bot can craft a range of text formats, from poems to programming snippets. But the standout feature? ChatGPT's touch of humour. It's not just about answering questions; it's about engaging in a way that's both informative and entertaining. So if you're in search of a customer service experience that's more captivating than the norm, it might be time to chat with ChatGPT.

Onboarding ChatGPT: A Quick and Easy Guide

Ready to set sail with ChatGPT? Here's your easy guide to make sure you're all set and ready to roll:

1. Obtain the API key: First, you'll need to get an API key from OpenAI. This is like your secret password to the world of ChatGPT. To get one, head to the OpenAI platform and sign up. Once you're signed in, go to the API section and click on "Create New Key."

2. Integrate ChatGPT with your system: Once you have your API key, you can integrate ChatGPT with your system. This is like introducing ChatGPT to your system and making sure they're friends, ready to work together smoothly. To integrate ChatGPT, you'll need to add your API key into your system's code. The specific steps will vary depending on your system, but there are many resources available online to help you. Here is an example of how you can do it in Python:

```python
import openai
import os

# Initialize the OpenAI API client
api_key = os.environ.get("OPENAI_API_KEY")  # Retrieve the API key from environment variables
openai.api_key = api_key  # Set the API key

# API parameters
model = "gpt-3.5-turbo"  # Choose the appropriate model
max_tokens = 150         # Limit the response length
```

3. Fine-tune ChatGPT (optional): ChatGPT is super smart, but sometimes you might need it to learn some specific stuff about your company. That's where fine-tuning comes in. To fine-tune ChatGPT, you can provide it with training data that is specific to your company. This could include product information, customer service FAQs, or even just examples of the types of conversations you want ChatGPT to be able to handle. Fine-tuning is not required, but it can help to improve the performance of ChatGPT on your specific tasks (see https://www.packtpub.com/article-hub/fine-tuning-gpt-35-and-4).

And that's it! With these three steps, ChatGPT will be all set to jump in and take your customer service to the next level. Ready, set, ChatGPT!
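Once the key is in place, a quick sanity check confirms the client is wired up correctly. This is a minimal sketch using the pre-v1 openai SDK shown above, reusing the model and max_tokens parameters already defined:

```python
# One-off test call to verify the API key and client configuration
response = openai.ChatCompletion.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello to our customers!"}],
    max_tokens=max_tokens,
)
print(response['choices'][0]['message']['content'])
```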
Utilise ChatGPT for Seamless Question Answering

In the ever-evolving world of customer service, stand out by integrating ChatGPT into your service channels, making real-time, accurate responses a seamless experience for your customers. Let's delve into an example to understand the process better.

Example: an EdTech site with online K-12 courses

Imagine operating a customer service bot for an EdTech site with online courses for K-12. You want to ensure that the bot answers only relevant questions, enhancing the user experience and ensuring the accuracy and efficiency of responses. Here's how you can achieve this:

1. Pre-defined context: Initiate the conversation with a system message that sets the context for the bot's role:

```python
role_gpt = "You are a customer service assistant for an EdTech site that offers online K-12 courses. Provide information and assistance regarding the courses, enrollment, and related queries."
```

This directive helps guide the model's responses, ensuring they align with the expected topics.

2. Keyword filtering: Implement keyword filtering to review users' queries for relevance to the topics the bot handles. If the query includes keywords related to courses, enrollment, and so on, the bot answers; otherwise, it informs the user about the limitation. Here's a basic example of a keyword filtering function in Python. The function is_relevant_query checks whether the query contains certain keywords related to the services offered by the EdTech site:

```python
def is_relevant_query(query, keywords):
    """
    Check if the query contains any of the specified keywords.

    :param query: str, the user's query
    :param keywords: list of str, keywords to check for
    :return: bool, True if query contains any keyword, False otherwise
    """
    query = query.lower()
    return any(keyword in query for keyword in keywords)

# Usage example:
keywords = ['enrollment', 'courses', 'k-12', 'online learning']
query = "Tell me about the enrollment process."
is_relevant = is_relevant_query(query, keywords)
```

Next, we combine the bot role and the user query to build the complete message list:

```python
messages = [
    {"role": "system", "content": f"{role_gpt}"},
    {"role": "user", "content": f"{query}"}
]
```

We now call the OpenAI API only when the question is relevant:

```python
is_relevant = is_relevant_query(query, keywords)
if is_relevant:
    # Process the query with ChatGPT
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages
    )
    # Extract and print the chatbot's reply
    chatbot_reply = response['choices'][0]['message']['content']
    print("ChatGPT: ", chatbot_reply)
else:
    print("I'm sorry, I can only answer questions related to enrollment, "
          "courses, and online learning for K-12.")
```

To elevate the user experience, prompt your customers to use specific questions. This subtle guidance helps funnel their queries, ensuring they stay on topic and receive the most relevant information quickly. Continuous observation of user interactions and consistent collection of their feedback is paramount; this valuable insight allows you to refine your bot, making it more intuitive and adept at handling various questions. To further enhance the bot's efficiency, enable a feature where it can politely ask for clarification on vague or ambiguous inquiries. This ensures your bot continues to provide precise and relevant answers, solidifying its role as an invaluable resource for your customers.
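Pulling the pieces above together, here is one way the whole flow could be wrapped into a single helper. This is a sketch that reuses role_gpt, keywords, model, and max_tokens exactly as defined earlier:

```python
def answer_query(query):
    """Filter first, then call the API only for in-scope questions."""
    if not is_relevant_query(query, keywords):
        return ("I'm sorry, I can only answer questions related to "
                "enrollment, courses, and online learning for K-12.")
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": role_gpt},
            {"role": "user", "content": query},
        ],
        max_tokens=max_tokens,
    )
    return response['choices'][0]['message']['content']

print(answer_query("Tell me about the enrollment process."))
```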
Utilise ChatGPT to Tackle Frequently Asked Questions

Amidst the myriad of queries in customer service, frequently asked questions (FAQs) create a pattern. With ChatGPT, you can transform the typical, monotonous FAQ experience into an engaging and efficient one.

Example: a hospital chatbot

Consider the scenario of a hospital chatbot. Patients might have numerous questions before and after appointments. They might be inquiring about the hospital's visitor policies, appointment scheduling, post-consultation care, or the availability of specialists. A well-implemented ChatGPT can swiftly and accurately tackle these questions, giving relief to both the hospital staff and the patients. Here is a tentative role setting for such a bot:

```python
role_gpt = "You are a friendly assistant for a hospital, guiding users with appointment scheduling, hospital policies, and post-consultation care."
```

This orientation anchors the bot within the healthcare context, offering relevant and timely patient information. For optimal results, a ChatGPT model fine-tuned for this use case is ideal. This enhancement allows for precise, context-aware processing of healthcare-related queries, ensuring your chatbot stands as a trustworthy, efficient resource for patient inquiries.

The approach outlined above can be seamlessly adapted to various other sectors. Imagine a travel agency, where customers frequently inquire about trip details, booking procedures, and cancellation policies. Or consider a retail setting, where questions about product availability, return policies, and shipping details abound. Universities can employ ChatGPT to assist students and parents with admission queries, course details, and campus information. Even local government offices can utilize ChatGPT to provide citizens with instant information about public services, documentation procedures, and local regulations. In each scenario, a tailored ChatGPT, possibly fine-tuned for the specific industry, can provide swift, clear, and accurate responses, elevating the customer experience and allowing human staff to focus on more complex tasks. The possibilities are boundless, underscoring the transformative potential of integrating ChatGPT in customer service across diverse sectors.

Adventures in AI Land

🐙 Octopus Energy: Hailing from the UK's bustling lanes, Octopus Energy unleashed ChatGPT into the wild world of customer inquiries. Lo and behold, handling nearly half of all questions, ChatGPT isn't just holding the fort – it's conquering, earning accolades and outshining its human allies in ratings!

📘 Chegg: Fear not, night-owl students! The world of academia isn't left behind in the AI revolution. Chegg, armed with the mighty ChatGPT (aka Cheggmate), stands as the valiant knight ready to battle those brain-teasing queries when the world sleeps at 2 AM. Say goodbye to the midnight oil blues!

🥤 PepsiCo: Oh, the fizz and dazzle! The giants aren't just watching from the sidelines. PepsiCo, joining forces with Bain & Company, bestowed upon ChatGPT the quill to script their advertisements. Now every pop and fizz of their beverages echoes with the whispers of AI, making each gulp a symphony of allure and refreshment.

Ethical Considerations for Customer Service ChatGPT

In the journey of enhancing customer service with ChatGPT, companies should hold the compass of ethical considerations steadfast. Navigate the AI world with a responsible map that ensures not just efficiency and innovation but also the upholding of ethical standards. Below are the vital checkpoints to ensure the ethical use of ChatGPT in customer service:

- Transparency: Uphold honesty by ensuring customers know they are interacting with a machine learning model. This clarity builds a foundation of trust and sets the right expectations.
- Data privacy: Safeguard customer data with robust security measures, ensuring protection against unauthorized access and adherence to relevant data protection regulations. For further analysis or training, use anonymized data, safeguarding customer identity and sensitive information.
- Accountability: Keep a watchful eye on AI interactions, ensuring the responses are accurate, relevant, and appropriate.
Establish a system for accountability and continuous improvement.
- Legal compliance: Keep the use of AI in customer service within the bounds of relevant laws and regulations, ensuring compliance with AI, data protection, and customer rights laws.
- User autonomy: Ensure customers have the choice to switch to a human representative, maintaining their comfort and ensuring their queries are comprehensively addressed.

Conclusion

To wrap it up (with a bow): if you're all about leveling up your customer service game, ChatGPT's your partner-in-crime. But like any good tool, it's all about how you wield it. So gear up, fine-tune, and dive into this AI adventure!

Author Bio

Amita Kapoor is an accomplished AI consultant and educator with over 25 years of experience. She has received international recognition for her work, including the DAAD fellowship and the Intel Developer Mesh AI Innovator Award. She is a highly respected scholar with over 100 research papers and several best-selling books on deep learning and AI. After teaching for 25 years at the University of Delhi, Amita retired early and turned her focus to democratizing AI education. She currently serves as a member of the Board of Directors for the non-profit Neuromatch Academy, fostering greater accessibility to knowledge and resources in the field. After her retirement, Amita founded NePeur, a company providing data analytics and AI consultancy services. In addition, she shares her expertise with a global audience by teaching online classes on data science and AI at the University of Oxford.


Streamlining Insights with Microsoft Copilot

Gus Frazer
04 Mar 2024
9 min read
This article is an excerpt from the book Data Cleaning with Power BI by Gus Frazer. Unlock the full potential of your data by mastering the art of cleaning, preparing, and transforming data with Power BI for smarter insights and data visualizations.

Introduction

For those who have never heard of Microsoft Copilot, it is a new technology that Microsoft has released across a number of its platforms, combining generative AI with your data to enhance productivity. Copilot for Power BI harnesses cutting-edge generative AI alongside your dataset, revolutionizing the process of uncovering and disseminating insights with unprecedented speed. Seamlessly integrated into your workflow, Copilot offers an array of functionalities aimed at streamlining your reporting experience.

When it comes to report creation, Copilot streamlines the process by allowing users to effortlessly generate reports by articulating the insights they seek or posing questions about their dataset using natural language. Copilot then analyzes the data, pulling together relevant information to craft visually striking reports, thereby transforming raw data into actionable insights instantaneously. Moreover, Copilot can read your data and suggest the best place to begin your analysis, which can then be tailored to suit the direction you want to take the analysis in.

This is great, but how can it help you clean and prepare data for analysis? Copilot can be leveraged in multiple data tools within the Microsoft Fabric platform. For those who are not aware, Power BI has now become part of the Fabric platform. Depending on what type of license you have for Power BI, you might already have access to this: any customer with Premium capacity licensing for the Power BI service would have automatically been given access to Microsoft Fabric and, more importantly, Copilot. That said, Copilot has currently only been made available to customers with a P1 (or above) Premium capacity or an F64 (or above) Fabric license, which is the equivalent licensing available directly from the Azure portal.

If you would like to follow along with the next example, you will need to set up a Fabric capacity within your Azure portal. Don't worry, you can pause this service when it's not being used to ensure you are only charged for the time you're using it. Alternatively, follow the steps to see the outcome:

1. Log in to the Azure portal that you set up in the previous section of this chapter.
2. Select the search bar at the top of the page and type in Microsoft Fabric. Select the service in the menu that appears below the search bar, which should take you to the page where you can manage your capacities.
3. Select Create a Fabric capacity. Note that you will need to use an organizational account in order to create a Fabric capacity, as opposed to a personal account. You can sign up for a Microsoft Fabric trial for your organization within the window. Further details on how to do this are provided here: https://learn.microsoft.com/en-us/power-bi/enterprise/service-admin-signing-up-for-power-bi-with-a-new-office-365-trial.
4. Select the subscription and resource group you would like to use for this Fabric capacity.
5. Then, under capacity details, enter your capacity name. In this example, you can call it cleaningdata.
6. The Region field should populate with the region of your tenant, but you can change this if you like. However, this may have implications on performance, which it should warn you about with a message.
7. Set the capacity to F64.
8. Then, click on Review + create.
9. Review the terms and then click on Create, which will begin the deployment of your capacity.
10. Once deployed, select Go to resource to view your Fabric capacity. Take note that this will be active once deployed. Make sure to return here after testing to pause or delete your Fabric capacity to prevent yourself from getting charged for this service.

Now you will need to ensure you have activated the Copilot settings from within your Fabric capacity. To do this, go to https://app.powerbi.com/admin-portal/ to log in and access the admin portal.

Important tip: If you can't see the Tenant settings tab, you will need to ensure you have been set up as an admin within your Microsoft 365 admin center. If you have just created a new account, you will need to set this up. Follow these links to assign roles:
- https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/assign-admin-roles
- https://learn.microsoft.com/en-us/fabric/admin/microsoft-fabric-admin

11. Scroll to the bottom of Tenant settings until you see the Copilot and Azure OpenAI service (preview) section as shown:

Figure – The tenant settings from within Power BI

12. Ensure both settings are set to Enabled and then click on Apply.

Now that you have created your Fabric capacity, let's jump into an example of how we can use Copilot to help with the cleaning of data. As we have created a new capacity, you will have to create a new workspace that uses it:

1. Navigate back to Workspaces using the left navigation bar. Then, select New Workspace.
2. Name your workspace CleaningData(Copilot), then select the dropdown for advanced configuration settings.
3. Ensure you have selected Fabric capacity in the license mode, which in turn will have selected your capacity below, and then select Apply. You have now created your workspace!
4. Now let's use Fabric to create a new dataflow using the latest update of Dataflow Gen2. Select New from within the workspace and then select More options.
5. This will navigate you to a page with all the possible actions to create items within your Fabric workspace. Under Data Factory, select Dataflow Gen2.
6. This will load a Dataflow Gen2 instance called Dataflow 1. On the top row, you should now see the Copilot logo within the Home ribbon as highlighted:

Figure – The ribbon within a Dataflow Gen2 instance

7. Select Copilot to open the Copilot window on the right-hand side of the page. As you have not connected to any data, it will prompt you to select Get data.
8. Select Text/CSV and then enter the following into the File path or URL box: https://raw.githubusercontent.com/PacktPublishing/Data-Cleaning-with-Power-BI/main/Retail%20Store%20Sales%20Data.csv
9. Leave the rest of the settings as their defaults and click on Next.
10. This will open a preview of the file data. Click on Create to load this data into your Dataflow Gen2 instance. You will see that the Copilot window will have now changed to prompt you as to what you would like to do (if it hasn't, simply close the Copilot window and reopen it):

Figure – Data loaded into Dataflow Gen2

11. In this example, we can see that the data includes a column called Order Date, but we don't have a field for the fiscal year. Enter the following prompt to ask Copilot to help with the transformation: "There's a column in the data named Order Date, which shows when an order was placed. However, I need to create a new column from this that shows the Fiscal Year. Can you extract the year from the date and call this Fiscal Year? Set this new column to type number also."
12. Proceed using the arrow key or press Enter. Copilot will then begin working on your request. As you will see in the resulting output, the model has added a function (or step) called Custom to the query we had selected.
13. Scroll to the far side and you will see that this has added a new column called Fiscal Year.
14. Now add the following prompt to narrow down our data and press Enter: "Can you now remove all columns leaving me with just Order ID, Order Date, Fiscal year, category, and Sales?"
15. This will add another function or step called Choose columns. Finally, add the following prompt to aggregate this data and press Enter: "Can you now group this data by Category, Fiscal year, and aggregated by Sum of Sales?"

As you can see, Copilot has now added another function called Custom 1 to the applied steps in this query, resulting in this table:

Figure – The results from asking Copilot to transform the data

To view the M query that Copilot has added, select Advanced editor, which will show the functions that Copilot has added for you:

Figure – The resulting M query created by Copilot to carry out the requested transformations to clean the data
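As a side note (not part of the book's walkthrough), the same transformation can be sanity-checked outside Fabric with pandas. The sketch below assumes the column names quoted in the prompts above and the CSV URL from step 8:

```python
import pandas as pd

# Load the same retail sales CSV used in the Copilot example
url = ("https://raw.githubusercontent.com/PacktPublishing/"
       "Data-Cleaning-with-Power-BI/main/Retail%20Store%20Sales%20Data.csv")
df = pd.read_csv(url)

# Extract the year as a numeric Fiscal Year column (assumes parseable dates)
df["Order Date"] = pd.to_datetime(df["Order Date"])
df["Fiscal Year"] = df["Order Date"].dt.year

# Keep only the requested columns, then group and sum sales
df = df[["Order ID", "Order Date", "Fiscal Year", "Category", "Sales"]]
summary = df.groupby(["Category", "Fiscal Year"], as_index=False)["Sales"].sum()
print(summary.head())
```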
In this example, you explored the new technologies available with Copilot and how they help to transform data using tools such as Dataflow Gen2. While it's great to understand the amazing possibilities AI brings to data, it's also crucially important that you understand the challenges it presents.

Conclusion

In conclusion, Microsoft Copilot offers a groundbreaking approach to enhancing productivity and efficiency in data analysis and report generation within Power BI. By seamlessly integrating generative AI technology, Copilot revolutionizes the way insights are discovered and data is prepared, providing users with unprecedented speed and accuracy. Whether streamlining report creation or optimizing data management tasks, Copilot empowers users to unlock the full potential of their data, paving the way for more informed decision-making and actionable insights.

Author Bio

Gus Frazer is a seasoned analytics consultant focused on business intelligence solutions. With over seven years of experience working with the two market-leading platforms, Power BI and Tableau, he has amassed a wealth of knowledge and expertise. Gus has helped hundreds of customers to drive their digital and data transformations, scope data requirements, drive actionable insights, and, most important of all, cleanse data ready for analysis. Most recently he helped to set up, organize, and run the Power BI UK community at Microsoft. He holds six Azure and Power BI certifications, including PL-300 and DP-500. In this book, Gus offers readers invaluable guidance on ingesting, preparing, and cleansing data for analysis in Power BI.


AI_Distilled 38: Latest in AI: Sora, Gemini 1.5, and More

Merlyn Shelley
01 Mar 2024
9 min read
👋 Hello,

"People say AI is overhyped, but I think it's not hyped enough. The next generation who will use this in the next few years will have a much higher bar on what technology can do for them. So how you build it for that generation, how you build it for that future will be really interesting to see."

– Puneet Chandok, Microsoft India and South Asia president

Speaking at a panel discussion on AI at the Mumbai Tech Week, Chandok believes AI is not hyped enough considering its potential for disruptive transformation. He encourages more training on AI to realize its full potential.

Welcome back to a new issue of AI_Distilled - your one-stop destination for all things AI, ML, NLP, and Gen AI. Let's get started with the latest news and developments across the AI sector:

- OpenAI unveils Sora, an AI model generating videos from text
- Google's latest conversational AI model Gemini 1.5 has a million-token context window
- New AI news reader app tackles clickbait headlines, provides summaries
- Slack is rolling out new AI features for enterprise users, including thread summaries
- LangChain announced raising $25 million to launch a new platform for building LLM apps
- AI helps improve medical imaging to benefit patients globally
- Researchers develop an AI model that determines a person's sex from brain scans

We've also curated the latest GPT and LLM resources, tutorials, and secret knowledge:

- Giving AI Models a Better Memory: How Google DeepMind Expanded Context Windows
- Advanced Techniques for More Relevant AI Responses
- Reinforcement Learning Explained
- Bridging the Gap Between AI and App Development

Finally, don't forget to check out our hands-on tips and strategies from the AI community for you to use on your own projects:

- Creating Custom Models Without the Hassle of Data Collection
- Code Your Own AI Coding Buddy
- Evaluating Code Quality with AI Assistants
- Easily Deploy Language Models Locally

Looking for some inspiration? Here are some GitHub repositories to get your projects going!

- gptscript-ai/gptscript
- karpathy/minbpe
- AAAI-DISIM-UnivAQ/DALI
- QwenLM/Qwen

Writer's Credit: Special shout-out to Vidhu Jain for her valuable contribution to this week's issue.

Cheers,
Kartikey Pandey
Editor-in-Chief, Packt

⚡ TechWave: AI/GPT News & Analysis

OpenAI unveiled Sora, an AI model generating videos from text at up to a minute in length. Sora demonstrates an understanding of language and the physical world and photorealism across styles, though human subjects appear game-like.

Google's latest conversational AI model Gemini 1.5 analyzes more information than before, thanks to a million-token context window. This allows for summarizing the Apollo 11 mission transcript or analyzing a 44-minute silent film in full. Early results show the system maintains performance as context grows into the millions.

Bulletin, a new AI-powered news reader app, tackles clickbait headlines and provides summaries of news articles with customizable news sources.

Slack is rolling out new AI features for enterprise users, including thread summaries, channel recaps, and answers to workplace questions. The tools provide highlights from missed messages and help users catch up.

LangChain announced raising $25 million to launch its new platform, LangSmith, for building and monitoring LLM apps. LangSmith allows developers to accelerate workflows across development, testing, deployment, and monitoring.
It has already seen significant adoption, with over 70,000 signups and 5,000 monthly active companies.

Courtesy: Bulletin/Shihab Mehboob

AI is helping improve medical imaging to benefit patients globally. ML can quickly analyze large datasets to find issues doctors may miss and flag urgent cases. Cloud solutions also enable sharing scans and remote expert assistance anywhere. Companies are applying these methods to speed diagnoses, reduce wait times, and bring ultrasounds directly to homes. Researchers have also developed an AI model that can determine a person's sex from brain scans with over 90% accuracy. The model analyzed dynamic MRI scans and identified the default mode, striatum, and limbic networks as key in distinguishing male and female brains. This breakthrough furthers our understanding of brain organization and could help address sex-specific health issues.

🔮 Expert Insights from Packt Community

Generative AI with LangChain - by Dr. Ben Auffarth

ChatGPT and the GPT models by OpenAI have brought about a revolution not only in how we write and research but also in how we can process information. This book discusses the functioning, capabilities, and limitations of LLMs underlying chat systems, including ChatGPT and Bard. It also demonstrates, in a series of practical examples, how to use the LangChain framework to build production-ready and responsive LLM applications for tasks ranging from customer support to software development assistance and data analysis.

Key takeaways:
- Explore the expansive utility of LLMs in real-world applications.
- Guidance on fine-tuning, prompt engineering, and best practices.
- Learn how to use the LangChain framework to build production-ready LLM applications.

By the end of this book, you'll be equipped with the practical knowledge and skills to leverage the transformative power of generative AI with confidence and creativity.

🌟 Secret Knowledge: AI/LLM Resources

🌀 Giving AI Models a Better Memory: How Google DeepMind Expanded Context Windows: Google DeepMind's latest AI model Gemini 1.5 has significantly improved how much information it can process at once, thanks to advances in "long context windows." The team discovered their model could understand over 1 million pieces of information in a single sitting, far surpassing earlier limits. This opens up new possibilities for tasks like summarizing lengthy documents, analyzing large codebases, and even comprehending full movies. Developers are excited to explore creative uses of this expanded recall.

🌀 Advanced Techniques for More Relevant AI Responses: This article discusses how to improve AI conversation models like RAG by enhancing how information is stored, found, and used. Methods covered include indexing sentences individually while keeping their surrounding context, combining keyword search with semantic search, and re-scoring results based on the question. The author demonstrates implementing these "advanced RAG" techniques in Python using tools like LlamaIndex and Weaviate. With these optimizations, AI systems can provide more helpful responses by accessing knowledge in a targeted manner.

🌀 Reinforcement Learning Explained: This article breaks down the key concepts of reinforcement learning in an easy-to-understand way. It covers states, actions, rewards, and how agents interact with environments to learn policies. RL agents try different strategies to maximize long-term rewards through trial and error. Episodes provide a framework to evaluate policies.
Deterministic policies pick set actions, while stochastic policies use probabilities. Whether you're new to RL or a veteran, this primer is worth a read to get acquainted with the basics.

🌀 Bridging the Gap Between AI and App Development: As AI becomes more advanced, developers need easier ways to integrate cutting-edge features into their work. However, directly using AI code frameworks can be challenging and limit scalability. The solution? AI gateways. By handling tasks like routing, caching, and monitoring behind the scenes, gateways act as a bridge between complex AI systems and traditional development workflows. They streamline the integration process while ensuring high performance. Are gateways the future of intelligent applications?

Partnering with Notion: Ever tried Notion? It's a workspace that helps you do things better and faster. You get AI for notes and teamwork, easy drag-and-drop for content, and cool new features to help manage projects and share knowledge. Give it a try!

🔛 Masterclass: AI/LLM Tutorials

🌀 Creating Custom Models Without the Hassle of Data Collection: Tired of spending big bucks to use proprietary AI APIs or going through the tedious process of collecting your own training data? This page shows how you can train customized models more efficiently. By using an open-source LLM to generate synthetic annotations for a small sample of your data, you can then fine-tune a smaller model tailored exactly to your needs. The process takes just a few steps and allows you to analyze large datasets for a fraction of the cost. Best of all, you avoid sending sensitive data to third parties.

🌀 Code Your Own AI Coding Buddy: This guide shows you how to build an AI assistant that lives right on your computer. Using tools like HuggingFace and Streamlit, you can create a chatbot trained on Code Llama. Simply ask it questions and it will respond with examples in languages like Python, Java, and C++. Better yet, the models are free and open source. This is a neural-net sidekick to help automate repetitive tasks and speed up your workflow.

🌀 Evaluating Code Quality with AI Assistants: This article explores using AI to improve code quality by testing Python scripts with SonarQube and getting feedback from LLMs. The author ran tests on ChatGPT and open-source models like Code Llama to see if they could identify issues flagged by SonarQube. While the models struggled to pinpoint errors solely from descriptions, some provided insightful summaries. Continued development of coding-focused LLMs may help automate part of the review process.

🌀 Easily Deploy Language Models Locally: With a simple four-step process, you can get powerful language models like ChatGPT running on your own hardware. First, choose a model from HuggingFace and quantize it for faster performance. Then build an Ollama image to serve the model. For a slick interface, deploy a ChatGPT-style React app talking to Ollama via Docker. The whole setup only takes around 15 minutes.
Now you've got a custom language assistant without internet dependence.🚀 HackHub: Trending AI Tools🌀 gptscript-ai/gptscript: Open source NLP tool that allows developers to automate tasks by writing scripts in plain English.🌀 karpathy/minbpe: Minimal and clean Python code for the byte pair encoding algorithm commonly used in NLP and language model tokenization.🌀 AAAI-DISIM-UnivAQ/DALI: Framework allowing developers to build multi-agent systems in Prolog for applications like robotics, event processing, and more.🌀 QwenLM/Qwen: Open source code, models, and documentation for the Qwen series of LLMs, including Qwen, Qwen-Chat, and their various sizes.
Getting Started with Microsoft Guidance

Prakhar Mishra
28 Feb 2024
8 min read
Introduction

The emergence of massive language models is a watershed moment in the field of artificial intelligence (AI) and natural language processing (NLP). Because of their extraordinary capacity to write human-like text and perform a range of language-related tasks, these models, which are based on deep learning techniques, have earned considerable interest and acceptance. The field has undergone significant scientific developments in recent years, with researchers all over the world developing better and more domain-specific LLMs to meet the needs of various use cases.

Large Language Models (LLMs) such as GPT-3 and its descendants, like any technology or strategy, have downsides and limits, and in order to use LLMs properly, ethically, and to their maximum capacity, it is critical to grasp those limitations. Unlike very large models such as GPT-4, which can follow the majority of instructions, language models that are not equivalently large (such as GPT-2, LLaMa, and its derivatives) frequently fail to follow instructions adequately, particularly the part of an instruction that asks for output in a specific structure. This causes a bottleneck when constructing a pipeline in which the output of the LLM is fed to downstream functions.

Introducing Guidance: an effective and efficient means of controlling modern language models compared to conventional prompting methods. It supports both open LLMs (LLaMa, GPT-2, Alpaca, and so on) and closed LLMs (ChatGPT, GPT-4, and so on), and can be considered part of a larger ecosystem of tools for expanding the capabilities of language models.

Guidance uses Handlebars, a templating language. Handlebars lets us build semantic templates effectively by compiling templates into JavaScript functions, making execution faster than in other templating engines. Guidance also integrates well with Jsonformer, a bulletproof way to generate structured JSON from language models. Here's a detailed notebook on the same. Also, in case you use OpenAI from Azure AI, Guidance has you covered - notebook.

Moving on to some of the outstanding features that Guidance offers. Feel free to check out the entire list of features.

Features

1. Guidance Acceleration - This feature significantly improves inference performance by efficiently utilizing the key/value caches as we proceed through the prompt, keeping a session state with the LLM inference. Benchmarking revealed a 50% reduction in runtime compared to standard prompting approaches. Here's the link to one of the benchmarking exercises. The image below shows an example of generating a character profile for an RPG game in JSON format. The green highlights are the generations done by the model, whereas the blue and unhighlighted parts are copied as-is from the input prompt, unlike the traditional method that tries to generate every bit of it.

Source

Note: As of now, the Guidance Acceleration feature is implemented for open LLMs. We can soon expect to see it working with closed LLMs as well.
2. Token Healing - This feature attempts to correct tokenization artifacts that commonly occur at the border between the end of a prompt and the start of a group of generated tokens.

For example, if we ask an LLM to auto-complete a URL with the input shown below, it is likely to produce the shown output. Apart from the obvious limitation that the URL might not be valid, note the extra space it creates (highlighted in red). Such artifacts make it difficult to construct a dependable parsing function and robustly absorb the result into subsequent phases.

Input: "The link is <a href=http:"
Actual Output: "The link is <a href=http: //www.google.com/search?q"
Expected Output: "The link is <a href=http://www.google.com/search?q"

This is the exact bucket of problems that Token Healing tries to solve using a backtracking method. Feel free to check out this jupyter notebook for more examples.
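To see what this looks like in code, here is a minimal sketch, assuming the pre-1.0 guidance API used later in this article and GPT-2 as the backing model; the prompt deliberately ends mid-token so token healing can take effect:

```python
import guidance

# Load an open model; for Transformers backends, guidance applies token healing
# automatically when generating
guidance.llm = guidance.llms.Transformers("gpt2")

# The prompt ends with the partial token "http:"; guidance backs up one token
# so the model can complete the URL without inserting a spurious space
program = guidance("""The link is <a href=http:{{gen 'url' max_tokens=10}}""")
result = program()
print(result["url"])
```

The exact completion will vary by model, but the point is that the boundary between prompt and generation is repaired rather than left as a hard token split.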
3. Guaranteed Output Structure - Large language models are fantastic at producing useful outputs, but not so much at producing outputs in a specified format (especially open-source ones like LLaMa, GPT-2, and so on). When we want to use the output of a language model as input to another system, this is frequently an issue. With Handlebars, Guidance guarantees that the output format will be the same as what was asked for.

Let's now see Guidance in action.

Installation

Installing guidance is a breeze, just do a pip install:

$ pip install guidance

Assume we are now creating a product description for an e-commerce website. Here's how traditional generation compares to guidance generation. Feel free to play with this colab notebook with both the examples below.

Traditional Generation

Input:

Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of fixed set of fields to be filled in the JSON. The below shown JSON is the structure of the description with constraints for each of the attributes enclosed within < > brackets. Please follow the mentioned constraint and format diligently. { prod_id: <numeric value of 5 digits>, prod_name: <name starts with the prefix 'p_'>, prod_price: <should be an integer between 1 and 16. Should end with suffix '$'> } The product description is

Output:

Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of fixed set of fields to be filled in the JSON. The below shown JSON is the structure of the description with constraints for each of the attributes enclosed within < > brackets. Please follow the mentioned constraint and format diligently. { prod_id: <numeric value of 5 digits>, prod_name: <name starts with the prefix 'p_'>, prod_price: <should be an integer between 1 and 16. Should end with suffix '$'> } The product description is { resentprod_id: <numeric value of 5 digits>, resentprod_name: <name begins with the prefix 'p_'>, resentprod_price: <should be an integer between 1 and 16. Should end with suffix '$'> } In the above example, the product description has 5 constraint fields and 5 attribute fields. The constraints are as follows: resentprod_id: - value of 5 digits, resentprod_name: - name of the product, resentprod_price: - price of the product, resentprod_price_suffix: - suffix of the product price, resentprod_id: - the product id, resentpro diabetic_id: value of 4 digits, resentprod_ astronomer_id: - value of 4 digits, resentprod_ star_id: - value of 4 digits, resentprod_is_generic: - if the product is generic and not the generic type, resentprod_type: - the type of the product, resentprod_is_generic_type

As you can see, instead of filling in the template, the model echoes and mangles it: that garbled continuation is GPT-2's actual output, and it is exactly the failure mode Guidance addresses.

Here's the code for the above example with the GPT-2 language model (lightly cleaned up so it runs end to end; input_text holds the prompt shown above):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

inputs = tokenizer(input_text, return_tensors="pt")  # input_text is the prompt shown above
tokens = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))  # produces the output shown above
```

Guidance Generation

Input w/ code:

import guidance

guidance.llm = guidance.llms.Transformers("gpt2-large")  # the original text reads "gpt-large"; "gpt2-large" matches the earlier example

# define the prompt (a raw string so the regex patterns survive intact)
program = guidance(r"""Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of fixed set of fields to be filled in the JSON. The following is the format
```json
{
    "prod_id": "{{gen 'id' pattern='[0-9]{5}' stop=','}}",
    "prod_name": "{{gen 'name' pattern='p_[A-Za-z]+' stop=','}}",
    "prod_price": "{{gen 'price' pattern='\b([1-9]|1[0-6])\b\$' stop=','}}"
}```""")

# execute the prompt
output = program()

Output:

Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of a fixed set of fields to be filled in the JSON. The following is the format
```json
{
    "prod_id": "11231",
    "prod_name": "p_pizzas",
    "prod_price": "11$"
}```

As seen in the preceding examples, with guidance we can be certain that the output format will be followed within the given restrictions, no matter how many times we execute the identical prompt. This capability makes it an excellent choice for constructing dependable and robust multi-step LLM pipelines.

I hope this overview of Guidance has helped you realize the value it can provide to your daily prompt development cycle. Also, here's a consolidated notebook showcasing all the features of Guidance; feel free to check it out.

Author Bio

Prakhar has a Master's in Data Science with over 4 years of industry experience across sectors such as Retail, Healthcare, and Consumer Analytics. His research interests include Natural Language Understanding and generation, and he has published multiple research papers in reputed international publications in the relevant domain. Feel free to reach out to him on LinkedIn.
Revolutionize Power BI Queries with OpenAI

Gus Frazer
27 Feb 2024
10 min read
This article is an excerpt from the book, Data Cleaning with Power BI, by Gus Frazer. Unlock the full potential of your data by mastering the art of cleaning, preparing, and transforming data with Power BI for smarter insights and data visualizations.

Introduction

Discover the transformative potential of leveraging Azure OpenAI, integrated with ChatGPT functionality, to enhance Power BI's M query capabilities. In this article, we delve into how this powerful combination offers expert guidance, efficient solutions, and insightful recommendations for optimizing data transformation tasks. From generating M queries to streamlining complex transformations, explore how Azure OpenAI with ChatGPT empowers users to boost productivity and efficiency in Power BI.

Using OpenAI for M queries

Azure OpenAI, with ChatGPT functionality within it, can be a helpful tool for generating M queries in Power BI by providing suggestions, helping with syntax, and offering insights into data transformation tasks. In the following example, you will learn how you can leverage the chat playground within OpenAI to improve your productivity and efficiency when writing M queries. We will do this by asking a series of questions directly within Azure OpenAI.

Complete the next steps to follow along with the example in your own environment:

1. Click on Deployment on the left-hand side and then select Create new deployment to get started.
2. Select a model from the base models, in this case gpt-35-turbo, and then name your deployment. In this example, name it CleaningDataOpenAI.
3. Select Chat playground from the Azure OpenAI Studio home screen or from the Playground tab on the left of your screen. This should open the following view:

Figure 15.4 – Screenshot of the chat playground from within Azure OpenAI

Now that you have set up an instance of OpenAI, here's an example conversation taken from OpenAI, where the chat functionality has been used to assist with writing M queries in Power BI:

User: Hi, I'm working on a Power BI project, and I need to transform some data using M queries. I have a table with sales data, and I want to calculate the total sales for each product category. Can you help me with the M query for that?

ChatGPT: Of course! To calculate the total sales for each product category, you can use the Group By transformation in Power BI's Power Query Editor. Here's a sample M snippet for your scenario:

let
    Source = YourDataSource,
    GroupedTable = Table.Group(Source, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

Replace YourDataSource with the actual name of your data source. This code groups the data by the ProductCategory column and calculates the total sales for each group.

User: Thanks! That's a great starting point. What if I want to filter out products with sales less than $1,000 before calculating the total sales?

ChatGPT: You can add a filtering step before the Group By transformation. Here's an updated version of the M query:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}})
in
    GroupedTable

This code filters out rows where sales are less than $1,000 before grouping the data by product category and calculating the total sales.

User: Perfect!
This is really helpful. One more thing – what if I want to sort the results in descending order based on total sales?

ChatGPT: You can add a sorting step using the Table.Sort function. Here's the modified M query:

let
    Source = YourDataSource,
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}}),
    SortedTable = Table.Sort(GroupedTable, {{"TotalSales", Order.Descending}})
in
    SortedTable

This code sorts the grouped table in descending order based on the "TotalSales" column.

This interaction represents a simple example of how those who can leverage the power of tools such as OpenAI and ChatGPT will be able to quickly upskill in areas such as coding. It has to be said, though, that while this is still in its infancy, it's important to always test and validate the answers provided before implementing them in production. Also, ensure that you take precautions when using the publicly available ChatGPT model to avoid sharing sensitive data publicly. If you would like to use sensitive data or you want to ensure that requests are handled within a secured, governed environment, make sure to use the ChatGPT model within your own Azure OpenAI instance.

In more complex examples, optimizing Power Query transformations could involve efficient interaction with Azure OpenAI. This includes streamlining API calls, managing large datasets, and incorporating caching mechanisms for repetitive queries, ensuring a seamless and performant data cleaning process.

As we begin to explore the use cases where this technology can be most effective, there are a number of clear early winners:

Optimizing query plans: ChatGPT's natural language understanding can assist in formulating more efficient Power Query plans. By describing the desired transformations in natural language, users can interact with ChatGPT to generate optimized query plans. This involves selecting the most suitable Power Query functions and structuring transformations for performance gains.

Caching strategies for repetitive queries: ChatGPT can guide users in devising effective caching strategies. By understanding the context of data transformations, it can recommend where to implement caching mechanisms to store and reuse intermediate results, minimizing redundant API calls and computations. The following is an example of just this, where I have asked Azure OpenAI to verify and optimize my query from the Power Query Advanced Editor. The model suggested I use the Table.Buffer function to help cache the table in memory and optimize the query.

Figure – An example request to OpenAI to help optimize my query for Power Query

Figure – An example response from OpenAI to help optimize my query for Power Query

Now, as we highlighted in Chapter 11, M Query Optimization, Table.Buffer can indeed improve the performance of your queries and refreshes, but this really depends on the data you are working with. In the example above, the model doesn't take the characteristics, size, or complexity of your data into consideration, as it isn't plugged into your data at this stage. Also, linking back to the example you walked through in Chapter 11, the placement of where you add Table.Buffer can really impact how your query performs.
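To make the placement point concrete, here is a hedged sketch in M (the source and column names are hypothetical) showing buffering applied as the final step, after the filter and group operations that can fold back to the source:

let
    Source = Sql.Database("server", "SalesDb"),    // hypothetical folding source
    FilteredTable = Table.SelectRows(Source, each [Sales] >= 1000),    // folds to the source
    GroupedTable = Table.Group(FilteredTable, {"ProductCategory"}, {{"TotalSales", each List.Sum([Sales]), type number}}),
    BufferedTable = Table.Buffer(GroupedTable)    // buffering last keeps the earlier steps foldable
in
    BufferedTable

Moving the Table.Buffer call up to the second step would force the whole table into memory before filtering, which is exactly the pattern discussed next.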
In the earlier OpenAI example, if you were connecting to a small dataset, you would likely cause it to run slower by adding the Table.Buffer function as the second variable in the query.

Lastly, it's worth mentioning that how you prompt these models is crucially important. In that example, we didn't specify what type of data source we were using in our query. As such, the model hasn't pointed out that using Table.Buffer on a data source that supports query folding will break the fold. Again, this is not so much of a problem if Table.Buffer is placed at the end of your query for smaller datasets, but it is a problem if you add it nearer to the beginning of the query.

Handling large datasets: Dealing with large datasets often poses a challenge in Power Query. OpenAI models, including ChatGPT, can provide insights into dividing and conquering large datasets. This includes strategies for parallel processing, filtering data early in the transformation pipeline, and using aggregations to reduce computational load.

Dynamic query adjustments: ChatGPT's interactive nature allows users to dynamically adjust queries based on evolving requirements. It can assist in crafting queries that adapt to changing data scenarios, ensuring that Power Query transformations remain flexible and responsive to varied datasets.

Guidance on complex transformations: Power Query often involves intricate transformations. ChatGPT can act as a virtual assistant, guiding users through the process of complex transformations. It can suggest optimal function compositions, advise on conditional logic placement, and assist in structuring transformations to enhance efficiency. The best example of this can be seen in the following two screenshots of an active use case seen in many businesses. The example begins with a user asking the model for a description of what the query is doing. OpenAI then provides a breakdown of what the query is doing in each step to help the user interpret the code. It helps to break down the barriers to coding and also helps to decipher code that has not been documented well by previous employees.

Figure – An example request to OpenAI to help translate my query

Figure – An example response from OpenAI to help describe my query

Error handling strategies: Optimizing Power Query also entails robust error handling. ChatGPT can provide recommendations for anticipating and handling errors gracefully within a query. This includes strategies for logging errors, implementing fallback mechanisms, and ensuring the stability of the overall data preparation process.

In this section, you learned how to optimize Power Query transformations with Azure OpenAI efficiently. Key takeaways include using ChatGPT for natural-language-based query planning and effective caching strategies. Insights include handling large datasets through parallel processing, early filtering, and aggregations. This knowledge equips you to streamline and enhance your Power Query processes effectively.

In the next section, you will learn about Microsoft Copilot, how to set up a Power BI instance with Copilot activated, and how you can use this new AI technology to help clean and prepare your data.

Conclusion

In conclusion, Azure OpenAI with ChatGPT presents a game-changing solution for maximizing Power BI's potential. From query optimization to error-handling strategies, this integration streamlines processes and enhances productivity.
As users navigate complex data transformations, the guidance provided fosters efficient decision-making and empowers users to tackle challenges with confidence. With Azure OpenAI and ChatGPT, the possibilities for revolutionizing Power BI workflows are endless, offering a glimpse into the future of data transformation and analytics.

Author Bio

Gus Frazer is a seasoned analytics consultant focused on business intelligence solutions. With over 7 years of experience working with the two market-leading platforms, Power BI and Tableau, he has amassed a wealth of knowledge and expertise. Gus has helped hundreds of customers to drive their digital and data transformations, scope data requirements, drive actionable insights, and, most important of all, cleanse data ready for analysis. He most recently helped to set up, organize, and run the Power BI UK community at Microsoft. He holds six Azure and Power BI certifications, including PL-300 and DP-500. In this book, Gus offers readers invaluable guidance on ingesting, preparing, and cleansing data for analysis in Power BI.
Setting Up Polars for Data Analysis

Luca Zanna
23 Feb 2024
7 min read
This article is an excerpt from the book, Data Analysis with Polars, by Luca Zanna. Leverage Polars, the lightning-fast dataframe library, to take your Python data analysis skills to the next level.

Introduction

In the ever-evolving landscape of data analysis, harnessing the right tools and methodologies can make all the difference. Welcome to a world where Polars, a powerful data manipulation library, takes center stage. This article is your gateway to unlocking the potential of Polars, and it begins by unraveling the essential components of the data analysis journey. From setting up virtual environments to simplifying data analysis in the cloud with Google Colab, we explore how Polars streamlines your path to insights. Whether you're a seasoned data analyst or just starting your journey, this guide will equip you with the knowledge and tools needed to make your data analysis endeavors efficient and rewarding. Join us as we delve into the fascinating realm of Polars and embrace a new era of data exploration.

Installation and virtual environments

We will not go through the installation of Python, as that is outside the scope of the book. A visit to python.org will give you all the information necessary to install Python. Now on to virtual environments.

Understanding virtual environments and their benefits

Imagine you have built a fantastic data analysis project using Polars. Your project uses:

Python 3.8
Polars version 0.15.1
Numpy 1.23.0

Now you start a new project, and you want to use a newer Polars (0.16.14), along with Numpy and Arrow. So, the new project requires:

Python 3.10
Polars 0.16.14
Numpy 1.24.0
Pyarrow 11.0.0

Upgrading the Polars and Numpy libraries globally isn't a good idea. If Polars functions have changed between versions, your first project might stop working or give incorrect results with the new version. This is where virtual environments come in. Virtual environments create separate 'spaces' for each project: one for your first data analysis project and another for your new data pipeline project. You can set up a virtual environment manually or have your IDE set up a virtual environment for you. If you decide to set it up manually, you can check out the guide at https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment.
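If you go the manual route, the essential commands are short. Here is a sketch, assuming a Unix-like shell and an environment folder named env (the Windows activation path differs, as noted in the comment):

python -m venv env            # create a virtual environment in a folder named "env"
source env/bin/activate       # activate it on macOS/Linux; on Windows use env\Scripts\activate
pip list                      # confirm you are starting from a clean set of packages

Each project gets its own env folder, so the two Polars versions from the example above never collide.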
Installing and using Polars on a machine

To install Polars, first make sure you are in a virtual environment. Then, type:

pip install polars

If you already have Polars installed and want to upgrade it, type:

pip install polars --upgrade

In the book we will use other libraries, including numpy, pandas, and matplotlib. You can install them with the syntax above, and you can also install multiple libraries at the same time:

pip install numpy pandas matplotlib

Let's now get our development environment set up. We will use Visual Studio Code, but you are free to use any other IDE that you like.

1. Type code . in the command line to open Visual Studio Code.
2. Right-click on the left, choose New File, and create first_dataframe.ipynb.

Figure – Creating a new file in Visual Studio Code

Files with the extension .ipynb are Jupyter Notebook files, which are great for data analysis. To work with these files you need to install the Jupyter extension in VS Code. You can do that by clicking on 'Extensions' on the left bar, searching for Jupyter, installing it, and activating it.

Figure – Install Jupyter extension in Visual Studio Code

3. Now back to our file. The first thing to ensure is that we are using Python from our virtual environment. Click on Select Kernel at the top right, then click on the Python that starts with env/: that will be the Python for our virtual environment. Avoid the paths starting with /usr and /bin, as those are the system Python instead of our virtual environment.

Figure – Select the Python interpreter in Visual Studio Code

Now, we're ready for Polars.

4. Type import polars as pl in the first cell and press Shift + Enter to run it.
5. Create a dataframe in the next cell by typing:

df = pl.DataFrame({
    'a': ['Hello', 'World!']
})

6. Press Shift + Enter to run the cell. This creates a dataframe called df with one column named 'a' and two rows: 'Hello' and 'World!' To see the dataframe, type df in the next cell and run it.

Figure – Visual Studio Code with first Polars dataframe

We created our first Polars dataframe.

Using Polars on the cloud with Google Colab

Instead of installing Polars on your computer, you can also use it in the cloud. One popular cloud service for running code is Google Colab. This way, you don't need to install anything on your machine. To access Google Colab, visit https://colab.research.google.com/ in your web browser. Click on "New Notebook," and you'll see a page that looks similar to VS Code. Now, let's create the same Polars dataframe example in Google Colab:

1. In the first cell, type the following command to ensure we have the latest version of Polars:

%pip install polars --upgrade

2. Next, enter this code to import Polars and create a dataframe:

import polars as pl
df = pl.DataFrame({
    'a': ['Hello', 'World !']
})

3. Finally, display the dataframe by typing:

df

And that's it! You now have your first Polars dataframe in Google Colab.

Figure – Google Colab with first Polars dataframe

Conclusion

In closing, Polars offers a bridge to the future of data analysis. With the knowledge and hands-on experience gained from this article, you're well-prepared to conquer the intricacies of data manipulation and visualization. The ability to effortlessly create, manipulate, and analyze data using Polars is a powerful tool in your arsenal. Whether you're a data enthusiast or a seasoned analyst, embracing Polars sets you on a path toward efficiency, precision, and data-driven success. As the data landscape continues to evolve, you're now equipped to stay ahead, make informed decisions, and revolutionize your approach to data exploration.

Author Bio

Luca Zanna is a Data Engineer and Data Analyst with over 15 years of experience. He started his career as a financial data analyst after a Master's in Management and passing the Certified Public Accountant (CPA) exam. Luca spent a decade working on financial analysis systems at L'Oréal: developing the systems and training financial analysts across Europe and Asia. Currently, Luca helps companies with building data infrastructure to better leverage their data. Luca is also a corporate teacher for topics such as data analysis, SQL, Python, and cloud data engineering.
Enhancing Data Quality with Cleanlab

Prakhar Mishra
21 Feb 2024
8 min read
Introduction

It is a well-established fact that your machine-learning model is only as good as the data it is fed. An ML model trained on bad-quality data usually suffers from a number of issues. Here are a few ways that bad data might affect machine-learning models:

1. Predictions that are wrong may be made as a result of errors, missing values, or other irregularities in low-quality data. The model's predictions are likely to be inaccurate if the data used for training is unreliable.
2. Bad data can also bias the model. The ML model can learn and reinforce these biases if the data is not representative of real-world situations, which can result in predictions that are discriminatory.
3. Poor data also weakens the ability of the ML model to generalize to fresh data, as it may not effectively depict the underlying patterns and relationships in the data.
4. Models trained on bad-quality data might need more retraining and maintenance. The overall cost and complexity of model deployment could rise as a result.

As a result, it is critical to devote time and effort to data preprocessing and cleaning in order to reduce the impact of bad data on ML models. Furthermore, to ensure the model's dependability and performance, it is often necessary to use domain knowledge to recognize and address data quality issues.

It might come as a surprise, but gold-standard datasets like ImageNet, CIFAR, MNIST, 20News, and more also contain labeling issues. I have put some examples below for reference.

The first snippet is from the Amazon sentiment review dataset, where the original label was Neutral in both cases, whereas Cleanlab and Mechanical Turk said it to be positive (which is correct).

The second snippet is from the MNIST dataset, where the original labels were marked as 8 and 0 respectively, which is incorrect. Instead, both Cleanlab and Mechanical Turk said them to be 9 and 6 (which is correct).

Feel free to check out labelerrors to explore more such cases in similar datasets.

Introducing Cleanlab

This is where Cleanlab can come in handy as your best bet. It helps by automatically identifying problems in your ML dataset; it assists you in cleaning both data and labels. This data-centric AI software uses your existing models to estimate dataset problems that can be fixed to train even better models. The graphic below depicts the typical data-centric AI model development cycle.

Apart from the standard way of coding all the way through finding data issues, it also offers Cleanlab Studio, a no-code platform for fixing all your data errors. For the purpose of this blog, we will go the former way with our sample use case.

Getting Hands-on with Cleanlab

Installation

Installing cleanlab is as easy as doing a pip install. I recommend installing the optional dependencies as well; you never know what you need and when. I also installed sentence-transformers, as we will use them for vectorizing the text. Sentence-transformers come with a bag of many amazing models; we particularly use 'all-mpnet-base-v2' as our choice of sentence transformer for vectorizing text sequences. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
Feel free to check out this page for the list of all models and their comparisons.

pip install 'cleanlab[all]'
pip install sentence-transformers

Dataset

We picked the SMS Spam Detection dataset as our dataset for the experimentation. It is a public set of labeled SMS messages that have been collected for mobile phone spam research, with roughly ~5.5k instances in total. The graphic below gives a sneak peek at some samples from the dataset.

Data Preview

Code

Let's now delve into the code. For demonstration purposes, we inject 5% noise into the dataset and see if we are able to detect it and eventually train a better model. I have annotated the code wherever necessary for better understanding; the listing below is a cleaned-up version of the original, with the train/test split (which the later steps rely on) made explicit:

```
import pandas as pd
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer
from cleanlab.classification import CleanLearning
from sklearn.metrics import f1_score

# Reading and renaming data. We set sep='\t' because the data is tab separated,
# and header=None so the two columns can be renamed below.
data = pd.read_csv('SMSSpamCollection', sep='\t', header=None)
data.rename({0: 'label', 1: 'text'}, inplace=True, axis=1)

# Dropping any duplicate instances that could exist
data.drop_duplicates(subset=['text'], keep=False, inplace=True)

# Original data distribution for spam and not-spam (ham) categories
print(data['label'].value_counts(normalize=True))
# ham     0.865937
# spam    0.134063

# Adding noise: switching 5% of ham data to the 'spam' label
tmp_df = data[data['label'] == 'ham']
examples_to_change = int(tmp_df.shape[0] * 0.05)
print(f'Changing examples: {examples_to_change}')
# Changing examples: 216
examples_text_to_change = tmp_df.head(examples_to_change)['text'].tolist()
changed_df = pd.DataFrame([[i, 'spam'] for i in examples_text_to_change])
changed_df.rename({0: 'text', 1: 'label'}, axis=1, inplace=True)
left_data = data[~data['text'].isin(examples_text_to_change)]
final_df = pd.concat([left_data, changed_df])
final_df.reset_index(drop=True, inplace=True)

# Modified data distribution for spam and not-spam (ham) categories
print(final_df['label'].value_counts(normalize=True))
# ham     0.840016
# spam    0.159984

raw_texts, raw_labels = final_df["text"].values, final_df["label"].values

# Train/test split (implied by the variables used below)
raw_train_texts, raw_test_texts, raw_train_labels, raw_test_labels = train_test_split(
    raw_texts, raw_labels, test_size=0.2, random_state=42
)

# Converting labels into integers
encoder = LabelEncoder()
encoder.fit(raw_train_labels)
train_labels = encoder.transform(raw_train_labels)
test_labels = encoder.transform(raw_test_labels)

# Vectorizing text sequences using sentence-transformers
transformer = SentenceTransformer('all-mpnet-base-v2')
train_texts = transformer.encode(raw_train_texts)
test_texts = transformer.encode(raw_test_texts)

# Instantiating the model instance
model = LogisticRegression(max_iter=200)

# Wrapping the scikit-learn model with CleanLearning
cl = CleanLearning(model)

# Finding label issues in the train set
label_issues = cl.find_label_issues(X=train_texts, labels=train_labels)

# Picking the top 50 samples based on label-quality scores
identified_issues = label_issues[label_issues["is_label_issue"] == True]
lowest_quality_labels = label_issues["label_quality"].argsort()[:50].to_numpy()

# Pretty-print the label issues detected by Cleanlab
def print_as_df(index):
    return pd.DataFrame(
        {
            "text": raw_train_texts,
            "given_label": raw_train_labels,
            "predicted_label": encoder.inverse_transform(label_issues["predicted_label"]),
        },
    ).iloc[index]

print_as_df(lowest_quality_labels[:5])
```

As we can see, Cleanlab assisted us in automatically removing the incorrect labels and training a better model with the same parameters and settings.
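To quantify that improvement, a natural follow-up (hypothetical, reusing the objects defined above; the exact scores will depend on the split and model) is to compare test-set F1 for a plain baseline against the CleanLearning-trained model:

```
# Baseline: the same logistic regression trained directly on the noisy labels
baseline = LogisticRegression(max_iter=200)
baseline.fit(train_texts, train_labels)
print(f"Baseline F1: {f1_score(test_labels, baseline.predict(test_texts)):.3f}")

# CleanLearning: fits the wrapped model after handling the detected label issues
cl.fit(train_texts, train_labels)
print(f"CleanLearning F1: {f1_score(test_labels, cl.predict(test_texts)):.3f}")
```

This also puts the imported-but-unused f1_score from the listing above to work.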
In my experience, people frequently ignore data concerns in favor of building more sophisticated models to increase accuracy numbers. Improving the data, on the other hand, is a pretty simple performance win. And, thanks to products like Cleanlab, it has become really simple and convenient.

Feel free to access and play around with the above code in the Colab notebook here.

Conclusion

In conclusion, Cleanlab offers a straightforward solution to enhance data quality by addressing label inconsistencies, a crucial step in building more reliable and accurate machine learning models. By focusing on data integrity, Cleanlab simplifies the path to better performance and underscores the significance of clean data in the ever-evolving landscape of AI. Elevate your model's accuracy by investing in data quality, and explore the provided code to see the impact for yourself.

Author Bio

Prakhar has a Master's in Data Science with over 4 years of industry experience across sectors such as Retail, Healthcare, and Consumer Analytics. His research interests include Natural Language Understanding and generation, and he has published multiple research papers in reputed international publications in the relevant domain. Feel free to reach out to him on LinkedIn.
Leveraging Google Cloud for Custom Endpoint with OpenAI

Henry Habib
20 Feb 2024
8 min read
This article is an excerpt from the book, OpenAI API Cookbook, by Henry Habib. Integrate the ChatGPT API into various domains, ranging from simple wrappers to knowledge-based assistants, multi-model, and conversational applications.

Introduction

In the realm of application development, the integration of OpenAI's capabilities through custom backend endpoints on Google Cloud is a pivotal step towards unlocking intelligent solutions. This article explores the process of creating such endpoints using Google Cloud's Cloud Functions, allowing for control, customization, and the seamless integration of OpenAI's features. Through this fusion of technologies, developers can craft innovative applications that leverage the power of artificial intelligence to enhance user experiences and drive creativity in their projects.

Creating a public endpoint server that calls the OpenAI API

There are many important benefits to creating your own public endpoint server that calls the OpenAI API instead of connecting to the OpenAI API directly, the biggest being control and customization, which we will explore in this recipe and the next.

In this recipe, we will use GCP to host our public endpoint. When this endpoint is called, it will make a request to OpenAI for a slogan for an ice cream company and then return the answer to the user. This sounds simple, and almost unnecessary to make a public endpoint for, but it is the final step we need to build a truly intelligent application that leverages OpenAI. To do this, we will create a GCP resource called Cloud Functions, which we will explore later in the How it works… section of the recipe.

Getting ready

Ensure you have an OpenAI platform account with available usage credits. If you don't, please follow the Setting up your OpenAI API Playground environment recipe in Chapter 1. Furthermore, ensure you have created a GCP account. To do this, navigate to https://cloud.google.com/, then select Start Free from the top right, and follow the instructions that you see. You may need to provide a billing profile as well to create any GCP resources. Note that GCP does have a free tier, and in this recipe, we will not go above the free tier (so, essentially, you should not be billed for anything).

You may need to create a project if this is your first time logging into Google Cloud Platform. After you log in, select Select a project from the top left and then select New Project. Provide a project name and then select Create. The next recipe in this chapter also has this same requirement.

How to do it…

1. Navigate to https://console.cloud.google.com/. In the Search field at the top of the page, type in Cloud Functions and select the top choice from the drop-down menu, Cloud Functions.

Figure – Cloud Functions in the dropdown

2. Select Create Function from the top of the page. This will begin to create our custom backend endpoint and start the configuration steps. On the Configuration page, fill in the following:

Environment: Select 2nd gen from the drop-down menu.
Function name: Since we're creating a backend endpoint that will produce company slogans, the function name will be slogan_creator.
Region: Choose the environment location nearest you.
In the Trigger menu, choose HTTPS. In the Authentication sub-menu, select Allow unauthenticated invocation.
We need to check this as we are going to create a public endpoint that will be accessible from our frontend services.

Figure – Sample configuration settings of a Google Cloud Function

3. Select the Next button at the bottom of the page to move on to the Code section.

4. From the Runtime dropdown, select Python 3.12. This ensures that our backend endpoint will be coded using the Python programming language.

5. For the Entry point option, type in create_slogan. This refers to the name of the Python function that is called when the public endpoint is reached and triggered.

6. On the left-hand side menu, you will see two files: main.py and requirements.txt. Select the requirements.txt file. This will list all the Python packages that need to be installed for our Cloud Function to operate.

7. In the center of the screen, where the contents of requirements.txt are displayed, enter a new line and type in openai. This will ensure that the latest openai library package is installed. Your screen should look like what's displayed in the figure below.

Figure – Snapshot of the requirements.txt file

8. From the left-hand side menu, select main.py. Copy and paste the following code into the center of the screen (where the content for that file is displayed). These are the instructions that the public endpoint will run when it is triggered:

```
import functions_framework
from openai import OpenAI

@functions_framework.http
def create_slogan(request):
    client = OpenAI(api_key='<API Key here>')
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "You are an AI assistant that creates one slogan based on company descriptions"
            },
            {
                "role": "user",
                "content": "A company that sells ice cream"
            }
        ],
        temperature=1,
        max_tokens=256,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0
    )
    return response.choices[0].message.content
```

As you can see, it simply calls the OpenAI endpoint, requests a chat completion, and then returns the output to the user. You will also need your OpenAI API key.

9. Next, deploy the function by selecting the Deploy button at the bottom of your page.

10. Wait for your function to be fully deployed, which typically takes two minutes. You can verify whether the function has been deployed or not by observing the progress in the top left section of the page (shown in the figure below). Once it is green and checkmarked, the build is successful, and your function has been deployed.

Figure – The Cloud Function deployment page

11. Now, let's verify that our function works. Select the endpoint URL, found near URL at the top of the page. It is typically in the form https://[location]-[project-name].cloudfunctions.net/[function-name]. It is also highlighted in the figure above.

12. This will open a new web page that will trigger our custom public endpoint and return a chat completion, which, in this case, is the slogan for an ice cream business. Note that this is a public endpoint: it will work on your computer, phone, or any device connected to the internet.

Figure – Output of a Google Cloud Function
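You can also call the endpoint programmatically rather than through a browser. Here is a minimal sketch in Python (the URL shown is a made-up placeholder; substitute the endpoint displayed on your function's page):

```
import requests

# Hypothetical endpoint; replace with your function's actual URL
url = "https://us-central1-my-project.cloudfunctions.net/slogan_creator"

response = requests.get(url)
print(response.status_code)  # expect 200 on success
print(response.text)         # the generated ice cream slogan
```

This is effectively what your frontend services will do when they consume the endpoint in future recipes.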
How it works…

In this recipe, we created a public endpoint. This endpoint can be accessed by anyone (including your application in future recipes). The logic of the endpoint is simple, and something we have covered before: return a slogan for a company that sells ice cream. What's new, however, is that this is our very own public endpoint, hosted in Google Cloud using the Cloud Functions resource.

Note that we used the free tier of Google Cloud Functions, which does have limitations, such as a cap on the number of function invocations per month, limited execution time, and constrained computational resources. However, for our current purposes, these limitations are not a hindrance, allowing us to deploy and test our functions effectively without incurring costs. This setup is ideal for small-scale applications or for learning and experimentation purposes, providing a practical way to understand cloud functionalities and serverless architecture in a cost-effective manner.

Conclusion

In conclusion, the synergy between Google Cloud's infrastructure and OpenAI's capabilities offers developers a powerful platform for creating intelligent applications. By leveraging Cloud Functions to build custom backend endpoints, developers can unlock a world of possibilities for innovation and creativity. This article has provided a comprehensive guide to integrating OpenAI into Google Cloud, empowering developers to craft intelligent solutions that enhance user experiences and drive the evolution of application development. With this knowledge, developers are well-equipped to embark on their journey of building intelligent applications that push the boundaries of what is possible in the digital landscape.

Author Bio

Henry Habib is a manager at one of the world's top management consulting firms, advising F500 companies on analytics and operations, with a particular focus on building intelligent AI-driven solutions and tools to create impact. He is a passionate online instructor and educator, with a following of more than 150K paid students, and facilitates technical programs at large banks and governmental organizations. A proponent of the no-code and LLM revolution, he believes that anyone can now create powerful and intelligent applications without any deep technical skills. Henry resides in Toronto, Canada, with his wife, and enjoys reading AI research papers and playing tennis in his free time.
AI_Distilled 37: Cutting-Edge Updates and Expert Guidance

Merlyn Shelley
16 Feb 2024
10 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!👋 Hello ,“[Sovereign AI] codifies your culture, your society's intelligence, your common sense, your history – you own your own data…[Use AI to] activate your industry, build the infrastructure, as fast as you can.” - Jensen Huang, NVIDIA founder and CEO Huang strongly advocated for countries to rapidly develop their own national AI capabilities and systems. NVIDIA's dominance in GPUs positions it to be a major beneficiary of the AI revolution as its technologies are fundamental for running advanced AI applications. No wonder NVIDIA's market value recently surpassed Amazon’s. Embark on a new AI journey with AI_Distilled, a curated digest of the most recent developments in AI/ML, LLMs, NLP, GPT, and Gen AI. We’ll kick things off by tapping into the latest news and developments in the AI sector: OpenAI updates ChatGPT with memory retention Microsoft unveils new Copilot features Google updates Gemini and unveils mobile app Apple's new AI model called MGIE New open-source AI model Aya converses in 100+ languages Reka AI introduces two new state-of-the-art AI models DeepMind and USC develop new technique to improve LLMs’ reasoning abilities New open-source AI model Smaug-72B achieves top spot NVIDIA unveils new chatbot Chat with RTX AI helps identify birds and conserve an English wetland USPTO issues new guidance stating AI alone can't be named as an inventor We’ve also handpicked GPT and LLM resources and secret knowledge that’ll come in handy for your next project:  Building a Scalable Foundation Model Platform for Your Enterprise Making Bridges Between AI and Business Evaluating Large Language Models: A Guide to Benchmark Tests Code Generation Gets Smarter with Context Looking for hands-on tips and strategies straight from the developer community? We’ve got you covered with some incredible tutorials to get you started: Building a Question Answering Bot from Scratch Creating SMS Apps with Next.js and AI Assistants Harness the Power of LLMs Without GPUs Making the Switch to Open-Source Models Finally, feel free to check out our curated list of smoking hot GitHub repositories. arplaboratory/learning-to-fly time-series-foundation-models/lag-llama noahfarr/rlx uclaml/SPIN phidatahq/phidata  📥 Feedback on the Weekly EditionTake our weekly survey and get a free PDF copy of our best-selling book, "Interactive Data Visualization with Python - Second Edition." 📣 And here's the twist – we're tuning into YOUR frequency! Inspired by a reader's request, we're launching a column just for you. Got a burning question or a topic you're itching to dive into? Drop your suggestions in our content box – because your journey of discovery is our blueprint.We appreciate your input and hope you enjoy the book! Share your thoughts and opinions here! Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  Cheers,  Merlyn Shelley  Editor-in-Chief, Packt  SignUp | Advertise | Archives⚡ TechWave: AI/GPT News & AnalysisOpenAI (whose annual revenue hit $2 billion in December) has updated its ChatGPT chatbot to retain information from past conversations, allowing it to apply context from previous discussions to new chats. This could help the bot respond with more relevant, personalized replies. 
Microsoft's AI assistant Copilot has also received upgrades like improved models and image editing tools. Google updated its conversational AI too with a new mobile app and advanced model (now called Gemini). A new paid tier for Gemini Ultra provides developers and users access to more advanced features and capabilities for $20 per month.  Courtesy: OpenAI Apple's new AI model called MGIE allows editing images through natural language commands, performing tasks from color adjustments to complex manipulations. Interestingly, a new open-source AI model called Aya can converse in over 100 languages, potentially increasing access for many. Reka AI has introduced two new state-of-the-art AI models, Flash and Edge, which achieve top performance on language and vision tasks while Edge maintains a smaller size. Researchers from DeepMind and USC have developed a new technique called SELF-DISCOVER to improve LLMs’ reasoning abilities. A new open-source AI model (available for all) called Smaug-72B has achieved the top spot on the leaderboard for language models, demonstrating skills that surpass proprietary competitors.Courtesy: Apple NVIDIA's market value surpassed Amazon's for the first time since 2002 thanks to strong demand for its AI chips. The company also released a new chatbot called Chat with RTX allowing users to run personalized generative AI models locally on PCs. Chatbots aside, AI is making waves across fields. Conservationists are using AI to identify birds by their songs to help restore an English wetland. Scientists are speeding discoveries and tackling climate change better as new multimodal and smaller language models enhance technologies. That said, the USPTO has issued new guidance stating that while AI alone can't be named as an inventor on patents, humans can use AI in the invention process as long as they make a significant creative contribution.   🔮 Expert Insights from Packt Community LLMs Under the Hood – Building Models for Your Unique Use Cases [Video] - By Maxime Labonne, Denis Rothman, Abi Aryan This video course is an invaluable resource for AI developers looking to master the art of building enterprise-grade Large Language Models (LLMs). Here's why it's a must-watch:  Key Takeaways: 1. Expert-Led Guidance: Learn from industry experts Maxime Labonne, Dennis Rothman, and Abi Aryan, who bring a wealth of experience in LLM development. 2. End-to-End Coverage: Gain comprehensive insights into the entire LLM lifecycle, from architecture to deployment. 3. Advanced Skills Development: Acquire advanced skills to architect high-performing LLMs tailored to your specific business needs. 4. Hands-On Learning: Engage in practical exercises that reinforce key concepts and techniques for building, refining, and deploying LLMs. 5. Real-World Impact: Learn how to create LLMs that deliver tangible business value and solve complex organizational challenges. Course Highlights: - Making informed architecture decisions for optimal performance. - Selecting the right model types and configuring hyperparameters effectively. - Curating high-quality training data for better model outcomes. - Mastering pre-training, fine-tuning, and rigorous model evaluation techniques. - Strategies for smooth productionization, proactive monitoring, and post-deployment maintenance. By the end of this masterclass, you'll be equipped with the practical knowledge and skills needed to develop and deploy LLMs that drive real-world impact for your organization. 
Watch Here 🌟 Secret Knowledge: AI/LLM Resources🌀 Building a Scalable Foundation Model Platform for Your Enterprise: This post outlines how enterprises can provide different teams governed access to powerful foundation models through a centralized API layer. The solution described captures model usage and costs for each team to enable chargebacks. It also allows controlling access and throttling usage on a per-team basis. Building on serverless AWS services ensures the solution scales to meet demand. Whether you need transparent access for innovation or just want to understand how teams are leveraging AI, implementing a solution like this can help unlock the potential of generative AI for your whole organization.  🌀 Making Bridges Between AI and Business: This article discusses how businesses can develop an AI platform to integrate generative technologies like RAG and CRAG safely. It covers collecting data, querying knowledge bases, and using prompt engineering to guide AI models. The goal is to leverage AI's potential while avoiding risks through a blended strategy of retrieval and generation. This overview provides a solid foundation for aligning cutting-edge models with your organization's needs. 🌀 Evaluating Large Language Models: A Guide to Benchmark Tests: As AI language models become more advanced, it's important we have proper ways to assess their abilities. This article outlines several benchmark tests that evaluate language models on tasks like reasoning, code generation and more. Tests like WinoGrande, Hellaswag and GLUE provide insights into models' strengths and weaknesses. The benchmarks also allow for comparisons between different models. They give us a more complete picture of a model's skills. 🌀 Code Generation Gets Smarter with Context: Google's Codey APIs now enhance code completion and generation using Retrieval Augmented Generation, which retrieves relevant code from repositories to provide more accurate responses. This "RAG" technique allows LLMs to leverage external context. The blog post explores how RAG works and demonstrates its ability to inject appropriate code snippets. While not perfect, RAG is a useful tool to explore coding variations and adapt to custom styles when used with Codey on Vertex AI.   Partnering with Notion Ever tried Notion? It's a workspace that helps you do things better and faster.You get AI for notes and teamwork, easy drag-and-drop for content, and cool new features to help manage projects and share knowledge.Give it a Try! 🔛 Masterclass: AI/LLM Tutorials🌀 Building a Question Answering Bot from Scratch: This tutorial shows you how to create a basic question answering bot by processing text from Wikipedia, generating embeddings with OpenAI, and storing the data in Momento Vector Index. It covers initializing clients, loading and chunking data, generating embeddings, indexing the embeddings, and searching to return answers. The bot is enhanced by using GPT-3 to provide concise responses instead of raw text. Following these steps will give you hands-on experience constructing a QA system from the ground up. 🌀 Creating SMS Apps with Next.js and AI Assistants: This article shows you how to build a texting app that uses Next.js for the frontend and backend. OpenAI's GPT-3 is utilized to generate meeting invite messages. Twilio handles sending the texts. React components collect invite details. API routes fetch GPT-3 responses and send data to Twilio. It's a clever way to enhance workflows with AI. 
🌀 Harness the Power of LLMs Without GPUs: Google's new localllm tool allows developers to run large language models locally using just a CPU, eliminating the need for expensive GPUs. With localllm and Cloud Workstations, you can build AI-powered apps right in your browser-based development environment. Quantized models optimize performance on CPUs while localllm handles downloading and running the models. The post provides instructions for setting up a Cloud Workstation with localllm pre-installed to get started with this new way to develop with LLMs.

🌀 Making the Switch to Open-Source Models: Hugging Face's new Messages API allows developers to easily transition chatbots and conversational models from OpenAI to open-source options like Mixtral. The API maintains compatibility with OpenAI libraries, so your code doesn't need updating. You can also deploy these models to Hugging Face's Inference Endpoints and use them with frameworks like LangChain and LlamaIndex. This unlocks greater control, lower costs, and more transparency compared to closed-source options.

🚀 HackHub: Trending AI Tools

🌀 arplaboratory/learning-to-fly: Train end-to-end quadrotor control policies using deep reinforcement learning on a laptop in seconds.
🌀 time-series-foundation-models/lag-llama: Open-source foundation model for probabilistic time series forecasting that performs zero-shot predictions on new time series data and can be fine-tuned for specific forecasting needs.
🌀 noahfarr/rlx: Implements reinforcement learning algorithms using Apple's MLX framework, optimizing the code to run on M-series chips.
🌀 uclaml/SPIN: Implements Self-Play Fine-Tuning to enhance language models through self-supervised learning.
🌀 phidatahq/phidata: Open-source toolkit for building AI assistants using function calling.

Affiliate Disclosure: This newsletter contains affiliate links. If you buy through them, we may earn a small commission at no extra cost to you. This supports our work and helps us keep providing useful content. We only recommend products and services we think will benefit our readers. Thanks for your support!

Zapier's AI Features: A Game-Changer for Automation

Kelly Goss
15 Feb 2024
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book, Automate It with Zapier and Generative AI, by Kelly Goss. Strategize and create automated business workflows with Zapier, including AI-integrated functionalities such as the ChatGPT plugin and the OpenAI integration, to minimize repetitive tasks without using code.

Introduction
This article delves into the exciting world of Zapier's AI-driven features, including Natural Language Actions (NLA) and the ChatGPT plugin, uncovering how they can supercharge your workflow. Join us on a journey through the synergy of automation and artificial intelligence, where the tools of tomorrow are transforming the way we work today.

Technical requirements
To fully benefit from the content in this article, you will need access to a Zapier account. The Zapier Starter plan will provide you with the necessary features to effectively build and implement multi-step Zaps with the features discussed in this chapter. You must join Zapier's Early Access Program to get access to features in beta. To use the Zapier ChatGPT plugin, you must subscribe to a ChatGPT Plus account, and to use the OpenAI integration with Zapier, you must subscribe to a paid OpenAI account.

Running Zap AI Actions (beta) using the Zapier Chrome extension and the ChatGPT plugin (beta)
Zapier has integrated AI-powered or AI-related features into a few Zapier built-in apps, and more developments are underway. For example, the Zapier Chrome extension built-in app now has Natural Language Action (NLA) and AI Actions (beta) features, and the Formatter by Zapier built-in app now has a transform function named Split Text into Chunks for AI Prompt. Many of these features are currently in beta and may change. Before we explore these features, let's cover NLA and AI Actions in more detail.

NLA and AI Actions
With the NLA API from Zapier, you can use the Zapier platform to power your own products; it is optimized for products that use natural language, such as chatbots. You can read more about the NLA API and its use cases in the article at https://zapier.com/l/naturallanguage-actions. The NLA API allows you to create AI Actions to use with Zapier's 6,000+ app integrations and 30,000+ action events. You can read more about AI Actions in the article at https://help.zapier.com/hc/en-us/articles/17013994198925-ZapierAI-actions-in-other-apps.

The Zapier Chrome extension and Zapier ChatGPT plugin (beta) are two examples where NLA features and AI Actions have been introduced. We will cover these features in the next two sections. The following Zapier help articles provide more details on creating, using, and managing AI Actions:

- Create AI actions within an AI app: https://help.zapier.com/hc/en-us/articles/17014153949709
- Use AI actions within an AI app: https://help.zapier.com/hc/en-us/articles/17014427470477
- Manage your AI actions: https://help.zapier.com/hc/en-us/articles/17014677921037
- Decide if AI should guess the value of specific fields in AI actions: https://help.zapier.com/hc/en-us/articles/17014876778381

Because NLA is itself a developer-facing API, a brief hedged sketch of calling it directly follows; after that, let's start with the Zapier Chrome extension NLA actions (beta).
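The sketch below is not from the book: the base URL, the X-API-Key header, the /exposed/ listing endpoint, and the execute payload shape are assumptions based on Zapier's public NLA documentation at the time of writing, so verify them against the current docs before relying on them.

import requests

# Hypothetical sketch of calling Zapier's NLA API directly (not from the book).
# Endpoint paths, header names, and response fields are assumptions; check
# Zapier's current NLA documentation before using them.
BASE_URL = "https://nla.zapier.com/api/v1"   # assumed base URL
headers = {"X-API-Key": "your-nla-api-key"}  # assumed auth header

# 1. List the AI Actions you have exposed to the API.
actions = requests.get(f"{BASE_URL}/exposed/", headers=headers).json()
for action in actions.get("results", []):
    print(action["id"], "-", action["description"])

# 2. Execute an action with a natural-language instruction; the AI guesses
#    the action's field values from the instruction text.
action_id = actions["results"][0]["id"]
result = requests.post(
    f"{BASE_URL}/exposed/{action_id}/execute/",
    headers=headers,
    json={"instructions": "Send me a Slack DM with a motivational quote"},
)
print(result.json())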
Zapier Chrome extension NLA actions (beta)
The NLA API and the use of AI Actions are the basis for the new functionality in the Zapier Chrome extension, allowing you to run AI-powered actions right inside your Zapier Chrome extension with simple prompts. For example, you could use this functionality to draft a reply to an email.

Using AI actions with the Zapier Chrome extension
To get started, follow the instructions in Chapter 10, Other Useful Built-In Apps by Zapier, to set up the Zapier Chrome extension built-in app ahead of using the NLA functionality. Let's explore the next steps once this is set up:

1. Select the Zapier Chrome extension icon in your browser, click on the Actions (beta) tab, and then click on the Set up actions button.
2. In the popup that appears, click on the Allow button to give Zapier access to AI Actions in your account.
3. In the new browser window that appears, click on Add a new action. You can also manage NLA access to your Zapier apps by clicking on the Manage access link.
4. Set up your Zapier Chrome extension action by mapping the fields. For example, we might want to send a direct message to ourselves in Slack with a random motivational quote. The setup of the action for this example is shown in the following screenshot:

Figure 19.2 – Setting up a Zapier Chrome extension NLA action (beta)

5. Turn on your action by clicking on the Enable action button.
6. Navigate to your browser window, click on the Zapier Chrome extension icon, select your action from the dropdown, add your instructions in the Instructions field, and select the Preview button to show a preview or Run to run the action. This is shown in the following screenshot:

Figure 19.3 – Creating a Zapier Chrome extension NLA run

You can also activate field hints by selecting the Use field hints (advanced) checkbox. The result in Slack is shown in the following screenshot:

Figure 19.4 – The result of the NLA prompt using a Zapier Chrome extension run action

Now, let's review how to use the ChatGPT plugin (beta) feature to connect and run Zapier actions straight from the ChatGPT chatbot interface.

The Zapier ChatGPT plugin (beta) – running Zap actions from ChatGPT
The development and release of OpenAI's ChatGPT chatbot have encouraged users to take advantage of AI to perform a multitude of tasks that would normally have taken hours and might require specific skills, such as copywriting. Some examples of the tasks that ChatGPT is helping users to perform are as follows:

- Writing cold outreach emails
- Drafting responses to emails
- Writing blog articles and newsletters
- Researching topics and creating presentations

You can now supercharge your newfound AI-enhanced productivity by connecting Zapier to ChatGPT with the Zapier ChatGPT plugin (beta) to run AI actions that perform a variety of tasks without copying and pasting text from the ChatGPT chatbot interface. For example, you could ask ChatGPT to perform the following tasks and then run the relevant Zapier AI action:

- Write a response to an email sent by a specific person and create a draft email response in Gmail
- Write a blog article and create a new post in WordPress
- Draft a presentation and create a Google Slides presentation from a template

The article at https://zapier.com/blog/announcing-zapier-chatgpt-plugin/ presents several more use cases of the Zapier ChatGPT plugin (beta).

Important note: You must be subscribed to a ChatGPT Plus account in order to use plugins.

Using the Zapier ChatGPT plugin (beta)
Before you can use the Zapier ChatGPT plugin (beta), you must connect your ChatGPT account to your Zapier account by installing the Zapier plugin in ChatGPT, and then set up your ChatGPT AI Actions.
Comprehensive instructions can be found at https://zapier.com/blog/use-the-zapier-chatgpt-plugin/ and https://help.zapier.com/hc/en-us/articles/14058263394573.

To illustrate how the Zapier ChatGPT plugin (beta) works, we will use the example of prompting ChatGPT to write a response to an email sent by a specific person, with an associated Zapier ChatGPT plugin (beta) AI action that creates a draft email response in Gmail.

You can set up your ChatGPT plugin AI actions by navigating to https://nla.zapier.com/openai/actions/, similar to the process described in the Zapier Chrome extension NLA actions (beta) section. The following screenshot shows how the ChatGPT action would be set up:

Figure 19.5 – Setting up a ChatGPT action (beta)

The following screenshot shows the prompt Please draft an email for Joe Bloggs (joe@sabcompany.com) and let them know the report that was due Friday is ready for review today. and the result in ChatGPT:

Figure 19.6 – Using the Zapier ChatGPT plugin (beta)

Clicking on the review and confirm the draft link opens another browser window where you can alter the AI action by clicking on the Edit button or process the run request by clicking on the Run button. This is shown in the following screenshot:

Figure 19.7 – Reviewing the ChatGPT plugin (beta) action result

The result of running the action is shown in the following screenshot:

Figure 19.8 – The result of the ChatGPT plugin (beta) AI action run in Gmail

You should now have a better understanding of how and when to use the Zapier ChatGPT plugin (beta).

Conclusion
In conclusion, Zapier's integration of AI-powered features, including NLA and the ChatGPT plugin, opens a new realm of possibilities for workflow automation. With the potential to streamline tasks, generate content, and enhance productivity, these tools are transforming the way we work. As technology continues to evolve, Zapier remains at the forefront, empowering users to harness the power of AI to make their workflows more efficient and innovative than ever before. Embrace the future of automation and elevate your productivity with Zapier's AI-driven solutions.

Author Bio
Kelly Goss is a process automation specialist and company director for Solvaa, a cloud-based automation consultancy. She has worked across multiple industry verticals to provide Zapier consultancy, digital process improvement, process mapping, and process automation solutions. Kelly is one of fewer than 100 Zapier Certified Experts in the world and a speaker at multiple events related to automation.

Setting Up OpenAI Playground

Henry Habib
13 Feb 2024
9 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book, OpenAI API Cookbook, by Henry Habib. Integrate the ChatGPT API into various domains, ranging from simple wrappers to knowledge-based assistants, multi-model, and conversational applications.

Introduction
The OpenAI Playground is an interactive web-based interface designed to let users experiment with OpenAI's language models, including ChatGPT. It's a place where you can learn about the capabilities of these models by entering prompts and seeing the responses generated in real time. The platform acts as a sandbox where developers, researchers, and curious individuals alike can experiment, learn, and even prototype their ideas.

In the Playground, you have the freedom to engage in a wide range of activities. You can test different versions of the AI models, experiment with various prompts to see how the model responds, and play around with different parameters to influence the responses generated. It provides a real-time glimpse into how these powerful AI models think, react, and create based on your input.

Setting up your OpenAI Playground environment
Getting ready
Before you start, you need to create an OpenAI Platform account. Navigate to https://platform.openai.com/ and sign in to your OpenAI account. If you do not have an account, you can sign up for free with an email address. Alternatively, you can log in to OpenAI with a valid Google, Microsoft, or Apple account. Follow the instructions to complete the creation of your account. You may need to verify your identity with a valid phone number.

How to do it…
1. After you have successfully logged in, navigate to Profile in the top right-hand menu, select Personal, and then select Usage from the left-hand side menu. Alternatively, you can navigate to https://platform.openai.com/account/usage after logging in. This page shows the usage of your API but, more importantly, it shows how many credits you have available.
2. Normally, OpenAI provides a $5 credit with a new account, which you should be able to see under the Free Trial Usage section of the page. If you do have credits, proceed to step 4. If, however, you do not have any credits, you will need to upgrade and set up a paid account.
3. You need not set up a paid account if you have received free credits. If you run out of free credits, however, here is how you can set up a paid account: select Billing from the left-hand side menu and then select Overview. Then, select the Set up paid account button. You will be prompted to enter your payment details and set a dollar threshold, which can be set to any level of spend that you are comfortable with. Note that the amount of credits required to collectively execute every single recipe contained in this book is not likely to exceed $5.
4. After you have created an OpenAI Platform account, you should be able to access the Playground by selecting Playground from the top menu bar, or by navigating to https://platform.openai.com/playground.

How it works…
The OpenAI Playground interface is, in my experience, clean, intuitive, and designed to provide users easy access to OpenAI's powerful language models.
The Playground is an excellent place to learn how the models perform under different settings, allowing you to experiment with parameters such as temperature and max tokens, which influence the randomness and length of the outputs, respectively. The changes you make are instantly reflected in the model's responses, offering immediate feedback.

As shown in Figure 1.1, the Playground consists of three sections: the System Message, the Chat Log, and the Parameters. You will learn more about these three features in the Running a completion request in the OpenAI Playground recipe.

Figure 1.1 – The OpenAI Playground

Now, your Playground is set up and ready to be used. You can use it to run completion requests and see how varying your prompts and parameters affects the response from OpenAI.

Running a completion request in the OpenAI Playground
In this recipe, we will put the Playground into action and execute a completion request from OpenAI. Here, you will see the power of the OpenAI API and how it can be used to provide completions for virtually any prompt.

Getting ready
Ensure you have an OpenAI Platform account with available usage credits. If you don't, please follow the Setting up your OpenAI Playground environment recipe. All the recipes in this chapter have this same requirement.

How to do it…
Let's go ahead and start testing the model with the Playground by creating an assistant that writes marketing slogans:

1. Navigate to the OpenAI Playground.
2. In the System Message, type in the following: You are an assistant that creates marketing slogans based on descriptions of companies. Here, we are clearly instructing the model on its role and context.
3. In the Chat Log, populate the USER message with the following: A company that writes engaging mystery novels.
4. Select the Submit button at the bottom of the page.
5. You should now see a completion response from OpenAI. In my case (Figure 1.2), the response is: Unlock the Thrilling Pages of Suspense with Our Captivating Mystery Novels!

Figure 1.2 – The OpenAI Playground with prompt and completion

Note
Since OpenAI's LLMs are probabilistic, you will likely not see the same outputs as me. In fact, if you run this recipe multiple times, you will likely see different answers, and that is expected because randomness is built into the model. If you want to reproduce the same interaction in code, a hedged API sketch follows.
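This Playground session maps directly onto an API call: the System Message becomes the system role message, and the Chat Log entry becomes the user role message. The sketch below is not part of the original recipe; it uses the pre-1.0 openai Python package and assumes gpt-3.5-turbo as the model, so adapt it to whichever chat model and library version you are using.

import openai

openai.api_key = "YOUR_API_KEY"  # never hard-code keys in production code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model works
    messages=[
        # The Playground's System Message
        {
            "role": "system",
            "content": "You are an assistant that creates marketing "
                       "slogans based on descriptions of companies.",
        },
        # The Playground's Chat Log USER entry
        {
            "role": "user",
            "content": "A company that writes engaging mystery novels.",
        },
    ],
    temperature=1.0,  # the same Parameters you can adjust in the Playground
    max_tokens=60,    # caps the length of the completion
)

print(response["choices"][0]["message"]["content"])

Because the model is probabilistic, running this repeatedly will produce different slogans, just as in the Playground.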
How it works…
OpenAI's text generation models utilize a specific neural network architecture termed a transformer. Before delving deeper, let's unpack some of these terms:

- Neural network architecture: At a high level, this refers to a system inspired by the human brain's interconnected neuron structure. It's designed to recognize patterns and can be thought of as the foundational building block for many modern AI systems.
- Transformer: This is a type of neural network design that has proven particularly effective for understanding sequences, making it ideal for tasks involving human language. It focuses on the relationships between words and their context within a sentence or larger text segment.

In machine learning, unsupervised learning typically refers to training a model without any labeled data, letting the model figure out patterns on its own. However, OpenAI's methodology is more nuanced. The models are initially trained on a vast corpus of text data, supervised with various tasks. This helps them predict the next word in a sentence, for instance. Subsequent refinements are made using Reinforcement Learning from Human Feedback (RLHF), where the model is further improved based on feedback from human evaluators. Through this combination of techniques and an extensive amount of data, the model starts to capture the intricacies of human language, encompassing context, tone, humor, and even sarcasm.

In this case, the completion response is based on both the System Message and the Chat Log. The System Message serves a critical role in shaping and guiding the responses you receive from OpenAI, as it dictates the model's persona, role, tone, and context, among other attributes. In our case, the System Message contains the persona we want the model to take: You are an assistant that creates marketing slogans based on descriptions of companies.

The Chat Log contains the history of messages that the model has access to before providing its response, which contains our prompt, A company that writes engaging mystery novels.

Finally, the Parameters contain more granular settings that you can change for the model, such as temperature. These significantly change the completion response from OpenAI. We will discuss temperature and other parameters in greater detail in Chapter 3.

There's more…
It is worth noting that ChatGPT does not read and understand the meaning behind text; instead, its responses are based on statistical probabilities derived from patterns it observed during training. The model does not understand text in the same way that humans do; completions are generated from statistical associations and patterns trained into the model's neural network from a large body of similar text.

Now you know how to run completion requests with the OpenAI Playground. You can try this feature out with your own prompts and see what completions you get. Try creative prompts such as write me a song about lightbulbs, or more professional prompts such as explain Newton's first law.

Conclusion
In conclusion, the OpenAI Playground offers a dynamic environment for exploring the capabilities of language models like ChatGPT. By setting up your account and navigating through its features, you can unlock endless possibilities for creativity, learning, and innovation. Experiment with prompts, adjust parameters, and observe real-time responses to gain insights into AI's potential. Whether you're a developer, researcher, or curious individual, the Playground provides a sandbox for unleashing your imagination and understanding AI's intricacies. With each completion request, you delve deeper into the world of artificial intelligence, discovering its nuances and expanding your horizons. Start your journey today and witness the power of AI in action.

Author Bio
Henry Habib is a manager at one of the world's top management consulting firms, advising F500 companies on analytics and operations, with a particular focus on building intelligent AI-driven solutions and tools to create impact. He is a passionate online instructor and educator, with a following of more than 150K paid students, and facilitates technical programs at large banks and governmental organizations. A proponent of the no-code and LLM revolution, he believes that anyone can now create powerful and intelligent applications without deep technical skills. Henry resides in Toronto, Canada, with his wife, and enjoys reading AI research papers and playing tennis in his free time.

Leveraging ChatGPT and GPT-3

Deborah A. Dahl
12 Feb 2024
8 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

This article is an excerpt from the book, Natural Language Understanding with Python, by Deborah A. Dahl. Combine natural language technology, deep learning, and large language models to create human-like language comprehension in computer systems.

Introduction
In the world of artificial intelligence, ChatGPT stands as a versatile conversational agent, adept at handling generic information interactions. While customization can be a challenge at present, ChatGPT offers a unique avenue for developers and AI enthusiasts alike. Beyond chat-based dialogue, it holds the potential to streamline the often time-consuming process of generating training data for conventional applications. In this article, we delve into the capabilities of ChatGPT and explore the journey of fine-tuning GPT-3 for specific use cases. By the end, you'll be equipped to harness the power of these language models, from data generation to AI customization, in your projects. Let's embark on this exciting AI journey together.

ChatGPT
ChatGPT (https://openai.com/blog/chatgpt/) is a system that can interact with users about generic information in a very capable way. Although, at the time of writing, it is hard to customize ChatGPT for specific applications, it can be useful for purposes other than customized natural language applications. For example, it can very easily be used to generate training data for a conventional application. If we wanted to develop a banking application using some of the techniques discussed earlier in this book, we would need training data to provide the system with examples of how users might ask the system questions. Typically, this involves a process of collecting actual user input, which could be very time-consuming. ChatGPT could be used to generate training data instead, simply by asking it for examples. For example, for the prompt give me 10 examples of how someone might ask for their checking balance, ChatGPT responded with the sentences in Figure 11.3:

Figure 11.3 – GPT-3 generated training data for a banking application

Most of these seem like pretty reasonable queries about a checking account, but some of them don't sound very natural. For that reason, data generated in this way always needs to be reviewed. For example, a developer might decide not to include the second-to-last example in a training set because it sounds stilted, but overall, this technique has the potential to save developers quite a bit of time.

Applying GPT-3
Another well-known LLM, GPT-3, can also be fine-tuned with application-specific data, which should result in better performance. To do this, you need an OpenAI key because using GPT-3 is a paid service. Both fine-tuning to prepare the model and using the fine-tuned model to process new data at inference time will incur a cost, so it is important to verify that the training process is performing as expected before training with a large dataset and incurring the associated expense.

OpenAI recommends the following steps to fine-tune a GPT-3 model:

1. Sign up for an account at https://openai.com/ and obtain an API key. The API key will be used to track your usage and charge your account accordingly.
2. Install the OpenAI command-line interface (CLI) with the following command:
pip install --upgrade openai

This command can be used at a terminal prompt in Unix-like systems (some developers have reported problems with Windows or macOS). Alternatively, you can install the OpenAI package for use in a Jupyter notebook with the following code:

!pip install --upgrade openai

All of the following examples assume that the code is running in a Jupyter notebook:

1. Set your API key:

api_key = "<your API key>"
openai.api_key = api_key

2. The next step is to specify the training data that you will use for fine-tuning GPT-3 for your application. This is very similar to the process of training any NLP system; however, GPT-3 has a specific format that must be used for training data. This format uses a syntax called JSONL, where every line is an independent JSON expression. For example, if we want to fine-tune GPT-3 to classify movie reviews, a couple of data items would look like the following (omitting some of the text for clarity):

{"prompt":"this film is extraordinarily horrendous and i'm not going to waste any more words on it . ","completion":" negative"}
{"prompt":"9 : its pathetic attempt at \" improving \" on a shakespeare classic . 8 : its just another piece of teen fluff . 7 : kids in high school are not that witty . … ","completion":" negative"}
{"prompt":"claire danes , giovanni ribisi , and omar epps make a likable trio of protagonists , …","completion":" negative"}

Each item consists of a JSON dict with two keys, prompt and completion. prompt is the text to be classified, and completion is the correct classification. All three of these items are negative reviews, so the completions are all marked as negative.

It might not always be convenient to get your data into this format if it is already in another format, but OpenAI provides a useful tool for converting other formats into JSONL. It accepts a wide range of input formats, such as CSV, TSV, XLSX, and JSON, with the only requirement being that the input contains two columns with prompt and completion headers. Table 11.2 shows a few cells from an Excel spreadsheet with some movie reviews as an example:

prompt | completion
kolya is one of the richest films i've seen in some time . zdenek sverak plays a confirmed old bachelor ( who's likely to remain so ) , who finds his life as a czech cellist increasingly impacted by the five-year old boy that he's taking care of … | positive
this three hour movie opens up with a view of singer/guitar player/musician/composer frank zappa rehearsing with his fellow band members . all the rest displays a compilation of footage , mostly from the concert at the palladium in new york city , halloween 1979 … | positive
`strange days' chronicles the last two days of 1999 in los angeles . as the locals gear up for the new millenium , lenny nero ( ralph fiennes ) goes about his business … | positive

Table 11.2 – Movie review data for fine-tuning GPT-3

To convert one of these alternative formats into JSONL, you can use the fine_tunes.prepare_data tool, as shown here, assuming that your data is contained in the movies.csv file:

!openai tools fine_tunes.prepare_data -f ./movies.csv -q

The fine_tunes.prepare_data utility will create a JSONL file of the data and will also provide some diagnostic information that can help improve the data. The most important diagnostic that it provides is whether or not the amount of data is sufficient; OpenAI recommends several hundred examples for good performance. Other diagnostics include various types of formatting information, such as the separators between the prompts and the completions.
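To make the JSONL format concrete, here is a small sketch, not from the book, that performs the same CSV-to-JSONL conversion by hand using only the Python standard library. It reproduces the basic prompt/completion structure shown above; prepare_data additionally applies separator and whitespace adjustments that are omitted here.

import csv
import json

def csv_to_jsonl(csv_path, jsonl_path):
    """Convert a CSV with 'prompt' and 'completion' columns to JSONL."""
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(jsonl_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            record = {
                "prompt": row["prompt"],
                # The leading space before the label matches the book's examples.
                "completion": " " + row["completion"].strip(),
            }
            dst.write(json.dumps(record) + "\n")

# Written to a separate file so it doesn't clash with prepare_data's output.
csv_to_jsonl("./movies.csv", "./movies_manual.jsonl")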
After the data is correctly formatted, you can upload it to your OpenAI account and save the file ID:

file_name = "./movies_prepared.jsonl"
upload_response = openai.File.create(
    file=open(file_name, "rb"),
    purpose='fine-tune'
)
file_id = upload_response.id

The next step is to create and save a fine-tuned model. There are several different OpenAI models that can be used. The one we're using here, ada, is the fastest and least expensive, and does a good job on many classification tasks:

fine_tune_response = openai.FineTune.create(training_file=file_id, model="ada")
# Fine-tuning runs asynchronously; fine_tuned_model may be None until the job
# completes (you can poll its status with openai.FineTune.retrieve).
fine_tuned_model = fine_tune_response.fine_tuned_model

Finally, we can test the model with a new prompt:

answer = openai.Completion.create(
    model = fine_tuned_model,
    prompt = " I don't like this movie ",
    max_tokens = 10,  # Change the number of tokens for a longer completion
    temperature = 0
)
answer['choices'][0]['text']

In this example, since we are only using a few fine-tuning utterances, the results will not be very good. You are encouraged to experiment with larger amounts of training data.

Conclusion
In conclusion, ChatGPT and GPT-3 offer invaluable tools for AI enthusiasts and developers alike. From data generation to fine-tuning for specific applications, these models present a world of possibilities. As we've seen, ChatGPT can expedite the process of creating training data, while GPT-3's customization can elevate the performance of your AI applications. As the field of artificial intelligence continues to evolve, these models hold immense promise. So, whether you're looking to streamline your development process or take your AI solutions to the next level, the journey with ChatGPT and GPT-3 is an exciting one, filled with untapped potential. Embrace the future of AI with confidence and innovation.

Author Bio
Deborah A. Dahl is the principal at Conversational Technologies, with over 30 years of experience in natural language understanding technology. She has developed numerous natural language processing systems for research, commercial, and government applications, including a system for NASA, and speech and natural language components on Android. She has taught over 20 workshops on natural language processing, consulted on many natural language processing applications for her customers, and written over 75 technical papers. This is Deborah's fourth book on natural language understanding topics. Deborah has a PhD in linguistics from the University of Minnesota and postdoctoral studies in cognitive science from the University of Pennsylvania.

ChatGPT for Business Intelligence

Chaitanya Yadav
08 Feb 2024
7 min read
Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

Introduction
ChatGPT is a large language model (LLM) chatbot created by OpenAI. It is a powerful tool that can be applied to many different activities, including business intelligence (BI).

Business intelligence is the act of gathering, analyzing, and interpreting data to derive insights that can be applied to improve business decisions. Many BI-related processes can be automated with ChatGPT, freeing up BI analysts to work on more strategic projects.

What is Business Intelligence?
Business intelligence is the process of converting unprocessed data into insights that can be used to make decisions. BI gives businesses a competitive edge by helping them make well-informed decisions and streamline operations, and it involves data gathering, processing, reporting, and visualization. ChatGPT can be applied to BI in a number of ways, including the following:

- Data preparation and cleaning: Data preparation and cleaning, frequently a tedious and time-consuming activity, can be automated using ChatGPT. ChatGPT can find and fix data problems, and can also transform data into a format that BI tools can use.
- Data analysis: ChatGPT can be used for data analysis and pattern recognition, and to create predictive models for forecasting future outcomes.
- Data visualization: Data visualizations, such as graphs and charts, can be created using ChatGPT and utilized to share findings with others.
- Improved efficiency: By streamlining operations and helping firms identify areas for improvement, BI makes them more productive and cost-effective.
- Real-time monitoring: Businesses may use BI to track performance in real time, which makes it simpler to respond to changes and trends as they emerge.

Examples:

Step 1: Import Libraries and Set API Key
In the first step, we begin by installing and importing the openai package:

pip install openai

Now you will need to enter your OpenAI API key before running the code, as shown below. With the help of this code, you will be able to use the OpenAI API to ask queries about business intelligence and receive answers from ChatGPT.

import openai

# Enter your API key here
api_key = "sk-..."  # replace with your own secret key
openai.api_key = api_key

Step 2: Define the Interaction Function
Here, we define a Python function called ask_chatgpt, which takes a question as input and interacts with ChatGPT through the OpenAI API. Inside this function, we make a request to the OpenAI API with the question provided as the prompt:

def ask_chatgpt(question):
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=f"ChatGPT for Business Intelligence: {question}",
        max_tokens=150  # Adjust according to your needs
    )
    return response.choices[0].text

Step 3: Define Example Questions
We create a collection of sample questions in this step. These are the questions and prompts we will send to ChatGPT:

questions = [
    "What is Business Intelligence?",
    "How can BI benefit businesses?",
    "Can you provide an example of data analysis in BI?",
]

Step 4: Interact with ChatGPT and Print Responses
In the final step, we loop over the set of sample questions.
The ask_chatgpt function is used to communicate with ChatGPT and receive a response for each query. The responses are then printed to the console:

# Now we interact with ChatGPT and print the responses
for question in questions:
    response = ask_chatgpt(question)
    print(f"Q: {question}")
    print(f"A: {response}\n")

Output:

Q: What is Business Intelligence?
A: BI transforms data into actionable insights for informed decision-making.
Q: How can BI benefit businesses?
A: BI enhances decision-making, efficiency, and customer experiences.
Q: Can you provide an example of data analysis in BI?
A: A business might use BI to analyze sales data to identify trends, target specific customer segments, and optimize marketing campaigns.

In this output, ChatGPT answers each query accurately, showcasing its capacity to understand and explain business intelligence concepts. The responses highlight key aspects of BI, emphasizing its role in turning data into insightful knowledge that informs strategic business decisions.

Example of the ChatGPT Playground
We can use the ChatGPT Playground to get hands-on experience with ChatGPT for BI. The Playground is a web-based interface through which you can communicate with ChatGPT to generate text, translate languages, produce other kinds of creative output, and receive insightful responses to your queries.

To start using the Playground, simply go to the ChatGPT website and click on the "Playground" button. Once you're in the Playground, you can start entering your prompts and queries, and ChatGPT will generate text in response.

For example, we can use ChatGPT to analyze sales data. We can simply type in the following prompt:

Analyze the sales data for the past quarter and identify any trends or patterns.

ChatGPT will then generate a response that analyzes the sales data and identifies any trends or patterns. For example, ChatGPT might respond with the following:

The sales data for the past quarter shows that sales of product A have increased by 15%, while sales of product B have decreased by 10%. This suggests that there is a growing demand for product A, and a declining demand for product B.

Best Practices for Harnessing ChatGPT in Business Intelligence
- Understand the limitations: ChatGPT is an extensive language model, but it is not a human analyst. It can produce creative and engaging text, but it isn't perfect: it can make mistakes, and it can be biased. To minimize the risk of errors, be aware of these limitations and use ChatGPT accordingly.
- Protect confidential information: ChatGPT is a cloud-based service, so be mindful of the data you share with it. Do not divulge any classified or sensitive information to ChatGPT.
- Verify information: ChatGPT is trained on a massive database of text and code; to make sure its output is accurate and up to date, cross-check it against other sources of information.
- Provide feedback: ChatGPT is still being developed, and there is always room for improvement. If you are not satisfied with the output ChatGPT generates, give feedback to help the developers improve the model.

A sketch of grounding the sales-analysis prompt in your own data follows.
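The Playground example assumes ChatGPT already knows your sales figures; in practice you would summarize your own data and include it in the prompt. The following sketch is illustrative and not part of the original article: the CSV file and its column names (product, quarter, revenue) are assumptions, and it reuses the same text-davinci-002 engine as the earlier steps.

import openai
import pandas as pd

openai.api_key = "sk-..."  # replace with your own secret key

# Summarize quarterly revenue per product from a local CSV.
# The file name and column names here are hypothetical; adapt them
# to your own dataset.
sales = pd.read_csv("sales.csv")
summary = (
    sales.groupby(["quarter", "product"])["revenue"]
    .sum()
    .to_string()
)

prompt = (
    "Analyze the following quarterly sales figures and identify any "
    f"trends or patterns:\n{summary}"
)

response = openai.Completion.create(
    engine="text-davinci-002",  # same engine as the article's examples
    prompt=prompt,
    max_tokens=200,
)
print(response.choices[0].text)

Grounding the prompt in real figures like this keeps the analysis tied to your data rather than to text the model generates from its training distribution.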
Conclusion
ChatGPT can automate lengthy BI operations, including data cleaning, preparation, and analysis. In addition, it can help produce data visualizations, predictive models, and real-time monitoring, all essential elements of efficient business intelligence. By automating these tasks, ChatGPT allows BI analysts to work on more complex and significant initiatives, which in turn improves company productivity and efficiency.

This article described how to include ChatGPT in the BI process in a step-by-step manner: importing libraries, defining an interaction function, and running sample queries. It also highlighted the interactive aspect of the ChatGPT Playground, where users can engage with ChatGPT directly to analyze data, ask questions, and get intelligent answers. Overall, ChatGPT's capacity to automate operations and surface useful information makes it a valuable tool in the constantly changing field of business analytics and decision-making.

Author Bio
Chaitanya Yadav is a data analyst, machine learning, and cloud computing expert with a passion for technology and education. He has a proven track record of success in using technology to solve real-world problems and help others learn and grow. He is skilled in a wide range of technologies, including SQL, Python, data visualization tools like Power BI, and cloud computing platforms like Google Cloud Platform. He is also 22x Multicloud Certified.

In addition to his technical skills, he is also a brilliant content creator, blog writer, and book reviewer. He is the co-founder of a tech community called "CS Infostics," which is dedicated to sharing opportunities to learn and grow in the field of IT.