
How-To Tutorials - AI Tools

89 Articles

Hands-On Vector Similarity Search with Milvus

Alan Bernardo Palacio
21 Aug 2023
14 min read
Introduction

In the realm of AI and machine learning, effective management of vast, high-dimensional vector data is critical. Milvus, an open-source vector database, tackles this challenge with advanced indexing for fast similarity search and analytics, catering to AI-driven applications.

Milvus is built around vectorization and quantization, converting complex raw data into compact high-dimensional vectors for efficient indexing and querying. Its applications span recommendation systems, image recognition, natural language processing, and bioinformatics, boosting both result precision and overall efficiency.

Milvus impresses not just with its capabilities but also with its design flexibility, supporting diverse storage backends such as MinIO, Ceph, AWS S3, and Google Cloud Storage, alongside etcd for metadata storage.

Deploying Milvus locally is straightforward with Docker Compose, which manages multi-container Docker applications and is well suited to Milvus' distributed architecture. The next section details deploying Milvus locally via Docker Compose; the simplicity of this approach underscores Milvus' user-centric design, delivering robust capabilities within an accessible framework. Let's get started.

Standalone Milvus with Docker Compose

Setting up a local instance of Milvus involves a multi-service architecture consisting of the Milvus server, metadata storage, and an object storage server. Docker Compose provides an ideal way to manage such a configuration conveniently and efficiently.

The Docker Compose file for deploying Milvus locally consists of three services: etcd, minio, and milvus itself. etcd provides metadata storage, minio functions as the object storage server, and milvus handles vector data processing and search. By specifying service dependencies and environment variables, we can establish seamless communication between these components. The milvus, etcd, and minio services run in isolated containers, ensuring operational isolation and enhanced security.

To launch the Milvus application, all you need to do is execute the Docker Compose file. Docker Compose manages the initialization sequence based on service dependencies and launches the entire stack with a single command.
The following docker-compose.yml specifies all of the aforementioned components:

```yaml
version: '3'
services:
  etcd:
    container_name: milvus-etcd
    image: quay.io/coreos/etcd:v3.5.5
    environment:
      - ETCD_AUTO_COMPACTION_MODE=revision
      - ETCD_AUTO_COMPACTION_RETENTION=1000
      - ETCD_QUOTA_BACKEND_BYTES=4294967296
      - ETCD_SNAPSHOT_COUNT=50000
    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
  minio:
    container_name: milvus-minio
    image: minio/minio:RELEASE.2022-03-17T06-34-49Z
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
    ports:
      - "9001:9001"
      - "9000:9000"
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
  milvus:
    container_name: milvus-standalone
    image: milvusdb/milvus:v2.3.0-beta
    command: ["milvus", "run", "standalone"]
    environment:
      ETCD_ENDPOINTS: etcd:2379
      MINIO_ADDRESS: minio:9000
    ports:
      - "19530:19530"
      - "9091:9091"
    depends_on:
      - "etcd"
      - "minio"
```

After we have defined the Compose file, we can deploy the services by first running docker compose build and then running docker compose up -d.
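Before moving on, it can help to confirm that the standalone instance is actually reachable. The snippet below is a minimal sketch, assuming the stack above is running on localhost with its default ports and that pymilvus is installed:

```python
# Minimal connectivity check for the standalone Milvus deployment above.
# Assumes the Docker Compose stack is up and `pip install pymilvus` has been run.
from pymilvus import connections, utility

# Connect to the Milvus server exposed on the default port 19530
connections.connect("default", host="localhost", port="19530")

# A fresh deployment should simply return an empty list of collections
print("Collections:", utility.list_collections())
```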
In the next section, we'll move on to a practical example: creating sentence embeddings. This process leverages Transformer models to convert sentences into high-dimensional vectors. These embeddings capture the semantic essence of the sentences and serve as an excellent demonstration of the sort of data that can be stored and processed with Milvus.

Creating sentence embeddings

Creating sentence embeddings involves a few steps: preparing your environment, importing the necessary libraries, and finally generating and processing the embeddings. We'll walk through each step in this section, assuming the code is executed in a Python environment where the Milvus database is running.

First, let's start with the requirements.txt file:

```
transformers==4.25.1
pymilvus==2.1.0
torch==2.0.1
protobuf==3.18.0
```

Now let's import the packages:

```python
import numpy as np
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
from pymilvus import (
    connections,
    utility,
    FieldSchema, CollectionSchema, DataType,
    Collection,
)
```

Here, we're importing all the necessary libraries for our task. numpy and torch are used for mathematical operations and transformations, transformers handles the language-model-related tasks, and pymilvus is for interacting with the Milvus server.

The next code block sets up the transformer model we will be using and lists the sentences for which we will generate embeddings. We first specify a model checkpoint ("sentence-transformers/all-MiniLM-L6-v2") that will serve as our base model for sentence embeddings, then define a list of sentences to generate embeddings for. To facilitate our task, we initialize a tokenizer and model from the checkpoint: the tokenizer converts our sentences into tokens suitable for the model, and the model uses these tokens to generate embeddings:

```python
# Transformer model checkpoint
model_ckpt = "sentence-transformers/all-MiniLM-L6-v2"

# Sentences for which we will compute embeddings
sentences = [
    "I took my dog for a walk",
    "Today is going to rain",
    "I took my cat for a walk",
]

# Initialize tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModel.from_pretrained(model_ckpt)
```

The model gives us token-level embeddings, but we need to aggregate them to obtain sentence-level embeddings. For this, we'll use a mean pooling operation, defined next.

Mean Pooling Function Definition

This function aggregates the token embeddings into sentence embeddings. The token embeddings and the attention mask (which indicates which tokens are not padding and should be considered for pooling) are passed as inputs. The function performs a weighted average of the token embeddings according to the attention mask, ensuring that padding tokens are ignored, and returns the aggregated sentence embeddings:

```python
# Mean pooling function to aggregate token embeddings into sentence embeddings
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    input_mask_expanded = (
        attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    )
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(
        input_mask_expanded.sum(1), min=1e-9
    )
```

Generating Sentence Embeddings

With the pooling function defined, we're now equipped to generate the actual sentence embeddings. The snippet below first tokenizes the sentences, padding and truncating them as necessary, then uses the transformer model to generate token embeddings. These token embeddings are pooled with the mean pooling function to create sentence embeddings, which are normalized for consistency and finally converted into Python lists to make them compatible with Milvus:

```python
# Tokenize the sentences and compute their embeddings
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])

# Normalize the embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

# Convert the sentence embeddings into a format suitable for Milvus
embeddings = sentence_embeddings.numpy().tolist()
```

These embeddings are now processed and ready for insertion into Milvus.

Inserting vector embeddings into Milvus

We're now ready to interact with Milvus.
In this section, we will connect to our locally deployed Milvus server, define a schema for our data, and create a collection in the Milvus database to store our sentence embeddings:

```python
# Establish a connection to the Milvus server
connections.connect("default", host="localhost", port="19530")

# Define the schema for our collection
fields = [
    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema(name="sentences", dtype=DataType.VARCHAR, is_primary=False,
                description="The actual sentences", max_length=256),
    FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, is_primary=False,
                description="The sentence embeddings",
                dim=sentence_embeddings.size()[1]),
]
schema = CollectionSchema(fields, "A collection to store sentence embeddings")
```

We establish a connection to the Milvus server and then define the schema for our collection. The schema includes an auto-generated primary key field, a field for the sentences, and a field for the sentence embeddings.

With our connection established and schema defined, we can now create our collection, insert our data, and build an index to enable efficient search operations.

Create Collection, Insert Data, and Create Index

In this snippet, we first create the collection in Milvus using the previously defined schema. We then organize our data to match the schema and insert it into the collection; because primary keys are generated as auto IDs, we don't need to supply them. After the data is inserted, we create an index on the embeddings field to optimize search operations. Finally, we print the number of entities in the collection to confirm the insertion was successful:

```python
# Create the collection in Milvus
sentence_embeddings_collection = Collection("sentence_embeddings", schema)

# Organize our data to match our collection's schema
entities = [
    sentences,   # The actual sentences
    embeddings,  # The sentence embeddings
]

# Insert our data into the collection
insert_result = sentence_embeddings_collection.insert(entities)

# Create an index to make future search queries faster
index = {
    "index_type": "IVF_FLAT",
    "metric_type": "L2",
    "params": {"nlist": 128},
}
sentence_embeddings_collection.create_index("embeddings", index)

print(f"Number of entities in Milvus: {sentence_embeddings_collection.num_entities}")
```

This way, the sentences and their corresponding embeddings are stored in a Milvus collection, ready to be used for similarity searches or other tasks.

Now that we've stored our embeddings in Milvus, let's make use of them. We will search the collection for vectors similar to sample vectors.

Search Based on Vector Similarity

First, we load the data from our collection into memory, which is required before conducting a search or a query:

```python
# Load the data into memory
sentence_embeddings_collection.load()
```
Next, we define the vectors we want to find similar matches for and set the search parameters, specifying the metric used to calculate similarity (L2 distance in this case) and the number of clusters to examine during the search (nprobe). The search is then performed and the results are printed:

```python
# Vectors to search
vectors_to_search = embeddings[-2:]
search_params = {
    "metric_type": "L2",
    "params": {"nprobe": 10},
}

# Perform the search
result = sentence_embeddings_collection.search(vectors_to_search, "embeddings",
                                               search_params, limit=3,
                                               output_fields=["sentences"])

# Print the search results
for hits in result:
    for hit in hits:
        print(f"hit: {hit}, sentence field: {hit.entity.get('sentences')}")
```

Here, we're searching for the embeddings most similar to the last two embeddings in our list. The results are limited to the top 3 matches per query vector, and the corresponding sentences of these matches are printed out.

Once we're done with our data, it's good practice to clean up. Next, we'll delete entities from our collection using their primary keys.

Delete Entities by Primary Key

This code first gets the primary keys of the entities we want to delete. We query the collection before the deletion to show the entities that will be deleted, perform the deletion, and then run the same query again to confirm that the entities are gone:

```python
# Get the primary keys of the entities we want to delete
ids = insert_result.primary_keys
expr = f'pk in [{ids[0]}, {ids[1]}]'

# Query before deletion
result = sentence_embeddings_collection.query(expr=expr, output_fields=["sentences", "embeddings"])
print(f"Query before delete by expr=`{expr}` -> result: \n-{result[0]}\n-{result[1]}\n")

# Delete entities
sentence_embeddings_collection.delete(expr)

# Query after deletion
result = sentence_embeddings_collection.query(expr=expr, output_fields=["sentences", "embeddings"])
print(f"Query after delete by expr=`{expr}` -> result: {result}\n")
```

Here, we're deleting the entities corresponding to the first two primary keys in our collection.

Finally, we drop the entire collection from the Milvus server:

```python
# Drop the collection
utility.drop_collection("sentence_embeddings")
```

Conclusion

Congratulations on completing this hands-on tutorial with Milvus! You've learned how to harness the power of an open-source vector database that simplifies and accelerates AI and ML applications. Throughout this journey, you set up Milvus locally using Docker Compose, transformed sentences into high-dimensional embeddings, and conducted vector similarity searches for practical use cases.

Milvus' advanced indexing techniques let you efficiently store, search, and analyze large volumes of vector data. Its user-friendly design and seamless integration capabilities ensure that you can leverage its powerful features without unnecessary complexity.

As you continue exploring Milvus, you'll uncover even more possibilities for its application in diverse fields, such as recommendation systems, image recognition, and natural language processing.
The high-performance similarity search and analytics offered by Milvus open doors to cutting-edge AI-driven solutions. With your newfound expertise in Milvus, you are equipped to embark on your own AI adventures, leveraging the potential of vector databases to tackle real-world challenges. Keep experimenting, innovating, and building AI-driven applications that push the boundaries of what's possible. Happy coding!

Author Bio

Alan Bernardo Palacio is a data scientist and an engineer with vast experience in different engineering fields. His focus has been the development and application of state-of-the-art data products and algorithms in several industries. He has worked for companies such as Ernst and Young and Globant, and now holds a data engineer position at Ebiquity Media, helping the company create a scalable data pipeline. Alan graduated with a Mechanical Engineering degree from the National University of Tucuman in 2015, participated as a founder in startups, and later earned a Master's degree from the faculty of Mathematics at the Autonomous University of Barcelona in 2017. Originally from Argentina, he now works and resides in the Netherlands.

LinkedIn


Getting Started with Google MakerSuite

Anubhav Singh
08 Aug 2023
14 min read
MakerSuite, essentially a developer tool, enables everyone with a Google Account to access the power of the PaLM API, with a focus on building products and services with it. The MakerSuite interface allows rapid prototyping and testing of the configurations used when interacting with the PaLM API. Once users are satisfied with the configurations, they can easily port them to their backend codebases.

We're now ready to dive into exploring the MakerSuite interface. To get started, head over to https://makersuite.google.com/ in your browser. Make sure you're logged in to your Google Account to be able to access the interface. You'll be able to see the welcome dashboard.

The options available on MakerSuite as of the date of writing this article are Text Prompts, Data Prompts, and Chat Prompts. Let's take a brief look at what each of these does.

Text Prompts

Text prompts are the most basic and customizable form of prompts that can be provided to the models. You can set them to any task or ask any question in a stateless manner: the user prompt and input are ingested by the model every time it is run, and the model itself does not hold any context. Thus, text prompts are a great starting point and can be made as deterministic or creative in their output as required by the user.

Let us create a Text prompt in MakerSuite. Click on the Create button on the Text prompt card and you'll be presented with the prompt testing UI. At the top, MakerSuite allows users to save their prompts by name. It also provides starter samples which allow one to quickly test and understand how the product works. Below that is the main working area, where users can define their own prompts and, by adjusting the configuration parameters of the model at the bottom, run the prompts to produce an output.

First, click on the pencil icon on the top left to give this prompt a suitable name. For our example, we'll be building a prompt that asks the model to produce the etymology of any given word. We're using the following values:

- name: Word Etymology
- description: Asking PaLM API to provide word etymologies.

Click on Save to save these values and close the input modal. Kindly note that these values do not affect the model in any manner and are simply present for user convenience.

Now, in the main working area below, we'll write the required prompt. For our example, we write the prompt given below:

```
For any given word that follows, provide its etymology in no more than 300 words.

Aeroplane.
Etymology:
```

Now, let's adjust the model parameters. Click on the button next to the Run button to change the model settings. For our example, we set the following values:

- model: Text Bison (use default)
- Temperature: 0 (word etymologies are dry facts and are not expected to be creative)
- Add stop sequence: use default
- Max outputs: 1 (word etymologies are usually not going to benefit from variations in telling them)

Depending on the use case you're building your generative AI-backed software for, you may wish to change the Safety settings of the model response. To do so, click on the Edit safety settings button. You can see the available options and change them as per your requirements. For our use case, we shall leave them at their defaults.

At the bottom of the configuration menu, you can choose to adjust further advanced settings of the model output.
We shall leave these options on their defaults for now.

Great, we're now all set to run the prompt. Click on the Run button at the bottom and wait for the model to produce the output. In our case, the model outputs:

```
The word "aeroplane" is derived from the Greek words "aēr" (air) and "planē" (to wander). The term was first used in the 1860s to describe a type of flying machine that was powered by a steam engine. In 1903, the Wright brothers made the first successful flight of a powered aeroplane.
```

Note that, for you, the response might come out slightly different due to the inherently non-deterministic nature of generative AI.

At this point, you might want to experiment by erasing the model output and running the prompt again. Does the output change? Re-run it several times to observe changes in the model output. Then, try adjusting the values of the model configuration and see how that affects the output. If you set the temperature to 0, you will notice that the model likely produces the same output every time. Try increasing it to 1 and then run the model a few times. Does the output remain the same in each iteration now? It is highly likely that you'll observe the model output changing every time you re-run the prompt.

It is interesting to note that the prompt you provide to the model does not contain any examples of how the model should respond. This method of using the model is called zero-shot learning, in which the trained model is asked to produce predictions for an input that it may not have seen before. In our example, it is the task of providing word etymologies, which the model may or may not have been trained on.

This makes us wonder: if we gave the model an input that it has definitely not been trained on, is it likely to produce the correct response? Let us try this out. Change the word in our etymology prompt example to "xakoozifictation" and hit the Run button to see what the model outputs. Instead of telling us that the word does not exist and thus has no meaning, the model attempts to produce an etymology for it. The output we got was:

```
Xakoozifictation is a portmanteau of the words "xakooz" and "ification". Xakooz is a nonsense word created by combining the sounds of the words "chaos" and "ooze". ification is a suffix that can be added to verbs to create nouns that describe the process of doing something. In this case, xakoozifictation means the process of making something chaotic or oozy.
```

What we observe here is called "model hallucination", a phenomenon common among large language models wherein the model produces output that is contrary to common logic or inaccurate with respect to real-world knowledge. It is highly recommended to read more about model hallucinations in the "Challenges in working with LLMs" section.

Let us continue our discussion of zero-shot learning. We saw that when we provide only a prompt to the model and no examples of how to produce responses, the model tries its best to produce a response, and in most general cases it succeeds. However, if we were to provide some examples of the expected input-output pairs, could we program the model to perform more accurately and do away with the hallucinations? Let us give this a try by providing some input-output examples to the model.
Update your model prompt to the following:

```
For any given word that follows, provide its etymology in no more than 300 words.

Examples:

Word: aeroplane
Reasoning: Since it's a valid English word, produce an output.
Etymology: Aeroplane is a compound word formed from the Greek roots "aer" (air) and "planus" (flat).

Word: balloon
Reasoning: Since it's a valid English word, produce an output.
Etymology: The word balloon comes from the Italian word pallone, which means ball. The Italian word is derived from the Latin word ballare, which means to dance.

Word: oungopoloctous
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available

Word: kaploxicating
Reasoning: Since this is not a valid English word, do not produce an etymology and say it's "Not available".
Etymology: Not available

Word: xakoozifictation
Etymology:
```

In the above prompt, we have provided two examples of words that exist and two examples of words that do not. We expect the model to learn from these examples and produce output accordingly. Hit Run to see the output of the model; remember to set the temperature configuration back to 0.

You will see that the model now responds with "Not available" for non-existent words and with etymologies only for words that exist in the English dictionary. Hence, by providing a few examples of how we expect the model to behave, we were able to stop the model hallucination problem.

This method of providing some samples of the expected input-output pairs to the model in the prompt is called few-shot learning. In few-shot learning, the model is expected to predict output on unknown input based on a few similar samples it has received prior to the task of prediction. In special cases, the number of samples might be exactly one, which is termed "one-shot learning".

Now, let us explore the next type of prompt available in MakerSuite: Data prompts.

Data Prompts

In Data prompts, the user is expected to use the model to generate more samples of data based on provided samples. The MakerSuite data prompt interface defines two sections of the prompt: the prompt itself, which is now optional, and the samples of data that the prompt has to work on, which are required.

It is important to note that at the bottom of the page, the model is still the Text Bison model. Thus, Data prompts can be understood as a specific use case of text generation using the Text Bison model. Further, there is no way to test a data prompt without specifying the inputs as one or more columns of the to-be-generated rows of the dataset.

Let us build a prompt for this interface. Since providing a prompt text is not necessary, we'll skip it and instead fill the table as shown below. In order to add more columns than are present by default, use the Add button on the top right.

Once this is done, we are ready to provide the input column for the test inputs. In the Test your prompt section at the bottom, fill in only the INPUT number column as shown below.

Now, click on the Run button to see how the model produces outputs for this prompt. We see that the model produces the rest of the data for those rows correctly, using the format we provided it with. This makes us wonder: if we provide historical data to a Data prompt, will it be able to predict future trends? Let us give this a try.

Create a new Data prompt and, on the data examples table, click Add -> Import examples on the top right.
You may choose any existing Google Sheet from the dialog box, or upload any supported file. We choose to upload a CSV file, namely the Iris flower dataset's CSV; we use the one found at https://gist.github.com/netj/8836201/

On selecting the file, the interface will ask you to assign the columns in the CSV to columns in your data examples. We choose to create new input columns for all the feature columns of the Iris dataset and keep the labels column as an output column, as shown below.

After importing the examples, let us manually move a few examples to the Test your prompt section. Remember to remove these examples from the data examples section above to ensure the model is not training on the same data that it is being tested on. Now, click the Run button to get the model's output.

We observe that the model correctly outputs the label column values as per the examples it has received. Hence, besides generating more examples for a given dataset, the model is also capable of making predictions about the inputs to a degree. One would require much more extensive testing to determine the accuracy of the model, which is beyond the scope of this article.

Finally, let us explore Chat prompts.

Chat Prompts

Chatting is the form in which most people first interacted with generative AI models. Made popular once more by the advent of ChatGPT, the concept of AI being able to hold intelligent conversations has been around for a very long time and has regularly been a part of popular culture; one of the most well-known examples of an AI that can take instructions and produce output accordingly is JARVIS from the Iron Man comics. With the latest possibilities enabled by generative AI, building such systems is very much a realistic task, with efforts already underway.

In this section, we shall see how we can have conversations with generative AI models that mimic human-like understanding and decision-making skills.

First, click on the Create New button on the top left of the MakerSuite interface and select the Chat prompt. You will see the blank interface for designing a Chat prompt. One immediate change to notice is that there is no longer a Run button at the bottom of the UI; it has been shifted to the Test your prompt section, which has a chat box-like interface, where the message send button functions as the Run button.

On the left, the interface section reads "Write your prompt examples"; we'll call this the prompt examples section. Also take note of the Context field available in this section, which can be used to set the rules of the interaction and the format in which the output is expected. Now, let us design a chat prompt with the following values:

- Context: You're a banker at the Gringotts bank, set in the Wizarding world of Harry Potter.
- User: I wish to access my account
- Model: Very well, please present your key.
- User: How safe are the vaults at Gringotts?
- Model: Gringotts' vaults are considered extremely safe. Protected by complex magic, various creatures, and intricate security measures, these vaults are nearly impenetrable. Unauthorized access is extraordinarily challenging and dangerous, as demonstrated multiple times in the series. This reputation contributes to the bank's trustworthiness among wizards.

We expect the model to pretend to be a banker at Gringotts Bank, which is referenced from the popular book series Harry Potter.
Since it's a fictional world and we expect the conversation to be similarly unbound from the real world, we should increase the model temperature, allowing it to be more creative. For this example, let's set the model temperature to 0.7.

Let us try having a conversation with the model. We observe that although we have not provided the model with an example of how to respond when the user says they do not have the key, it correctly handles the response based on its existing knowledge of Gringotts Bank's policies.

Now that we have covered the different types of prompts available in MakerSuite, let's explore how we can use them via code, making direct calls to the PaLM API; a minimal sketch follows the author bio below.

Author Bio

Anubhav Singh, co-founder of Dynopii and a Google Developer Expert in Google Cloud, is a seasoned developer from the pre-Bootstrap era with extensive experience as a freelancer and AI startup founder. He authored "Hands-on Python Deep Learning for Web" and "Mobile Deep Learning with TensorFlow Lite, ML Kit, and Flutter." He co-organizes the TFUG Kolkata community, formerly led the team at GDG Cloud Kolkata, and is often found discussing system architecture, machine learning, and web technologies.
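As a pointer toward that next step, here is a minimal sketch of calling the PaLM API from Python. It assumes the google-generativeai package (the PaLM API client at the time of writing) and an API key generated from MakerSuite; the prompt and parameter values mirror the etymology example above, and the API key is a placeholder:

```python
# Minimal sketch: calling the PaLM API with the etymology prompt from above.
# Assumes `pip install google-generativeai` and an API key from MakerSuite.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder: substitute your own key

prompt = (
    "For any given word that follows, provide its etymology "
    "in no more than 300 words.\n\nAeroplane.\nEtymology:"
)

completion = palm.generate_text(
    model="models/text-bison-001",  # the Text Bison model used in MakerSuite
    prompt=prompt,
    temperature=0.0,       # deterministic, matching the example above
    max_output_tokens=300,
)
print(completion.result)
```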


Build Enterprise AI Workflows with AirOps

Julian Melanson
12 Jul 2023
4 min read
In the realm of artificial intelligence, immense potential is no longer an abstract concept but a palpable reality, and businesses are increasingly seizing the opportunities this technology affords. AirOps, a new player in the AI sphere, has emerged as a remarkable conduit for businesses to harness the transformative abilities of AI within their operations. The company has announced a $7 million seed funding round, showing the confidence investors place in its unique proposition.

Founded by Alex Halliday, Berna Gonzalez, and Matt Hammel, the company encapsulates a blend of technological knowledge and industry expertise. Their collective backgrounds span companies such as MasterClass and Bungalow, among others. This multifaceted perspective fuels the vision of AirOps, allowing it to offer dynamic and adaptable solutions tailored to a multitude of business needs.

AirOps deploys a platform leveraging large language models (LLMs) such as GPT-3, GPT-4, and Claude, each with its unique capabilities and merits. The AI-driven tools developed by AirOps can be integrated within existing business systems, speeding up processes, revealing deep insights from data, and generating custom content. These services are available across various interfaces, including Google Sheets, web apps, data warehouses, and APIs, allowing businesses to embed AI capabilities directly into their established workflows.

AirOps' Main Features

Despite the impressive abilities of LLMs like GPT-4, the challenge for businesses lies in their practical deployment. AirOps mitigates this hurdle, offering a robust platform that enables businesses to use these AI models to address their specific needs. The platform helps users automate laborious tasks, generate personalized content, extract valuable insights from data, and leverage natural language processing techniques.

One of the salient features of AirOps' value proposition is cost efficiency. Utilizing AI models can be a costly endeavor, but AirOps offers an innovative solution: the system employs larger models such as GPT-4 for initial training, then switches to smaller, fine-tuned, open-source models for regular operations, significantly reducing the financial burden.

As AI evolves, the demand for nuanced and adaptable models increases. AirOps is at the forefront of these developments, continually learning and adapting to offer the most suitable solutions for its customers. AirOps aids businesses in creating AI experiences and generating new content from their existing data corpus, paving the way for a streamlined and efficient approach to making the most of AI capabilities.

The company's strategic vision is also worth noting. Initially, AirOps set out to help businesses extract value from their data. However, as large language models have gained public recognition, the company has astutely shifted its focus: today, AirOps aims to help businesses merge their data with LLMs, leading to the creation of custom workflows and applications.

As AI continues to permeate the professional sphere, AirOps is showing how businesses can capitalize on this trend. Their AI-powered tools are being used across a variety of sectors, such as real estate, e-learning, and financial services, among others.
By automating complex tasks, streamlining workflows, and generating custom content at scale, AirOps is empowering businesses to harness the transformative capabilities of AI effectively and efficiently.

With its recent seed funding, the company aims to expand its product suite, bolster its team, and extend its customer base. As Halliday, the CEO, stated, the company's goal is to enable businesses to bridge the gap between the theoretical prowess of AI and its practical implementation. Through its groundbreaking work, AirOps is ensuring that the AI revolution in the business world is not merely a utopian vision but an attainable reality.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


AI-Powered Stock Selection

Julian Melanson
12 Jul 2023
4 min read
Artificial intelligence continues to infiltrate every facet of modern life, from daily chores to complex decision-making procedures, including the stock market. The recent advent of AI-powered language models like ChatGPT by OpenAI serves as a notable testament to this statement. The potential of these models transcends conversational prowess, delving into the ability to guide investment decisions.

A case in point is an experiment conducted by Finder.com, an international financial comparison site. The test pitted an AI-constructed portfolio against some of the most renowned investment funds in the United Kingdom, with the AI-curated selection outstripping its counterparts. The portfolio, an assortment of 38 stocks picked by ChatGPT, manifested a gain of 4.9% between March 6 and April 28. In comparison, ten top-tier investment funds noted an average decline of 0.8% over the same period. To put this into perspective, the S&P 500 index, an esteemed gauge of the American market, marked a rise of 3%, and the Stoxx Europe 600, its European equivalent, noted a modest increase of 0.5%.

The experiment's dynamics are as intriguing as its outcome. Investment funds aggregate capital from a multitude of investors, with a fund manager administering the investment decisions. However, Finder's analysts asked the AI chatbot to construct a stock portfolio based on prevalent selection criteria: low indebtedness and a solid growth trajectory. Noteworthy picks included industry behemoths like Microsoft, Netflix, and Walmart.

This process's ingenuity lies in its accessibility. While AI has pervaded major funds for years, supplementing investment decisions, the advent of ChatGPT has democratized this expertise. Now the public can use this technology, thereby revolutionizing retail investment.

How dependable are these AI-driven stock predictions? A study by the University of Florida supplies an answer. Published in April, the study posits that ChatGPT could forecast specific companies' stock price movements more accurately than some fundamental analysis models.

In fact, the democratization of AI, characterized by models like ChatGPT and BERT, could potentially upend the financial industry. Researchers across the globe have corroborated this sentiment: in two separate studies, researchers found that large language models (LLMs) can enhance stock market and public opinion predictions, as evidenced by historical data.

University of Florida professors Alejandro Lopez-Lira and Yuehua Tang further validated this argument in their paper "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models". They utilized ChatGPT to assess the sentiment of news headlines, a metric that has become indispensable for the quantitative analysis algorithms employed by stock traders.

Sentiment analysis discerns whether a text, such as a news headline, conveys a positive, neutral, or negative sentiment about a subject or company. This evaluation enhances the accuracy of market predictions.

Lopez-Lira and Tang applied ChatGPT to gauge the sentiment manifested in news headlines. Upon comparing ChatGPT's assessment of these news stories with the subsequent performance of company shares in their sample, they discovered statistically significant predictions, a feat unachieved by other LLMs.

The professors asserted, "Our analysis reveals that ChatGPT sentiment scores exhibit a statistically significant predictive power on daily stock market returns." This statement, substantiated by their findings, shows a strong correlation between the ChatGPT evaluation and the subsequent daily returns of the stocks in their sample, and underscores the potential of ChatGPT as a potent tool for predicting stock market movements based on sentiment analysis.
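To make the idea concrete, here is a minimal sketch of headline sentiment scoring in the spirit of that study. It is an illustration, not the authors' exact method: the prompt wording, function name, and headline are invented, and it assumes the openai Python package (the pre-1.0 interface current when this article was written) with an API key supplied as a placeholder:

```python
# Hypothetical sketch: scoring a news headline for a stock, in the spirit of
# Lopez-Lira and Tang. Prompt wording and the example headline are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: substitute your own key

def headline_sentiment(headline: str, company: str) -> str:
    """Return GOOD, BAD, or UNKNOWN for a headline's likely effect on a stock."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic scoring
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: GOOD, BAD, or UNKNOWN."},
            {"role": "user",
             "content": f"Is this headline good or bad for the short-term "
                        f"stock price of {company}?\nHeadline: {headline}"},
        ],
    )
    return response["choices"][0]["message"]["content"].strip()

print(headline_sentiment("Acme Corp beats quarterly earnings expectations", "Acme Corp"))
```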
Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!


Generating Text Effects with Adobe Firefly

Joseph Labrecque
02 Jul 2023
9 min read
Adobe Firefly Text Effects

Adobe Firefly is a new set of generative AI tools which can be accessed via https://firefly.adobe.com/ by anyone with an Adobe ID. To learn more about Firefly… have a look at their FAQ.

Image 1: Adobe Firefly

One of the more unique aspects of Firefly that sets it apart from other generative AI tools is Adobe's exploration of procedures that go beyond prompt-based image generation. A good example of this is what is called Text Effects in Firefly.

Text effects are also prompt-based… but use a scaffold determined by font choice and character set to constrain a generated set of styles to these letterforms. The styles themselves are based on user prompts, although there are other variants to consider as well.

In the remainder of this article, we will focus on the text effects basics available in Firefly.

Using Text Effects within Firefly

As mentioned in the introduction, we will continue our explorations of Adobe Firefly with the ability to generate stylized text effects from a text prompt. This is a bit different from the procedures that users might already be familiar with when dealing with generative AI, yet it retains many similarities with such processes.

When you first enter the Firefly web experience, you will be presented with the various workflows available.

Image 2: Firefly modules can be either active and ready to work with or in exploration

These appear as UI cards and present a sample image, the name of the procedure, a procedure description, and either a button to begin the process or a label stating that it is "in exploration". Those which are in exploration are not yet available to general users.

We want to locate the Text Effects module and click Generate to enter the experience.

Image 3: The Text effects module in Firefly

From there, you'll be taken to a view that showcases text styles generated through this process. At the bottom of this view is a unified set of inputs that prompt you to enter the text string you want to stylize, along with the invitation to enter a prompt to "describe the text effects you want to generate".

Image 4: The text-to-image prompt requests your input to begin

In the first part, which reads Enter Text, I have entered the text characters "Packt". For the second part of the input requesting a prompt, enter the following: "futuristic circuitry and neon lighting violet".

Click the Generate button when complete. You'll then be taken into the Firefly text effects experience.

Image 5: The initial set of four text effect variants is generated from your prompt, with the characters entered used as a scaffold

When you enter the text effects module proper, you are presented in the main area with a preview of your input text, which has been given a stylistic overlay generated from the descriptive prompt. Below this are a set of four variants, and below that are the text inputs that contain your text characters and the prompt itself.

To the right of this are your controls. These are presented in a user-friendly way and allow you to make certain alterations to your text effects. We'll explore these properties next to see how they can impact our text effect style.

Exploring the Text Effect Properties

Along the right-hand side of the interface are properties that can be adjusted. The first section here includes a set of sample prompts to try out.

Image 6: A set of sample prompts with thumbnail displays

Clicking on any of these sample thumbnails will execute the prompt attributed to it, overriding your original prompt.
This can be useful for those new to prompt-building within Firefly to generate ideas for their own prompts and to witness the capabilities of the generative AI. Choosing the View All option will display even more prompts.

Below the sample prompts, we have a very important adjustment that can be made in the form of Text effects fit.

Image 7: Text effects fit determines how tight or loose the visuals are bound to the scaffold

This section provides three separate options for you to choose from: Tight, Medium, or Loose. The default setting is Medium, and choosing either of the other options will have the effect of either tightening up all the little visual tendrils that expand beyond the characters, or letting them loose, generating even more beyond the bounds of the scaffold.

Let's look at some examples with our current scaffold and prompt:

Image 8: Tight will keep everything bound within the scaffold of the chosen characters

Image 9: Medium is the default and includes some additional visuals extending from the scaffold

Image 10: Loose creates many visuals beyond the bounds of the scaffold

One of the nice things about this set is that you can easily switch between the options to compare the resulting images and make an informed decision.

Next, we have the ability to choose a Font for the scaffold. There are currently a very limited set of fonts to use in Firefly. Similar to the sample prompts, choosing the View All option will display even more fonts.

Image 11: The font selection properties

When you choose a new font, it will regenerate the imagery in the main area of the Firefly interface, as the scaffold must be rebuilt. I've chosen Source Sans 3 as the new typeface. The visual is automatically regenerated based on the new scaffold created from the character structure.

Image 12: A new font is applied to our text and the effect is regenerated

The final section along the right-hand side of the interface is for Color choices. We have options for Background Color and for Text Color.

Image 13: Color choices are the final properties section

There is a very limited set of color swatches to choose from. The most important choice is whether you want the background of the generated image to be transparent or not.

Making Additional Choices

Okay, we'll now look at making final adjustments to the generated image and downloading the text effect image to our local computer. The first thing we'll choose is a variant, which can be found beneath the main image preview. A set of four thumbnail previews are available to choose from.

Image 14: Selecting from the presented variants

Clicking on each will change the preview above it to reveal the full variant as applied to your text effect. For instance, if I choose option #3 from the image above, the following changes would result:

Image 15: A variant is selected and the image preview changes to match

Of course, if you do not like any of the alternatives, you can always choose the initial thumbnail to revert back.

Once you have made the choice of variant, you can download the text effect as an image file to your local file system for use elsewhere. Hover over the large preview image and an options overlay appears.

Image 16: A number of options appear in the hover overlay, including the download option

We will explore these additional options in greater detail in a future article.
Click the download icon to begin the download process for that image. As Firefly begins preparing the image for download, a small overlay dialog appears.

Image 17: Content credentials are applied to the image as it is downloaded

Firefly applies metadata to any generated image in the form of content credentials, and the image download process begins.

What are content credentials? They are part of the Content Authenticity Initiative to help promote transparency in AI. This is how Adobe describes content credentials in their Firefly FAQ:

Content Credentials are sets of editing, history, and attribution details associated with content that can be included with that content at export or download. By providing extra context around how a piece of content was produced, they can help content producers get credit and help people viewing the content make more informed trust decisions about it. Content Credentials can be viewed by anyone when their respective content is published to a supporting website or inspected with dedicated tools. -- Adobe

Once the image is downloaded, it can be viewed and shared just like any other image file.

Image 18: The text effect image is downloaded and ready for use

Along with content credentials, a small badge is placed on the lower right of the image which visually identifies the image as having been produced with Adobe Firefly (beta).

There is a lot more Firefly can do, and we will continue this series in the coming weeks. Keep an eye out for an Adobe Firefly deep dive… exploring additional options for your generative AI creations!

Author Bio

Joseph Labrecque is a Teaching Assistant Professor and Instructor of Technology at the University of Colorado Boulder, an Adobe Education Leader, and a member of Adobe Partners by Design. He is a creative developer, designer, and educator with nearly two decades of experience creating expressive web, desktop, and mobile solutions. He joined the University of Colorado Boulder College of Media, Communication, and Information as faculty with the Department of Advertising, Public Relations, and Media Design in Autumn 2019. His teaching focuses on creative software, digital workflows, user interaction, and design principles and concepts. Before joining the faculty at CU Boulder, he was associated with the University of Denver as adjunct faculty and as a senior interactive software engineer, user interface developer, and digital media designer.

Labrecque has authored a number of books and video course publications on design and development technologies, tools, and concepts through publishers including LinkedIn Learning (Lynda.com), Peachpit Press, and Adobe. He has spoken at large design and technology conferences such as Adobe MAX and for a variety of smaller creative communities. He is also the founder of Fractured Vision Media, LLC, a digital media production studio and distribution vehicle for a variety of creative works.

Joseph holds a bachelor's degree in communication from Worcester State University and a master's degree in digital media studies from the University of Denver.

Author of the book: Mastering Adobe Animate 2023


Everything You Need to Know about AgentGPT

Avinash Navlani
02 Jul 2023
4 min read
Advanced language models have been used over the last couple of years to create a variety of AI products, including conversational AI tools and AI assistants. AgentGPT is a web-based platform that allows users to build and use AI agents right from their browsers. Making AgentGPT available to everyone and promoting community-based collaboration are its key goals.

ChatGPT provides accurate, meaningful, in-depth answers and discussion for given input questions; AgentGPT, on the other hand, is an AI agent platform that takes an objective and achieves it by thinking, learning, and taking actions.

AgentGPT can assist you with your goals without any installation or download. You just need to create an account to get the power of AI-enabled conversational agents: provide a name and an objective for your agent, and the agent will work toward the goal.

What is AgentGPT?

AgentGPT is an open-source platform built on OpenAI's GPT-3.5 models. It is an NLP-based technology that generates human-like text with accuracy and fluency: it can engage in conversations, answer questions, generate content, and assist with problem-solving.

How does AgentGPT work?

AgentGPT breaks a given objective down into smaller tasks, and the agent completes these specific tasks in order to achieve the goal. Its core strength is engaging in real, contextual conversation: it generates dynamic responses while learning from a large dataset, recognizes intentions, and responds in a human-like way.
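Conceptually, the agent runs a plan-and-execute loop. The sketch below is a hypothetical illustration of that loop, not AgentGPT's actual code: the function names are invented, and llm stands in for any call to a language model:

```python
# Hypothetical sketch of an AgentGPT-style plan-and-execute loop.
# `llm` is a stand-in for any text-completion call; names are illustrative.
from collections import deque

def run_agent(goal: str, llm, max_steps: int = 10):
    # Ask the model to break the objective into an initial task list
    plan = llm(f"Break the goal '{goal}' into a short list of tasks, one per line.")
    tasks = deque(line for line in plan.splitlines() if line.strip())
    results = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the current task in the context of the overall goal
        results.append((task, llm(f"Goal: {goal}\nTask: {task}\nComplete this task.")))
        # Let the model add follow-up tasks based on what it just learned
        follow_up = llm(f"Goal: {goal}\nJust completed: {task}\n"
                        "List any new tasks, one per line, or reply NONE.")
        if follow_up.strip().upper() != "NONE":
            tasks.extend(line for line in follow_up.splitlines() if line.strip())
    return results
```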
How to Use AgentGPT?

Let's first create an account on reworkd.ai. After creating the account, deploy the agent by providing the agent's name and objective.

In the snapshot below, you can see that we are deploying an agent for fake news detection. As a user, we just need to provide two inputs: a name and a goal. In our case, we have provided "Fake News Detection" as the name and "Build a classifier for detecting fake news articles" as the goal.

Image 1: AgentGPT page

Once you click Deploy Agent, it starts identifying tasks and adds them all to a queue, then executes the tasks one by one.

Image 2: Queue of tasks

In the snapshot below, you can see it has completed two tasks and is working on the third (extract relevant features). For each task, it also provides code samples to implement the task.

Image 3: Code samples

Once your goal is achieved, you can save the results by clicking the Save button in the top-right corner.

You can also improve performance by providing relevant examples, using the ReAct approach to improve prompting, and upgrading from the local to the Pro version. You can also set up AgentGPT on a local machine; for detailed instructions, you can follow this link.

Summary

Currently, AgentGPT is in the beta phase, and the developer community is actively working on its features and use cases. It is a significant milestone in the era of advanced large language models, and its ability to generate human-like responses opens up potential opportunities for industrial applications such as customer service, content generation, decision support systems, and personal assistance.

Author Bio

Avinash Navlani has over 8 years of experience working in data science and AI. Currently, he is working as a senior data scientist, improving products and services for customers by using advanced analytics, deploying big data analytical tools, creating and maintaining models, and onboarding compelling new datasets. Previously, he was a university lecturer, where he trained and educated people in data science subjects such as Python for analytics, data mining, machine learning, database management, and NoSQL. Avinash has been involved in research activities in data science and has been a keynote speaker at many conferences in India.

Link - LinkedIn

Python Data Analysis, Third edition

Revolutionizing Business Productivity using CassidyAI

Julian Melanson
28 Jun 2023
6 min read
In recent times, the entrepreneurial environment has seen a surge in innovation, particularly in the realm of artificial intelligence. Among the stalwarts of this revolution is Neo, a startup accelerator masterminded by Silicon Valley investor Ali Partovi. In a groundbreaking move in March, Neo entered a strategic partnership with renowned AI research organization OpenAI and tech giant Microsoft Corp. The objective was clear: to offer no-cost software and expert advice to startups orienting their focus towards AI. This partnership's results are already tangible, with CassidyAI, a startup championed by content creator Justin Fineberg, being one of the companies benefiting from this initiative.

CassidyAI: A Pioneer in AI-Driven Business Automation

Fineberg recently announced that CassidyAI is stepping out from the shadows, shedding its stealth mode. CassidyAI's primary function is an embodiment of innovation: facilitating businesses to create customized AI assistants, thus automating tasks, optimizing productivity, and integrating AI across entire organizations. With this aim, CassidyAI is at the forefront of a paradigm shift in business process automation and management.

Amplifying Productivity: CassidyAI's Vision

At its core, CassidyAI embraces an ambitious vision: to multiply the productivity of every team within an organization by a factor of ten. This tenfold increase isn't just a lofty goal; it is a transformative approach that involves deploying AI technology across business operations. CassidyAI accomplishes this by providing a platform for generating bespoke AI assistants that cater to individual departmental needs. This process involves training these virtual assistants using the specific knowledge base and data sets of each department.

Harnessing AI Across Departments: Versatile Use Cases

The potential applications of CassidyAI's platform are practically limitless. The diversity of use cases underscores the flexibility and versatility of the AI-assistant creation process. In marketing, for instance, teams can train CassidyAI on their unique writing style and marketing objectives, thereby crafting content that aligns perfectly with the brand image. Similarly, sales teams can enhance their outreach initiatives by leveraging CassidyAI's understanding of the sales pitch, process, and customer profiles.

In customer service, AI assistants can respond to inquiries accurately and efficiently, with CassidyAI's ability to access comprehensive support knowledge. Engineering teams can train CassidyAI on their technical stack, engineering methods, and architecture, enabling more informed technical decisions and codebase clarity. Product teams can use CassidyAI's profound understanding of their team dynamics and user experience principles to drive product ideation and roadmap collaboration. Finally, HR departments can provide employees with quick access to HR documentation through AI assistants trained to handle such inquiries.

Data Security and Transparency: CassidyAI's Assurance

Beyond its vast application range, CassidyAI distinguishes itself through its commitment to data security and transparency. The platform's ability to import knowledge from various platforms ensures a deep understanding of a company's brand, operations, and unique selling propositions.
Equally important, all interactions with CassidyAI remain reliable and secure due to its stringent data handling practices and clear citation of sources.

Setting Up AI Automation: A No-Code Approach

CassidyAI's approach to implementing AI in businesses is straightforward and code-free, catering to those without programming skills. Businesses begin by securely uploading their internal data and knowledge to train CassidyAI on their unique products, strategies, processes, and more. They then construct AI assistants that are fine-tuned for their distinct use cases, without the need to write a single line of code. Once the AI assistants are ready, they can be shared across the team, fostering an atmosphere of AI adoption and collaboration throughout the organization.

Interestingly, the onboarding process for each company joining CassidyAI is currently personally overseen by Fineberg. Although this may limit the pace of early access, it provides a personalized and detailed introduction to CassidyAI's capabilities and potential. Companies interested in exploring CassidyAI's offerings can request a demo through their website.

CassidyAI represents a revolutionary approach to adopting AI technology in businesses. By creating tailored AI assistants that cater to the specific needs of different departments, it offers an opportunity to substantially improve productivity and streamline operations. Its emergence from stealth mode signals a new era of AI-led business automation and provides an exciting glimpse into the future of work. It is anticipated that as CassidyAI gains traction, more businesses will leverage this innovative tool to their advantage, fundamentally transforming their approach to task automation and productivity enhancement.

You can browse the website and request a demo here: https://www.cassidyai.com

Real-World Use Cases

Here are some specific examples of how CassidyAI is being used by real businesses:

Centrifuge: Centrifuge is using CassidyAI to originate real-world assets and to securitize them. This is helping Centrifuge to provide businesses with access to financing and to reduce risk.

Tinlake: Tinlake is using CassidyAI to automate the process of issuing and managing loans backed by real-world assets. This is helping Tinlake to provide a more efficient and cost-effective lending solution for businesses.

Invoice Finance: Invoice Finance is using CassidyAI to automate the processing of invoices and to provide financing to businesses based on the value of their invoices. This is helping Invoice Finance to provide a more efficient and timely financing solution for businesses.

Bondora: Bondora is using CassidyAI to assess the risk of loans and to provide investors with more information about the loans they are considering investing in. This is helping Bondora to provide a more transparent and efficient investment platform for investors.

Upstart: Upstart is using CassidyAI to assess the creditworthiness of borrowers and to provide them with more personalized lending terms. This is helping Upstart to provide a more inclusive and affordable lending solution for borrowers.

These are just a few examples of how CassidyAI is being used by real businesses to improve their operations and to provide better services to their customers. As CassidyAI continues to develop, it is likely that even more use cases will be discovered.

Summary

CassidyAI, a startup in partnership with Neo, OpenAI, and Microsoft, is revolutionizing business productivity through AI-driven automation.
Their platform enables businesses to create customized AI assistants, optimizing productivity and integrating AI across departments. With a no-code approach, CassidyAI caters to a range of use cases, including marketing, sales, customer service, engineering, product, and HR. The platform emphasizes data security and transparency while providing a personalized onboarding process. As CassidyAI emerges from stealth mode, it heralds a new era of AI-led business automation, offering businesses the opportunity to enhance productivity and streamline operations.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Practical AI in Excel: Create a Linear Regression Model

M.T White
28 Jun 2023
12 min read
AI is often associated with complex algorithms and advanced programming, but for basic linear regression models, Excel is a suitable tool. While Excel may not be commonly linked with AI, it can be an excellent option for building statistical machine-learning models. Excel offers similar modeling capabilities as other libraries, without requiring extensive setup or coding skills. It enables leveraging machine learning for predictive analytics without writing code. This article focuses on using Excel to build a linear regression model for predicting story points completed by a software development team based on hours worked.

What is Linear Regression?

Before a linear regression model can be built, it is important to understand what linear regression is and what it's used for. For many, their first real encounter with linear regression will come in the form of a machine learning library or machine learning cloud service. In terms of modern machine learning, linear regression is a supervised machine learning algorithm that is used for predictive analytics. In short, linear regression is a very common and easy-to-use machine learning model borrowed from the field of statistics. At its core, linear regression is a statistical analysis technique that models a relationship between two or more variables. In the most rudimentary sense, linear regression boils down to the following equation:

y = mx + b

As can be seen, the equation (that is, the linear regression model) is little more than the equation for a line. No matter the library or machine learning service that is used, in its purest form linear regression will boil down to the above equation. In short, linear regression is used for predictive, numerical models. In other words, linear regression produces models that attempt to predict a numerical value. This could be the weight of a person in relation to their height, the value of a stock in relation to the Dow, or anything similar to those two applications. As stated before, the model produced in this article will be used to predict the number of story points for a given number of hours worked.

Why should Excel be used?

Due to the statistical nature of linear regression, Excel is a prime choice for creating linear regression models. This is especially true if (among other things) one or more of the following conditions are met:

- The person creating the model does not have a strong computer science or machine learning background.
- The person needs to quickly produce a model.
- The data set is very small.

If a person simply needs to create a forecasting model for their team, forecast stocks, customer traffic, or whatever it may be, Excel will oftentimes be a better choice than creating a traditional program or using complex machine learning software. With that established, how would one go about creating a linear regression model?

Installing the Necessary Add-ins

To build a linear regression model, the following will be needed:

- A working copy of Excel.
- The Analysis ToolPak add-in for Excel.

The Analysis ToolPak is the workhorse for this tutorial. As such, if it is not installed, follow the steps in the next section; if the add-in is already installed, the following section can be skipped.

Installing the Data Analysis ToolPak

1. Click File -> Options -> Add-ins. Once done, the following wizard should appear:

Figure 1 – Options Wizard

2. Locate Analysis ToolPak and select it.
Once that is done, the following popup will appear.

Figure 2 – Add-ins Wizard

For this tutorial, all that is technically needed is the Analysis ToolPak, but it is a good idea to install the VBA add-in as well.

3. Verify the installation by navigating to the Data tab and checking that the Data Analysis tools are installed. If everything is installed properly, the following should be visible.

Figure 3 – Data Analysis Tool

Once the Analysis ToolPak is installed, a linear regression model can be generated with a few clicks of the mouse.

Building a Linear Regression Model to Predict Story Points

Once all the add-ins are installed, create a workbook and copy in the following data:

Hours   Story Points
16      13
15      12
15      11
13      4
22      8
28      18
30      19
10      3
21      14
11      7
12      9
25      19
24      17
23      15

Before the model can be built, the independent and dependent variables must be chosen. This is a fancy way of determining which column is going to be the input and which is going to be the output for the model. In this case, the goal is to predict the number of story points for a given number of hours worked. As such, when the model is created, the number of hours will be inputted to return the number of predicted story points. This means that the number of hours worked will be the independent variable, which will be on the X-axis of the graph, and the number of story points will be the dependent variable, which will be on the Y-axis. As such, to generate the model, perform the following steps:

1. Navigate to the Data tab and click Data Analysis. When complete, the following popup should appear.

Figure 4 – Regression Analysis

Scroll down and select Regression, then press the OK button.

2. Once step 1 is completed, the following wizard should appear.

Figure 5 – Regression Setup

Input the data the same way it is presented in Figure 5. Once done, the data should be rendered as in Figure 6.

Figure 6 – Linear Regression Output

At this point, the linear regression model has been produced. To make a prediction, all one has to do is multiply the number of hours worked by the Hours value in the Coefficient column and add the Intercept value in the Coefficient column to that product. However, it is advisable to generate a trendline and add the line's equation and the R-squared value to the chart to make things easier to see. This can be done by simply deleting the predicted dots and adding a trendline as in Figure 7.

Figure 7 – Trendline

The trendline will show the best fit for the model. In other words, the model will use the equation that governs the trendline to predict a value. To generate the line's equation, click the arrow button by Trendline and click More Options. When this is done, a sidebar should appear similar to the one in Figure 8.

Figure 8 – Format Trendline Menu

From here, select the "R-squared value" checkbox and the "Display Equation on chart" checkbox. When this is done, those values should be displayed on the graph as in Figure 9.

Figure 9 – Regression Model with line equation and R-squared value

To create a prediction, all one has to do is plug in the number of hours for x in the equation, and the computed value will be an approximation of the number of story points for the hours worked.

Interpreting the Model

Now that the model is generated, how good is it? This question can be answered with the data that was produced in Figure 6.
However, a whole book could be dedicated to interpreting those outputs, so for this article, only the Regression Statistics block, which can be thought of as the high-level summary of the model, will be explored. Consider the following data:

Regression Statistics
Multiple R          0.862529
R Square            0.743956
Adjusted R Square   0.722619
Standard Error      2.805677
Observations        14

The first value is Multiple R, sometimes called the Correlation Coefficient. This value can range from -1 to 0 or 0 to 1, depending on whether the correlation is negative or positive respectively. The closer the coefficient is to either -1 or 1, the better. With that, what is the difference between a negative and a positive correlation? Whether a correlation is negative or positive depends on the graph's orientation, which in turn determines the sign of the correlation coefficient. If the graph is downward oriented, the correlation is negative; for those models, the correlation coefficient will be less than 0. On the other hand, if the graph is upward oriented, like the graph produced by this model, it is said to have a positive correlation, which means the coefficient will be greater than 0. Consider Figure 10.

Figure 10 – Negative and Positive Correlation

Ultimately it doesn't matter whether the model has a positive or negative correlation. All the correlation means is that as one value rises, the other will either rise with it or fall. In terms of the model produced, the Multiple R value is .86. All things considered, that is a very good correlation coefficient.

The next important value to look at is the R-Squared value, or the Coefficient of Determination. This value describes how well the model fits the data. In other words, it determines how many data points fall on the line. The R-Squared value ranges from 0 to 1; the closer the value is to 1, the better the model. Though a value close to 1 is desirable, it is naive to assume that an R-Squared of 1 will ever be achievable. However, a lower R-Squared value is not necessarily a bad thing. Depending on what is being measured, what constitutes a "good" R-Squared value will vary. In the case of this model, the R-Squared is about .74, which means about 74% of the variation in the data can be explained by the model. Depending on the context of the application, that can be considered good, but it should be remembered that at most the model explains only 74% of what makes up the number of completed story points.

Adjusted R-Squared is simply a more precise view of the R-Squared value. In simple terms, the Adjusted R-Squared value determines how much of the variation in the dependent variable can be explained by the independent variables. The Adjusted R-Squared for this model is .72, which is in line with the R-Squared value.

Finally, the Standard Error is the last fitting metric. In a very simplistic sense, this metric is a measure of precision for the model. The standard error for this model is about 2.8. Much like the other metrics, what constitutes good is subjective, but the closer the value is to 0, the more precise the model is.
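For readers who want to sanity-check Excel's output outside of Excel, the same fit can be reproduced in a few lines of Python. This is an optional cross-check, assuming only that numpy is installed; it is not part of the Excel workflow itself.

import numpy as np

hours = np.array([16, 15, 15, 13, 22, 28, 30, 10, 21, 11, 12, 25, 24, 23])
points = np.array([13, 12, 11, 4, 8, 18, 19, 3, 14, 7, 9, 19, 17, 15])

# Fit a first-degree polynomial (a straight line): returns slope and intercept.
slope, intercept = np.polyfit(hours, points, 1)

# R-squared = 1 - (residual sum of squares / total sum of squares).
predicted = slope * hours + intercept
ss_res = np.sum((points - predicted) ** 2)
ss_tot = np.sum((points - points.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"y = {slope:.4f}x {intercept:+.4f}")  # roughly y = 0.6983x - 1.1457
print(f"R-squared: {r_squared:.6f}")         # roughly 0.743956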
Using the Model

Now that the model has been created, what would someone do with it? That is, how would they use it? The answer is surprisingly simple. The whole model is a line equation. That line will give an approximation of a value based on the given input. In the case of this model, a person would input the number of hours worked to try to predict the number of story points. As such, someone could simply input the number of hours in a calculator, add the equation to a spreadsheet, or do anything they want with it. Put simply, this or any other linear regression model is used by inputting a value or values and crunching the numbers. For example, the equation rendered was as follows:

y = 0.6983x - 1.1457

The spreadsheet could be modified to include a prediction cell, for example a formula along the lines of =0.6983*A2 - 1.1457, where cell A2 holds the number of hours worked. In this case, the user would simply have to input the number of hours worked to get a predicted number of story points.

The important thing to remember is that this model, along with any other regression model, is not gospel. Much like in any other machine learning system, these values are simply estimates based on the data that was fed into it. This means that if a different data set or subset is used, the model can and probably will be different.

Conclusion

In summary, a simple Excel spreadsheet was used to create a linear regression model. The linear regression model that was produced will probably be very similar to a model generated with dedicated machine learning software. Does this mean that everyone should abandon their machine-learning software packages and libraries and solely use Excel? The long and the short of it is no! Excel, much like a library such as Scikit-learn or any other, is a tool. However, for laypersons who don't have a strong computer science background and need to produce a quick regression model, Excel is an excellent tool to do so.

Author Bio

M.T. White has been programming since the age of 12. His fascination with robotics flourished when he was a child programming microcontrollers such as Arduino. M.T. currently holds an undergraduate degree in mathematics, and a master's degree in software engineering, and is currently working on an MBA in IT project management. M.T. is currently working as a software developer for a major US defense contractor and is an adjunct CIS instructor at ECPI University. His background mostly stems from the automation industry where he programmed PLCs and HMIs for many different types of applications. M.T. has programmed many different brands of PLCs over the years and has developed HMIs using many different tools.

Author of the book: Mastering PLC Programming

TaskMatrix: Bridging the Gap Between Text and Visual Understanding

Rohan Chikorde
28 Jun 2023
11 min read
Introduction

In the fast-paced digital landscape of today, the fusion of text and visual understanding has become paramount. As technology continues to advance, the integration of text and visuals has become essential for enhancing communication, problem-solving, and decision-making processes. With the advent of technologies like ChatGPT and Visual Foundation Models, we now have the ability to seamlessly exchange images during conversations and leverage their capabilities for various tasks. Microsoft's TaskMatrix system serves as a revolutionary solution that bridges the gap between text and visual understanding, empowering users to harness the combined power of these domains.

TaskMatrix is an innovative system developed by Microsoft, designed to facilitate collaboration between ChatGPT and Visual Foundation Models. By seamlessly integrating text and visual inputs, TaskMatrix enables users to enhance their communication, perform image-related tasks, and extract valuable insights from visual data. In this technical blog, we will explore the functionalities, applications, and technical intricacies of TaskMatrix, providing a comprehensive understanding of its potential and the benefits it offers to users.

Through an in-depth analysis of TaskMatrix, we aim to shed light on how this system can revolutionize the way we interact with text and visual elements. By harnessing the power of advanced machine learning models, TaskMatrix opens up new possibilities for communication, problem-solving, and decision-making, ultimately leading to improved user experiences and enhanced outcomes. Let us now dive deep into the world of TaskMatrix and uncover its inner workings and capabilities.

Understanding TaskMatrix

TaskMatrix is an open-source system developed by Microsoft with the aim of bridging the gap between ChatGPT and Visual Foundation Models. It serves as a powerful platform that enables the integration of image-related tasks within conversations, revolutionizing the way we communicate and solve problems.

One of the key features of TaskMatrix is its ability to facilitate image editing. Users can manipulate and modify images directly within the context of their conversations. This functionality opens up new avenues for creative expression and enables a richer visual experience during communication.

Furthermore, TaskMatrix empowers users with the capability of performing object detection and segmentation tasks. By leveraging the advanced capabilities of Visual Foundation Models, the system can accurately identify and isolate objects within images. This functionality enhances the understanding of visual content and facilitates better communication by providing precise references to specific objects or regions of interest.

The integration of TaskMatrix with ChatGPT is seamless, allowing users to combine the power of natural language processing with visual understanding. By exchanging images and leveraging the domain-specific knowledge of Visual Foundation Models, ChatGPT becomes more versatile and capable of handling diverse tasks effectively.

TaskMatrix introduces the concept of templates, which are pre-defined execution flows for complex tasks. These templates facilitate collaboration between different foundation models, enabling them to work together cohesively. With templates, users can execute multiple tasks seamlessly, leveraging the strengths of different models and achieving more comprehensive results.
Moreover, TaskMatrix supports both English and Chinese, making it accessible to a wide range of users across different linguistic backgrounds. The system is designed to be extensible, welcoming contributions from the community to enhance its functionalities and expand its capabilities.

Key Features and Functionalities

TaskMatrix provides users with a wide range of powerful features and functionalities that empower them to accomplish complex tasks efficiently. Let's explore some of the key features in detail:

Template-based Execution Flows: One of the standout features of TaskMatrix is its template-based approach. Templates are pre-defined execution flows that encapsulate specific tasks. They serve as a guide for executing complex operations involving multiple foundation models. Templates streamline the process and ensure smooth collaboration between different models, making it easier for users to achieve their desired outcomes.

Language Support: TaskMatrix supports multiple languages, including English and Chinese. This broad language support ensures that users from various linguistic backgrounds can leverage the system's capabilities effectively. Whether users prefer communicating in English or Chinese, TaskMatrix accommodates their needs, making it a versatile and accessible platform for a global user base.

Image Editing: TaskMatrix introduces a unique feature that allows users to perform real-time image editing within the conversation flow. This capability enables users to enhance and modify images seamlessly, providing a dynamic visual experience during communication. From basic edits such as cropping and resizing to more advanced adjustments like filters and effects, TaskMatrix equips users with the tools to manipulate images effortlessly.

Object Detection and Segmentation: Leveraging the power of Visual Foundation Models, TaskMatrix facilitates accurate object detection and segmentation. This functionality enables users to identify and locate objects within images, making it easier to reference specific elements during conversations. By extracting valuable insights from visual content, TaskMatrix enhances the overall understanding and communication of complex concepts.

Integration with ChatGPT: TaskMatrix seamlessly integrates with ChatGPT, a state-of-the-art language model developed by OpenAI. This integration enables users to combine the power of natural language processing with visual understanding. By exchanging images and leveraging the strengths of both ChatGPT and TaskMatrix, users can address a wide range of tasks and challenges, ranging from creative collaborations to problem-solving scenarios.

Technical Implementation

TaskMatrix utilizes a sophisticated technical implementation that combines the power of machine learning models, APIs, SDKs, and specialized frameworks to seamlessly integrate text and visual understanding. Let's take a closer look at the technical intricacies of TaskMatrix.

Machine Learning Models: At the core of TaskMatrix are powerful machine learning models such as ChatGPT and Visual Foundation Models. ChatGPT, developed by OpenAI, is a state-of-the-art language model that excels in natural language processing tasks. Visual Foundation Models, on the other hand, specialize in visual understanding tasks such as object detection and segmentation.
TaskMatrix leverages the capabilities of these models to process and interpret both text and visual inputs.

APIs and SDKs: TaskMatrix relies on APIs and software development kits (SDKs) to integrate with the machine learning models. APIs provide a standardized way for TaskMatrix to communicate with the models and send requests for processing. SDKs offer a set of tools and libraries that simplify the integration process, allowing TaskMatrix to seamlessly invoke the necessary functionalities of the models.

Specialized Frameworks: TaskMatrix utilizes specialized frameworks to optimize the execution and resource management of the machine learning models. These frameworks efficiently allocate GPU memory for each visual foundation model, ensuring optimal performance and fast response times, even for computationally intensive tasks. By leveraging the power of GPUs, TaskMatrix can process and analyze images with speed and accuracy.

Intelligent Routing: TaskMatrix employs intelligent routing algorithms to direct user requests to the appropriate model. When a user engages in a conversation that involves an image-related task, TaskMatrix analyzes the context and intelligently determines which model should handle the request. This ensures that the right model is invoked for accurate and relevant responses, maintaining the flow and coherence of the conversation.

Seamless Integration: TaskMatrix seamlessly integrates the responses from the visual foundation models back into the ongoing conversation. This integration ensures a natural and intuitive user experience, where the information and insights gained from visual analysis seamlessly blend with the text-based conversation. The result is a cohesive and interactive communication environment that leverages the combined power of text and visual understanding.

By combining machine learning models, APIs, SDKs, specialized frameworks, and intelligent routing algorithms, TaskMatrix achieves a technical implementation that seamlessly integrates text and visual understanding. This implementation optimizes performance, resource management, and user experience, making TaskMatrix a powerful tool for enhancing communication, problem-solving, and collaboration.
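To give a feel for the routing idea, here is a deliberately simplified sketch of intent-based dispatch in Python. The handler functions, the registry, and the keyword heuristic are illustrative assumptions; TaskMatrix's actual router is considerably more sophisticated.

from typing import Callable, Dict

# Hypothetical stand-ins for visual foundation model wrappers.
def detect_objects(image_path: str, request: str) -> str:
    return f"[object detection result for {image_path}]"

def edit_image(image_path: str, request: str) -> str:
    return f"[edited version of {image_path}]"

def chat_reply(image_path: str, request: str) -> str:
    return f"[plain language-model reply to: {request}]"

# Map trigger keywords to the model that should handle them.
ROUTES: Dict[str, Callable[[str, str], str]] = {
    "detect": detect_objects,
    "segment": detect_objects,
    "remove": edit_image,
    "replace": edit_image,
}

def route(request: str, image_path: str = "") -> str:
    # Dispatch to the first visual model whose keyword appears in the
    # request; fall back to the language model for everything else.
    for keyword, handler in ROUTES.items():
        if keyword in request.lower():
            return handler(image_path, request)
    return chat_reply(image_path, request)

print(route("Detect the dogs in this photo", "dogs.png"))
print(route("What is a foundation model?"))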
System Architecture

Image 1: System Architecture

Getting Started with TaskMatrix

To get started with TaskMatrix, you can follow the step-by-step instructions and documentation provided in the TaskMatrix GitHub repository. This repository serves as a central hub of information, offering comprehensive guidelines, code samples, and examples to assist users in setting up and utilizing the system effectively.

1. Access the GitHub Repository: Begin by visiting the TaskMatrix GitHub repository, which contains all the necessary resources and documentation. You can find the repository by searching for "TaskMatrix" on the GitHub platform.

2. Follow the Setup Instructions: The repository provides clear instructions on how to set up TaskMatrix. This typically involves installing the required dependencies, configuring the APIs and SDKs, and ensuring the compatibility of the system with your development environment. The setup instructions will vary depending on your specific use case and the programming language or framework you are using.

# clone the repo
git clone https://github.com/microsoft/TaskMatrix.git

# go to the directory (the repository was formerly named visual-chatgpt)
cd TaskMatrix

# create a new environment
conda create -n visgpt python=3.8

# activate the new environment
conda activate visgpt

# prepare the basic environments
pip install -r requirements.txt
pip install git+https://github.com/IDEA-Research/GroundingDINO.git
pip install git+https://github.com/facebookresearch/segment-anything.git

# prepare your private OpenAI key (for Linux)
export OPENAI_API_KEY={Your_Private_Openai_Key}

# prepare your private OpenAI key (for Windows)
set OPENAI_API_KEY={Your_Private_Openai_Key}

# Start TaskMatrix!
# You can specify the GPU/CPU assignment with "--load"; the parameter indicates which
# Visual Foundation Models to use and where each will be loaded.
# The model and device are separated by an underscore '_'; different models are separated by a comma ','.
# For example, to load ImageCaptioning to cpu and Text2Image to cuda:0,
# you can use: "ImageCaptioning_cpu,Text2Image_cuda:0"

# Advice for CPU users
python visual_chatgpt.py --load ImageCaptioning_cpu,Text2Image_cpu

# Advice for 1 Tesla T4 15GB (Google Colab)
python visual_chatgpt.py --load "ImageCaptioning_cuda:0,Text2Image_cuda:0"

# Advice for 4 Tesla V100 32GB
python visual_chatgpt.py --load \
    "Text2Box_cuda:0,Segmenting_cuda:0,Inpainting_cuda:0,ImageCaptioning_cuda:0,Text2Image_cuda:1,Image2Canny_cpu,CannyText2Image_cuda:1,Image2Depth_cpu,DepthText2Image_cuda:1,VisualQuestionAnswering_cuda:2,InstructPix2Pix_cuda:2,Image2Scribble_cpu,ScribbleText2Image_cuda:2,SegText2Image_cuda:2,Image2Pose_cpu,PoseText2Image_cuda:2,Image2Hed_cpu,HedText2Image_cuda:3,Image2Normal_cpu,NormalText2Image_cuda:3,Image2Line_cpu,LineText2Image_cuda:3"

3. Explore Code Samples and Examples: The TaskMatrix repository offers code samples and examples that demonstrate how to use the system effectively. These samples showcase various functionalities and provide practical insights into integrating TaskMatrix into your projects. By exploring the code samples, you can better understand the implementation details and gain inspiration for incorporating TaskMatrix into your own applications.

4. Engage with the Community: TaskMatrix has an active community of users and developers who are passionate about the system. You can engage with the community by participating in GitHub discussions, submitting issues or bug reports, and even contributing to the development of TaskMatrix through pull requests. The community is a valuable resource for support, knowledge sharing, and collaboration.

Demo

Example 1:

Image 2: Demo Part 1

Image 3: Demo Part 2

Example 2:

Image 5: Example 2 demo

Conclusion

TaskMatrix revolutionizes the synergy between text and visual understanding by seamlessly integrating ChatGPT and Visual Foundation Models. By enabling image-related tasks within conversations, TaskMatrix opens up new avenues for collaboration and problem-solving.
With its intuitive template-based execution flows, language support, image editing capabilities, and object detection and segmentation functionalities, TaskMatrix empowers users to tackle diverse tasks efficiently.

As the fields of natural language understanding and computer vision continue to evolve, TaskMatrix represents a significant step forward in bridging the gap between text and visual understanding. Its potential applications are vast, spanning industries such as e-commerce, virtual assistance, content moderation, and more. Embracing TaskMatrix unlocks a world of possibilities, where the fusion of text and visual elements enhances human-machine interaction and drives innovation to new frontiers.

Author Bio

Rohan Chikorde is an accomplished AI Architect professional with a post-graduate in Machine Learning and Artificial Intelligence. With almost a decade of experience, he has successfully developed deep learning and machine learning models for various business applications. Rohan's expertise spans multiple domains, and he excels in programming languages such as R and Python, as well as analytics techniques like regression analysis and data mining. In addition to his technical prowess, he is an effective communicator, mentor, and team leader. Rohan's passion lies in machine learning, deep learning, and computer vision.

LinkedIn

Emotionally Intelligent AI: Transforming Healthcare with Wysa

Julian Melanson
22 Jun 2023
6 min read
Artificial Intelligence is reshaping the boundaries of industries worldwide, and healthcare is no exception. An exciting facet of this technological revolution is the emergence of empathetic AI: sophisticated algorithms designed to understand and respond to human emotions.

The advent of AI in healthcare has promised efficiency, accuracy, and predictability. However, a crucial element remained largely unexplored: the human component of empathetic understanding. Empathy, defined as the capacity to understand and share the feelings of others, is fundamental to human interactions. Especially in healthcare, a practitioner's empathy can bolster patients' immune responses and overall well-being. Unfortunately, complex emotions associated with medical errors, such as hurt, frustration, and depression, often go unaddressed. Furthermore, emotional support is strongly correlated with the prognosis of chronic diseases such as cardiovascular disorders, cancer, and diabetes, underscoring the need for empathetic care. So how can we integrate empathy into AI? To find the answer, let's examine Wysa, an AI-backed mental health service platform.

Wysa, leveraging AI's power, simulates a conversation with a chatbot to provide emotional support to users. It offers an interactive platform for people experiencing mood swings, stress, and anxiety, delivering personalized suggestions and tools to manage their mental health. This AI application extends beyond mere data processing and ventures into the realm of human psychology, demonstrating a unique fusion of technology and empathy.

In 2022, the U.S. Food and Drug Administration (FDA) awarded Wysa the Breakthrough Device Designation. This designation followed an independent, peer-reviewed clinical trial published in the Journal of Medical Internet Research (JMIR). The study demonstrated Wysa's efficacy in managing chronic musculoskeletal pain and associated depression and anxiety, positioning it as a potential game-changer in mental health care.

Wysa's toolset is primarily based on cognitive behavioral therapy (CBT), a type of psychotherapy that helps individuals change unhelpful thought patterns. It deploys a smartphone-based conversational agent to deliver CBT, effectively reducing symptoms of depression and anxiety, improving physical function, and minimizing pain interference.

The FDA Breakthrough Device program is designed to expedite the development and approval of innovative medical devices and products. By granting this designation, the FDA acknowledged Wysa's potential to transform the treatment landscape for life-threatening or irreversibly debilitating diseases. This prestigious endorsement facilitates efficient communication between Wysa and the FDA's experts, accelerating the product's development during the premarket review phase.

Wysa's success encapsulates the potential of empathetic AI to revolutionize healthcare. However, to fully capitalize on this opportunity, healthcare organizations need to revise and refine their strategies. An effective emotional support mechanism, powered by empathetic AI, can significantly enhance patient safety, satisfaction scores, and, ultimately, quality of life. For this to happen, continued development of technologies that cater to patients' emotional needs is paramount.

While AI's emergence in healthcare has often been viewed through the lens of improved efficiency and decision-making, the human touch should not be underestimated.
As Wysa demonstrates, AI has the potential to extend beyond its traditional boundaries and bring a much-needed sense of empathy into the equation. An emotionally intelligent AI could be instrumental in providing round-the-clock emotional support, thereby revolutionizing mental health care.

As we advance further into the AI era, the integration of empathy into AI systems signifies an exciting development. AI platforms like Wysa, which blend technological prowess with human-like understanding, could be a pivotal force in transforming the healthcare landscape. As empathetic AI continues to evolve, it holds the promise of bridging the gap between artificial and human intelligence, ultimately enhancing patient care in the healthcare sector.

A Step-By-Step Guide to Using Wysa

1. Download the App: Android users can download Wysa from the Google Play Store. If you're an Apple user, you can find Wysa in the Apple App Store.

2. Explore the App: Once installed, you can explore Wysa's in-app activities, which feature various educational modules, or "Packs". These packs cover a range of topics, from stress management and managing anger to coping with school stress and improving sleep.

3. Engage with the Wysa Bot: Each module features different "exercises" guided by the Wysa AI bot, a friendly penguin character. These exercises may involve question-answers, mindfulness activities, or short exercise videos. While all the modules can be viewed in the free app, only one exercise per module is accessible. To unlock the entire library, you'll need to upgrade to the premium app.

4. Consider the Therapy Option: Wysa also offers a "therapy" option, which gives you access to a mental health coach and all the content in the premium version. Do note that this service is not formal therapy as provided by licensed therapists. The coaches are based in the US or India, and while they can offer support and encouragement, they are not able to provide diagnoses or treatment.

5. Attend Live Sessions: Live sessions are carried out through instant messaging in the app, lasting 30 minutes each week. In between these live sessions, you can message your coach at any time and usually expect at least a daily response.

6. Complete Assigned Tasks: After each live session, your coach will assign you specific tasks to complete before your next session. You will complete these tasks guided by the Wysa AI bot.

7. Maintain Anonymity: An important feature of Wysa is its respect for user privacy. The app doesn't require you to create an account, enter your real name, or provide an email address. To get started, all you need is a nickname.

Remember, Wysa is a tool designed to help manage stress and anxiety, improve sleep, and promote overall mental wellbeing. However, it does not replace professional psychological or medical advice. Always consult with a healthcare professional if you are in need of immediate assistance or dealing with severe mental health issues.

Summary

Artificial intelligence (AI) is transforming healthcare in many ways, including by providing new tools for mental health management. One example of an AI-powered mental health app is Wysa, which uses conversational AI to help users cope with stress, anxiety, and depression. Wysa has been clinically proven to be effective in reducing symptoms of mental illness, and it can be used as a supplement to traditional therapy or as a standalone intervention.

As AI continues to develop, it is likely that we will see even more innovative ways to use this technology to improve mental health care.
AI-powered apps like Wysa have the potential to make mental health care more accessible and affordable, and they can also help to break down the stigma around mental illnesses.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Create an AI-Powered Coding Project Generator

Luis Sobrecueva
22 Jun 2023
8 min read
Overview

Making a smart coding project generator can be a game-changer for developers. With the help of large language models (LLMs), we can generate entire code projects from a user-provided prompt.

In this article, we are developing a Python program that utilizes OpenAI's GPT-3.5 to generate code projects and slide presentations based on user-provided prompts. The program is designed as a command-line interface (CLI) tool, which makes it easy to use and integrate into various workflows.

Image 1: Weather App

Features

Our project generator will have the following features:

- Generates entire code projects based on user-provided prompts
- Generates entire slide presentations based on user-provided prompts (watch a demo here)
- Uses OpenAI's GPT-3.5 for code generation
- Outputs to a local project directory

Example Usage

Our tool will be able to generate a code project from a user-provided prompt. For example, this line will create a snake game:

maiker "a snake game using just html and js"

We can then open the generated project in our browser:

open maiker-generated-project/index.html

Image 2: Generated Project

Implementation

To ensure a comprehensive understanding of the project, let's break down the process of creating the AI-powered coding project generator step by step:

1. Load environment variables: We use the `dotenv` package to load environment variables from a `.env` file. This file should contain your OpenAI API key.

from dotenv import load_dotenv

load_dotenv()

2. Set up the OpenAI API client: We set up the OpenAI API client using the API key loaded from the environment variables.

import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

3. Define the `generate_project` function: This function is responsible for generating code projects or slide presentations based on the user-provided prompt. Let's break down the function in more detail.

def generate_project(prompt: str, previous_response: str = "", type: str = "code") -> Dict[str, str]:

The function takes three arguments:

- prompt: The user-provided prompt describing the project to be generated.
- previous_response: A string containing the previously generated files, if any. This is used to avoid generating the same files again if the tool runs more than one loop.
- type: The type of project to generate, either "code" or "presentation".

Inside the function, we first create the system and user prompts based on the input type (code or presentation).

if type == "presentation":
    # ... (presentation-related prompts)
else:
    # ... (code-related prompts)

For code projects, we create a system prompt that describes the role of the API as a code generator and a user prompt that includes the project description and any previously generated files. For presentations, we create a system prompt that describes the role of the API as a reveal.js presentation generator and a user prompt that includes the presentation description.

Next, we call the OpenAI API to generate the code or presentation using the created system and user prompts.

completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": system_prompt,
        },
        {
            "role": "user",
            "content": user_prompt,
        },
    ],
    temperature=0,
)

We use the openai.ChatCompletion.create method to send a request to the GPT-3.5 model. The `messages` parameter contains an array of two messages: the system message and the user message. The `temperature` parameter is set to 0 to encourage deterministic output.
Once we receive the response from the API, we extract the generated code from the response.

generated_code = completion.choices[0].message.content

Writing the files to disk: We then attempt to parse the generated code as a JSON object. If the parsing is successful, we return the parsed JSON object, which is a dictionary containing the generated files and their content (an example of the shape of this dictionary is shown after the walkthrough below). If the parsing fails, we raise an exception with an error message.

try:
    if generated_code:
        generated_code = json.loads(generated_code)
except json.JSONDecodeError as e:
    raise click.ClickException(
        f"Code generation failed. Please check your prompt and try again. Error: {str(e)}, generated_code: {generated_code}"
    )

return generated_code

This dictionary is then used by the `main` function to save the generated files to the specified output directory.

4. Define the `main` function: This function is the entry point of our CLI tool. It takes a project prompt, an output directory, and the type of project (code or presentation) as input. It then calls the `generate_project` function to generate the project and saves the generated files to the specified output directory.

def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)

Inside the main function, we ensure the output directory exists, generate the project, and save the generated files.

# ... (inside main function)
os.makedirs(output_dir, exist_ok=True)
for _loop in range(max_loops):
    generated_code = generate_project(prompt, ",".join(generated_files), type)
    for filename, contents in generated_code.items():
        # ... (rest of the code)

5. Create a Click command: We use the `click` package to create a command-line interface for our tool. We define the command, its arguments, and options using the `click.command`, `click.argument`, and `click.option` decorators.

import click

@click.command()
@click.argument("prompt")
@click.option(
    "--output-dir",
    "-o",
    default="./maiker-generated-project",
    help="The directory where the generated code files will be saved.",
)
@click.option('-t', '--type', required=False, type=click.Choice(['code', 'presentation']), default='code')
def main(prompt: str, output_dir: str, type: str):
    # ... (rest of the code)

6. Run the CLI tool: Finally, we run the CLI tool by calling the `main` function when the script is executed.

if __name__ == "__main__":
    main()
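For illustration, a successful response for the snake-game prompt would parse into a dictionary shaped roughly like the one below. The file names and contents here are hypothetical, chosen only to show the structure the tool expects.

generated_code = {
    "index.html": "<!DOCTYPE html><html><body><canvas id='board'></canvas><script src='game.js'></script></body></html>",
    "game.js": "// snake game logic would go here",
    "style.css": "body { margin: 0; }",
}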
In this article, we have used `... (rest of the code)` as a placeholder to keep the explanations concise and focused on specific parts of the code. The complete code for the AI-powered coding project generator can be found in the GitHub repository at the following link: https://github.com/lusob/maiker-cli

By visiting the repository, you can access the full source code, which includes all the necessary components and functions to create the CLI tool. You can clone or download the repository to your local machine, install the required dependencies, and start using the tool to generate code projects and slide presentations based on user-provided prompts.

Conclusion

With the current AI-powered coding project generator, you can quickly generate code projects and slide presentations based on user-provided prompts. By leveraging the power of OpenAI's GPT-3.5, you can save time and effort in creating projects and focus on other important aspects of your work. However, it is important to note that the complexity of the generated projects is currently limited due to the model's token limitations. GPT-3.5 has a maximum token limit, which restricts the amount of information it can process and generate in a single API call. As a result, the generated projects might not be as comprehensive or sophisticated as desired for more complex applications.

The good news is that with the continuous advancements in AI research and the development of new models with larger context windows (e.g., models with more than 100k context tokens), we can expect significant improvements in the capabilities of AI-powered code generators. These advancements will enable the generation of more complex and sophisticated projects, opening up new possibilities for developers and businesses alike.

Author Bio

Luis Sobrecueva is a software engineer with many years of experience working with a wide range of different technologies in various operating systems, databases, and frameworks. He began his professional career developing software as a research fellow in the engineering projects area at the University of Oviedo. He continued in a private company developing low-level (C/C++) database engines and visual development environments, then jumped into the world of web development, where he met Python and discovered his passion for machine learning, applying it to various large-scale projects, such as creating and deploying a recommender for a job board with several million users. It was also at that time that he began to contribute to open-source deep learning projects and to participate in machine learning competitions, taking several ML courses and obtaining various certifications, notably a MicroMasters Program in Statistics and Data Science at MIT and a Udacity Deep Learning nanodegree. He currently works as a Data Engineer at a ride-hailing company called Cabify, but continues to develop his career as an ML engineer by consulting and contributing to open-source projects such as OpenAI and AutoKeras.

Author of the book: Automated Machine Learning with AutoKeras

Creating Stunning Images from Text Prompt using Craiyon

Julian Melanson
21 Jun 2023
6 min read
In an era marked by rapid technological progress, Artificial Intelligence continues to permeate various fields, fostering innovation and transforming paradigms. One area that has notably experienced a surge of AI integration is the world of digital art. A range of AI-powered websites and services have emerged, enabling users to transform text descriptions into images, artwork, and drawings. Among these innovative platforms, Craiyon, formerly known as DALL-E mini, stands out as a compelling tool that transforms the artistic process through its unique text-to-image generation capabilities.

Developed by Boris Dayma, Craiyon was born out of an aspiration to create a free, accessible AI tool that could convert textual descriptions into corresponding visuals. The concept of Craiyon was not conceived in isolation; rather, it emerged as a collaborative effort, with contributions from the expansive open-source community playing a significant role in the evolution of its capabilities.

The AI behind Craiyon leverages the computing power of Google's TPU Research Cloud (TRC). It undergoes rigorous training that imbues it with the ability to generate novel images based on user-provided textual inputs. In addition, Craiyon houses an expansive library of existing images that can further assist users in refining their queries.

While the abilities of such an image generation model are undeniably impressive, certain limitations do exist. For instance, the model sometimes generates unexpected or stereotypical imagery, reflecting inherent biases in the datasets it was trained on. Notwithstanding these constraints, Craiyon's innovative technology holds substantial promise for the future of digital art.

Image 1: Craiyon home page

Getting Started with Craiyon

If you wish to test the waters with Craiyon's AI, the following steps can guide you through the process:

1. Accessing Craiyon: First, navigate to the Craiyon website: https://www.craiyon.com. While you have the option to create a free account, you might want to familiarize yourself with the platform before doing so.

2. Describing Your Image: Upon landing on the site, you will find a space where you can type a description of the image you wish to generate. To refine your request, consider specifying elements you want to be excluded from the image. Additionally, decide whether you want your image to resemble a piece of art, a drawing, or a photo.

3. Initiating the Generation Process: Once you are satisfied with your description, click the "Draw" button. The generation process might take a moment, but it will eventually yield multiple images that match your description.

4. Selecting and Improving Your Image: Choose an image that catches your eye. For a better viewing experience, you can upscale the resolution and quality of the image. If you wish to save the image, use the "Screenshot" button.

5. Revising Your Prompt: If you are not satisfied with the generated images, consider revising your prompt. Craiyon might suggest alternative prompts to help you obtain improved results.

6. Viewing and Saving Your Images: If you have created a free account, you can save the images you like by clicking the heart icon. You can subsequently access these saved images through the "My Collection" option under the "Account" button.

Use Cases

Creating art and illustrations: Craiyon can be used to create realistic and creative illustrations, paintings, and other artworks.
- Creating art and illustrations: Craiyon can produce realistic and creative illustrations, paintings, and other artworks. This can be a great way for artists to explore new ideas and techniques, or to create digital pieces that would be difficult or time-consuming to make by hand.
- Generating marketing materials: Craiyon can create eye-catching images and graphics for marketing campaigns, including social media posts, website banners, and product illustrations.
- Designing products: Craiyon can generate designs for products such as clothing, furniture, or toys. This is a great way to get feedback on new product ideas, or to mock up prototypes before manufacturing.
- Educating and communicating: Craiyon can create educational and informative images, such as diagrams, charts, and infographics, and can help communicate complex ideas in a more accessible way.
- Personalizing experiences: Craiyon can personalize experiences, such as creating custom wallpapers, avatars, or greeting cards, adding a touch of individuality to your devices and communications.

Craiyon is still under development, but it has the potential to be a powerful tool for a variety of uses. As the technology continues to improve, we can expect to see even more creative and innovative use cases. Here are a few more:

- Generating memes: Craiyon can generate memes, the humorous images often shared online. This can be a fun way to express yourself or to join in on a current trend.
- Creating custom content: Craiyon can create custom images and graphics for your website, blog, or social media channels.
- Experimenting with creative ideas: If you have an idea for an image or illustration, you can use Craiyon to see how it would look. This is a great way to get feedback on your ideas or to explore new ways of thinking about art.

Summary

The emergence of AI tools like Craiyon marks a pivotal moment in the intersection of art and technology. While it is crucial to acknowledge the limitations of such tools, their potential to democratize artistic expression and generate creative inspiration is remarkable. As Craiyon and similar platforms continue to evolve, we can look forward to a future where the barriers between language and visual expression are further blurred, opening up new avenues for creativity and innovation.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

ChatGPT on Your Hardware: GPT4All

Valentina Alto
20 Jun 2023
7 min read
We all got familiar with conversational UIs powered by Large Language Models: ChatGPT has been the first and most powerful example of how LLMs can boost our productivity daily. LLMs are, by design, "large", meaning they are made of billions of parameters, and they are therefore hosted on powerful infrastructure, typically in the public cloud (OpenAI's models, including ChatGPT, are hosted in Microsoft Azure). As such, those models are only accessible with an internet connection.

But what if you could run those powerful models on your local PC, with a ChatGPT-like experience?

Introducing GPT4All

GPT4All is an ecosystem to train and deploy powerful, customized large language models that run locally on consumer-grade CPUs, with no GPU required. A GPT4All model is a 3GB-8GB file that you can download and plug into the GPT4All open-source ecosystem software, which is optimized to host models of between 7 and 13 billion parameters.

To start working with the GPT4All Desktop app, download it from the official website: https://gpt4all.io/index.html.

At the moment, there are three main LLM families you can use within GPT4All:

- LLaMA: a collection of foundation language models ranging from 7B to 65B parameters, developed by Meta AI and trained on trillions of tokens from 20 languages that use Latin or Cyrillic scripts. LLaMA can generate human-like conversations and outperforms GPT-3 on most benchmarks despite being 10x smaller. It is designed to run on less computing power and to be versatile and adaptable to many different use cases, though, like other large language models, it still faces challenges such as bias, toxicity, and hallucination. Meta AI has released the LLaMA models to the research community for open science.
- GPT-J: an open-source language model developed by EleutherAI. It is a GPT-2-like causal language model trained on the Pile, an open-source 825-gigabyte language-modeling dataset split into 22 smaller datasets. GPT-J has 6 billion parameters and can generate creative text, answer reading-comprehension questions, and handle a range of other zero-shot tasks. It performs very similarly to similarly sized OpenAI GPT-3 variants on various zero-shot downstream tasks and can even outperform them on code generation. Like LLaMA, it is designed to run on less computing power and to adapt to many different use cases.
- MPT: a series of open-source, commercially usable large language models developed by MosaicML. MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. It is part of the MosaicPretrainedTransformer (MPT) family, which uses a modified transformer architecture optimized for efficient training and inference, including performance-optimized layer implementations and the removal of context-length limits by replacing positional embeddings with Attention with Linear Biases (ALiBi). Thanks to ALiBi, MPT-7B can handle extremely long inputs (up to 84k tokens vs. 2k-4k for other open-source models), and it has several fine-tuned variants for different tasks, such as story writing, instruction following, and dialogue generation.
You can download various versions of these model families directly within the GPT4All app:

Image 1: Download section in GPT4All

Image 2: Download path

In my case, I downloaded GPT-J with 6 billion parameters. You can select the model you want to use from the menu at the top of the application. Once you have selected a model, you can use it via the familiar ChatGPT-style UI, with the difference that it is now running on your local PC:

Image 3: GPT4All response

As you can see, the user experience is almost identical to ChatGPT, except that we are running it locally and on a different underlying LLM.

Chatting with your own data

Another great thing about GPT4All is its integration with your local documents via plugins (currently in beta). To set this up, go to Settings (in the upper bar menu) and select the LocalDocs plugin. There, you can browse to the folder you want to connect and then "attach" it to the model's knowledge base via the database icon in the upper right. In this example, I used the SQL licensing documentation in PDF format.

Image 4: SQL documentation

The model now answers taking into consideration also (but not only) the attached documentation, which is quoted whenever an answer is based on it:

Image 5: Automatically generated computer description

Image 6: Computer description with medium confidence

The technique used to store and index the knowledge in our documents is called Retrieval-Augmented Generation (RAG), a language-generation approach that combines two kinds of memory:

- Pre-trained parametric memory: the knowledge stored in the model's parameters, derived from the training dataset.
- Non-parametric memory: the knowledge derived from the attached documents, held in a vector database where the documents' embeddings are stored.

Finally, the LocalDocs plugin supports a variety of data formats, including txt, docx, pdf, html, and xlsx. For a comprehensive list of supported formats, see https://docs.gpt4all.io/gpt4all_chat.html#localdocs-capabilities.

Using GPT4All with the API

In addition to the Desktop app, GPT4All offers two other modes of consumption:

Server mode: once server mode is enabled in the settings of the Desktop app, you can call the GPT4All API at localhost:4891 by embedding the following code in your app:

import openai

# Point the OpenAI client at the local GPT4All server instead of the cloud.
openai.api_base = "http://localhost:4891/v1"
openai.api_key = "not needed for a local LLM"

prompt = "What is AI?"

# Pick whichever model you have loaded in the Desktop app.
# model = "gpt-3.5-turbo"
# model = "mpt-7b-chat"
model = "gpt4all-j-v1.3-groovy"

# Make the API request
response = openai.Completion.create(
    model=model,
    prompt=prompt,
    max_tokens=50,
    temperature=0.28,
    top_p=0.95,
    n=1,
    echo=True,
    stream=False,
)

Python API: GPT4All also comes with a lightweight SDK, which you can install via pip install gpt4all. A sample notebook showing how to use the SDK is available here: https://colab.research.google.com/drive/1QRFHV5lj1Kb7_tGZZGZ-E6BfX6izpeMI?usp=sharing
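For a quick start outside the notebook, here is a minimal sketch of the SDK in use. Method names have shifted between SDK releases and the model filename below is an assumption (any file listed in the app's download section should work), so treat this as illustrative rather than exact:

from gpt4all import GPT4All

# The model filename is an assumption -- substitute any model from the
# GPT4All download list; the SDK fetches it on first use if not cached.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# Generate a short completion locally; no API key or internet connection
# is required once the model file has been downloaded.
response = model.generate("What is AI?", max_tokens=50)
print(response)

Because everything runs in-process on the CPU, the same script keeps working with no connectivity at all, which is the main appeal over cloud-hosted APIs.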
Conclusions

Running LLMs on local hardware opens up a new spectrum of possibilities, especially when we think about disconnected scenarios. Plus, being able to chat with local documents through an easy-to-use interface adds custom non-parametric memory to the model, in such a way that we can already use it as a sort of copilot. Even though this ecosystem is still in an early phase, it is paving the way for interesting new scenarios.

References

https://docs.gpt4all.io/
GPT4All Chat UI - GPT4All Documentation
[2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (arxiv.org)

Author Bio

Valentina Alto graduated in 2021 in Data Science. Since 2020 she has been working at Microsoft as an Azure Solution Specialist, and since 2022 she has focused on Data & AI workloads within the manufacturing and pharmaceutical industries. She works closely with system integrators on customer projects to deploy cloud architectures with a focus on the data lakehouse and DWH, data integration and engineering, IoT and real-time analytics, Azure Machine Learning, Azure Cognitive Services (including Azure OpenAI Service), and Power BI for dashboarding. She holds a BSc in Finance and an MSc in Data Science from Bocconi University, Milan, Italy. Since her academic journey, she has been writing tech articles about statistics, machine learning, deep learning, and AI in various publications, and she has written a book about the fundamentals of machine learning with Python.

LinkedIn  Medium

The Transformative Potential of Google Bard in Healthcare

Julian Melanson
20 Jun 2023
6 min read
The integration of artificial intelligence and cloud computing in healthcare has transformed many aspects of patient care. Telemedicine, in particular, has emerged as a valuable resource for remote care, and Google Bard is at the forefront of making telemedicine more accessible and efficient. With its comprehensive suite of services, secure connections, and advanced AI capabilities, Google Bard is transforming healthcare delivery. This article explores the applications of AI in healthcare, the benefits of Google Bard in expanding telemedicine, its potential to reduce costs and improve access to care, and its impact on clinical decision-making and healthcare quality.

AI in Healthcare

Artificial intelligence has made significant strides in healthcare. It has proven instrumental in disease diagnosis, treatment protocol development, new drug discovery, personalized medicine, and healthcare analytics. AI algorithms can analyze vast amounts of medical data, detect patterns, and identify potential risks or effective treatments that may go unnoticed by human experts. This technology holds immense promise for improving patient outcomes and enhancing healthcare delivery.

Understanding Google Bard

Google's AI chat service, Google Bard, now employs its latest large language model, PaLM 2, launched at Google I/O 2023. As a progression from the original PaLM released in April 2022, it elevates Bard's efficacy and performance. Initially, Bard ran on a scaled-down version of LaMDA to reach a broad user base with minimal computational needs. Leveraging Google's extensive database, Bard aims to provide precise text responses to a wide variety of inquiries and is poised to become a dominant player in the AI chatbot arena. Despite launching later than ChatGPT, Google's broader data access and Bard's lower computational requirements potentially position it as a more efficient and user-friendly contender.

Expanding Telemedicine with Google Bard

Google Bard can play a pivotal role in expanding telemedicine and remote care. Its secure connections enable doctors to diagnose and treat patients without being physically present. It also provides easy access to patient records and medical history, and facilitates seamless communication through appointment scheduling, messaging, and the sharing of medical images. These features empower doctors to provide better-informed care and enhance patient engagement.

The Impact on Healthcare Efficiency and Patient Outcomes

Google Bard's AI and machine learning capabilities can elevate healthcare efficiency and patient outcomes. The platform quickly and accurately analyzes patient records, identifies patterns and trends, and aids medical professionals in developing effective treatment plans. For example, here is a prompt based on a sample HPI (history of present illness) text:

Image 1: Diagnosis from this clinical HPI

Here is Google Bard's response:

Image 2: Main problems of the clinical HPI

Google Bard effectively identified key diagnoses and the primary issue from the HPI sample text.
Beyond simply listing diagnoses, it provided comprehensive details on the patient's primary concerns, helping healthcare professionals grasp the depth and complexity of the condition. When prompted further, Google Bard also suggested a treatment strategy:

Image 3: Suggesting a treatment plan

This can ultimately assist medical practitioners in developing fitting interventions. Google Bard can also serve as a safety net by detecting potential medical errors and alerting healthcare providers. Its user-friendly interface expedites access to patient records, enabling faster and more effective care delivery, and it grants medical professionals access to the latest medical research and clinical trials, ensuring they stay current with advancements in healthcare. Ultimately, Google Bard's secure platform and powerful analytics tools contribute to better patient outcomes and informed decision-making.

Reducing Healthcare Costs and Improving Access to Care

One key advantage of Google Bard lies in its potential to reduce healthcare costs and improve access to care. Its AI-based technology identifies cost-efficient treatment options, optimizes resource allocation, and enhances care coordination. By reducing wait times and unnecessary visits, Google Bard minimizes missed appointments, repeat visits, and missed diagnoses, resulting in lower costs and improved access to care. Additionally, the platform's comprehensive view of patient health, derived from aggregated data, enables more informed treatment decisions and fewer misdiagnoses. This integrated approach ensures better care outcomes while controlling costs.

Supporting Clinical Decision-Making and Healthcare Quality

Google Bard's launch marks a milestone in the use of AI to improve healthcare. The platform provides healthcare providers with a suite of computational tools, empowering them to make more informed decisions and enhance the quality of care. Its ability to quickly analyze vast amounts of patient data enables the identification of risk factors, potential diagnoses, and treatment recommendations. Moreover, Google Bard supports collaboration and the comparison of treatment options among healthcare teams. By leveraging this technology, healthcare professionals can provide personalized care plans, improving outcomes and reducing medical errors. The platform's data-driven insights and analytics also support research and development, allowing researchers to identify trends and patterns and develop innovative treatments.

Google Bard, with its AI-driven capabilities and secure cloud-based platform, holds immense potential to transform healthcare delivery. By enhancing efficiency, accessibility, and patient outcomes, it is poised to make a significant impact on the healthcare industry. The integration of AI, machine learning, and cloud computing in telemedicine enables more accurate diagnoses, faster treatments, and improved care coordination. Its ability to reduce healthcare costs and improve access to care reinforces its value as a transformative tool. As the platform continues to evolve, it promises to shape the future of healthcare, empowering medical professionals and benefiting patients worldwide.

Summary

Google Bard is reshaping healthcare with its practical applications. It enhances healthcare efficiency and improves patient outcomes through streamlined data management and analysis.
By reducing administrative burdens and optimizing workflows, it lowers healthcare costs and improves access to care. It also supports clinical decision-making by providing real-time insights, aiding healthcare professionals in delivering higher-quality care. Overall, Google Bard's transformative technology is benefiting patients, providers, and the industry as a whole.

Author Bio

Julian Melanson is one of the founders of Leap Year Learning. Leap Year Learning is a cutting-edge online school that specializes in teaching creative disciplines and integrating AI tools. We believe that creativity and AI are the keys to a successful future, and our courses help equip students with the skills they need to succeed in a continuously evolving world. Our seasoned instructors bring real-world experience to the virtual classroom, and our interactive lessons help students reinforce their learning with hands-on activities.

No matter your background, from beginners to experts, hobbyists to professionals, Leap Year Learning is here to bring in the future of creativity, productivity, and learning!

Use Bard to Implement Data Science Projects

Darren Broemmer
20 Jun 2023
7 min read
Bard is a large language model (LLM) from Google AI, trained on a massive dataset of text and code. Bard can generate Python code for data science projects, which can be extremely helpful for data scientists who want to save time on coding or who are unfamiliar with Python. It also empowers those of us who are not full-time data scientists but want to leverage machine learning (ML) technologies.

The first step is to define the problem you are trying to solve. In this article, we will use Bard to create a binary text classifier that takes a news story as input and classifies it as either fake or real. Given a problem to solve, you can brainstorm solutions yourself if you are familiar with machine learning; alternatively, you can ask Bard for help finding an algorithm that meets your requirements. The classification of text documents often uses term-frequency techniques, and we don't need to know more than that at this point, as Bard can help us with the details of the implementation.

The overall design of your project could also involve feature engineering and visualization. As in most software engineering efforts, you will likely need to iterate on the design and implementation, but Bard can help you do this much faster.

Reading and parsing the training data

All of the code from this article can be found on GitHub. Bard will guide you, but to run this code you will need to install a few packages using the following commands:

python -m pip install pandas
python -m pip install scikit-learn

To train our model, we can use the news.csv dataset within this project, originally sourced from a Data Flair training exercise. It contains the title and text of almost 8,000 news articles labeled as REAL or FAKE.

To get started, Bard can help us write code to parse and read this data file. Pandas is a popular open-source data analysis and manipulation library for Python, and we can prompt Bard to use it to read the file.

Image 1: Using Pandas to read the file
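The screenshot above holds the code Bard produced; a minimal equivalent, assuming news.csv sits next to the script, is only a few lines:

import pandas as pd

# Load the labeled dataset; the unnamed first column is an article id.
df = pd.read_csv("news.csv")

# Print the column layout and the first few rows.
print(df.head())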
Running the code shows the format of the CSV file and the first few data rows, just as Bard described: an unnamed article id in the first column, followed by the title, text, and classification label.

broemmerd$ python test.py
  Unnamed: 0                                              title                                               text label
0        8476                       You Can Smell Hillary's Fear  Daniel Greenfield, a Shillman Journalism Fello...  FAKE
1       10294  Watch The Exact Moment Paul Ryan Committed Pol...  Google Pinterest Digg Linkedin Reddit Stumbleu...  FAKE
2        3608        Kerry to go to Paris in gesture of sympathy  U.S. Secretary of State John F. Kerry said Mon...  REAL
3       10142  Bernie supporters on Twitter erupt in anger ag...  — Kaydee King (@KaydeeKing) November 9, 2016 T...  FAKE
4         875   The Battle of New York: Why This Primary Matters  It's primary day in New York and front-runners...  REAL

Training our machine learning model

Now that we can read and understand our training data, we can prompt Bard to write code to train an ML model on it. Our prompt is detailed about the input columns used for training, but it names only a general ML technique we believe applies to the problem:

The text column in news.csv contains a string with the content from a news article. The label column contains a classifier label of either REAL or FAKE. Modify the Python code to train a machine-learning model using term frequency based on these two columns of data.

Image 2: Bard for training ML models

We can now train our model. The output of this code is shown below:

broemmerd$ python test.py
Accuracy: 0.9521704814522494

We have our model working. Now we just need a function that applies it to a given input text, so we use the following prompt:

Modify this code to include a function called classify_news that takes a text string as input and returns the classifier, either REAL or FAKE.

Bard generates the code for this function. Note that it also refactored the previous code to use the TfidfVectorizer in order to support it.

Image 3: Including classify_news function
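The full listing lives in the screenshots and in the GitHub repo. The sketch below is a plausible reconstruction: the TfidfVectorizer follows from Bard's refactoring noted above, while the choice of PassiveAggressiveClassifier is an assumption (it is a common pairing with TF-IDF for this dataset and matches the reported accuracy range):

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("news.csv")

# Hold out 20% of the articles to measure accuracy.
x_train, x_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=7
)

# Term-frequency features: TF-IDF down-weights words common to all articles.
vectorizer = TfidfVectorizer(stop_words="english", max_df=0.7)
tfidf_train = vectorizer.fit_transform(x_train)
tfidf_test = vectorizer.transform(x_test)

# Fit a linear classifier on the TF-IDF features.
model = PassiveAggressiveClassifier(max_iter=50)
model.fit(tfidf_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(tfidf_test)))

def classify_news(text):
    """Return the predicted label, either REAL or FAKE, for one article."""
    return model.predict(vectorizer.transform([text]))[0]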
Testing the classifier

To test the classifier with a fake story, we will use an Onion article entitled "Chill Juror Good With Whatever Group Wants To Do For Verdict." The Onion is a satirical news website known for its humorous, fictional content; its articles are intentionally crafted to look like genuine news stories but contain absurd elements for comedic effect.

Our real news story is a USA Today article entitled "House blocks push to censure Adam Schiff for alleging collusion between Donald Trump and Russia."

Here is the code that reads the two articles and uses our new function to classify each one, followed by the results:

with open("article_the_onion.txt", "r") as f:
    article_text = f.read()
    print("The Onion article: " + classify_news(article_text))

with open("article_usa_today.txt", "r") as f:
    article_text = f.read()
    print("USA Today article: " + classify_news(article_text))

broemmerd$ python news.py
Accuracy: 0.9521704814522494
The Onion article: FAKE
USA Today article: REAL

Our classifier worked well on these two test cases. Bard can be a helpful tool for data scientists who want to save time on coding or who are not familiar with Python. By following a process similar to the one outlined above, you can use Bard to generate Python code for your own data science projects.

Additional guidance on coding with Bard

When using Bard to generate Python code for data science projects, use clear and concise prompts. Provide the necessary detail about the inputs and desired outputs, and use specific examples where possible, as this helps Bard generate more accurate code. Be patient and expect to go through a few iterations before you get the desired result. Test the generated code at each step of the process: it is much harder to pinpoint the cause of errors if you wait until the end to start testing.

Once you are familiar with the process, you can use Bard to generate Python code that helps you solve data science problems more quickly and easily.

Author Bio

Darren Broemmer is an author and software engineer with extensive experience in Big Tech and Fortune 500 companies. He writes on topics at the intersection of technology, science, and innovation.

LinkedIn