AI_Distilled #25: OpenAI’s GPT Store and GPT-4 Turbo, xAI’s Grok, Stability AI’s 3D Model Generator, Microsoft’s Phi 1.5, Gen AI-Powered Vector Search Apps

  • 12 min read
  • 10 Nov 2023


Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights. Don't miss out – sign up today!

The AI Product Manager's Handbook ($35.99 Value) FREE for a limited time! 


Gain expertise as an AI product manager to effectively oversee the design, development and deployment of AI products. Master the skills needed to bring tangible value to your organization through successful AI implementation.

Seize this exclusive opportunity and grab your copy now before it slips away on November 16th! 

 

👋 Hello,

Step into another edition of AI_Distilled, brimming with updates in AI/ML, LLMs, NLP, GPT, and Gen AI. Our aim is to help you enhance your AI skills and stay abreast of the ever-evolving trends in this domain. 

Let’s get started on our news and analysis with an industry expert’s opinion. 

“Unfortunately, we have biases that live in our data, and if we don’t acknowledge that and if we don’t take specific actions to address it then we’re just going to continue to perpetuate them or even make them worse.” - Kathy Baxter, Responsible AI Architect, Salesforce

Baxter makes an important point: data underlies ML models, and errors in that data ripple outward in a domino effect with potentially drastic consequences. Equally important is how AI handles data privacy, especially now that apps like ChatGPT have crossed 100 million weekly users. Apple’s CEO recently hinted at major investments in responsible AI, which will likely bring major AI upgrades to smart handheld devices in 2024. 

In this issue, we’ll talk about OpenAI unveiling major upgrades and features including the GPT-4 Turbo model and DALL-E 3 API, Microsoft’s new breakthrough with a smaller AI model, Elon Musk unveiling xAI’s "Grok" to compete with GPT, and OpenAI launching the GPT Store for user-created custom AI models. 

We’ve also got your fresh dose of AI secret knowledge and tutorials on unlocking zero-shot adaptive prompting for LLMs, creating a Python chat web app with OpenAI's API and Reflex, and 9 open source tools to boost your AI app. 

📥 Feedback on the Weekly Edition

What do you think of this issue and our newsletter?

Please consider taking the short survey below to share your thoughts; on completion, you will get a free PDF of The Applied Artificial Intelligence Workshop eBook. 

Complete the Survey. Get a Packt eBook for Free!

Writer’s Credit: Special shout-out to Vidhu Jain for their valuable contribution to this week’s newsletter content!  

Cheers,  

Merlyn Shelley  

Editor-in-Chief, Packt 

 

Ready to level up your coding game?  

🚀 Dive into the Software Supply Chain Security Survey and let's talk vulnerabilities, security practices, and all things code!  

🤓 Share your insights and stand a chance to snag some epic prizes, including the coveted MX Master 3S, Raspberry Pi 4 Model B 4GB, $5 Udemy gift credits, and more!  

🌟 Your code-savvy opinions could be your ticket to tech greatness.  

Don't miss out—join the conversation now! 👩‍💻 

 Interested? Tell us what you think!

 

SignUp | Advertise | Archives

⚡ TechWave: AI/GPT News & Analysis

🔹 OpenAI Unveils Major Upgrades and Features Including GPT-4 Turbo Model, DALL-E 3 API, Crosses 100 million Weekly Users: At its DevDay event, OpenAI announced significant new capabilities and lower pricing for its AI platform. This includes a more advanced GPT-4 Turbo model with 128K context size and multimodal abilities. OpenAI also released new developer products like the Assistants API and DALL-E 3 integration. Additional updates include upgraded models, customization options, expanded rate limits, and the Copyright Shield protection. Together these represent major progress in features, accessibility and affordability. ChatGPT also achieved 100 million weekly users and over two million developers, marking a significant milestone in its growth.  
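
For developers, the new model is reachable through the standard Chat Completions endpoint. Below is a minimal sketch using the OpenAI Python SDK (v1+), assuming an OPENAI_API_KEY environment variable; the model name is the preview identifier announced at DevDay.

```python
# Minimal sketch: calling the GPT-4 Turbo preview model with the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview with the 128K context window
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in two sentences."},
    ],
)
print(response.choices[0].message.content)
```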

🔹 Stability AI Launches AI-Powered 3D Model Generator: Stability AI debuts Stable 3D, empowering non-experts to craft 3D models through simple descriptions or image uploads. The tool generates editable .obj files, marking the company's entry into the AI-driven 3D modeling landscape. Questions about training data origin and prior copyright controversies arise, highlighting a strategic move amid financial struggles. 

🔹 Apple CEO Hints at Generative AI Plans: Apple CEO Tim Cook hinted at significant investments in generative AI during the recent earnings call. While specifics were not disclosed, Cook emphasized responsible deployment over time. Apple's existing AI in iOS and Apple Watch showcases its commitment, with rumors suggesting major AI updates in 2024, solidifying Apple's leadership in the space. 


🔹 Microsoft Unveils Breakthrough with Smaller AI Model: Microsoft researchers revealed a major new capability added to their small AI model Phi 1.5. It can now interpret images, a skill previously limited to much larger models like OpenAI's ChatGPT. Phi 1.5 has only 1.3 billion parameters compared to GPT-4's reported 1.7 trillion, making it dramatically more efficient. This shows that less expensive AI models can mimic much larger ones. Smaller models need less computing power, saving costs and emissions. Microsoft sees small and large models as complementary, optimizing tasks between them. The breakthrough signals wider access to advanced AI as smaller models spread.
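
As a rough illustration of how lightweight this model is to work with, here is a hedged sketch of text generation with the publicly released microsoft/phi-1_5 checkpoint via Hugging Face transformers; the image-interpretation capability described above is a separate research update and not part of this text-only checkpoint.

```python
# Minimal sketch: text generation with the public Phi 1.5 checkpoint on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code was required for the early releases of this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5", torch_dtype=torch.float32, trust_remote_code=True
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```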

🔹 OpenAI Unveils GPT Store for User-Created Custom AI Models: OpenAI introduces GPTs, allowing users to build custom versions of ChatGPT for specific purposes with no coding experience required, opening up the AI marketplace. These GPTs can range from simple tasks like recipe assistance to complex ones such as coding or answering specific questions. The GPT Store will soon host these creations, enabling users to publish and potentially monetize them, mirroring the App Store model's success. OpenAI aims to pay creators based on their GPTs' usage, encouraging innovation. However, this move may create challenges in dealing with industry giants like Apple and Microsoft, which have their own app stores and platforms. 

🔹 Elon Musk Drops xAI's Game-Changer: Meet Grok, the LLM with Real-Time Data, Efficiency, and a Dash of Humor! Named after the slang term for "understanding," Grok is intended to compete with AI models like OpenAI's GPT. It's currently available to a limited number of users in the United States through a waitlist on xAI's website. Grok is designed with impressive efficiency, utilizing half the training resources of comparable models. It brings humor and wit to AI interactions, aligning with Musk's goal of creating a "maximum truth-seeking AI." 

****************************************

🔮 Expert Insights from Packt Community 

Machine Learning with PyTorch and Scikit-Learn - By Sebastian Raschka, Yuxi (Hayden) Liu, Vahid Mirjalili 

Solving interactive problems with reinforcement learning 

Another type of machine learning is reinforcement learning. In reinforcement learning, the goal is to develop a system (agent) that improves its performance based on interactions with the environment. Since the information about the current state of the environment typically also includes a so-called reward signal, we can think of reinforcement learning as a field related to supervised learning. However, in reinforcement learning, this feedback is not the correct ground truth label or value, but a measure of how good the action was, as measured by a reward function. Through its interaction with the environment, an agent can then use reinforcement learning to learn a series of actions that maximizes this reward via an exploratory trial-and-error approach or deliberative planning. 
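
To make the trial-and-error loop concrete, here is a toy sketch (not from the book) of tabular Q-learning on a made-up one-dimensional corridor; every environment detail and hyperparameter is purely illustrative.

```python
# Toy sketch of the reinforcement learning loop: an agent improves its behaviour purely
# from reward feedback via trial and error (tabular Q-learning, illustrative setup).
import random

n_states, goal = 5, 4          # states 0..4; reward only when reaching state 4
actions = [-1, +1]             # move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != goal:
        # epsilon-greedy: mostly exploit the current value estimates, sometimes explore
        a = random.randrange(2) if random.random() < epsilon else max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == goal else 0.0  # reward signal from the environment
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Preferred action per state:", [max(range(2), key=lambda i: Q[s][i]) for s in range(n_states)])
```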

Discovering hidden structures with unsupervised learning 

In supervised learning, we know the right answer (the label or target variable) beforehand when we train a model, and in reinforcement learning, we define a measure of reward for particular actions carried out by the agent. In unsupervised learning, however, we are dealing with unlabeled data or data of an unknown structure. Using unsupervised learning techniques, we are able to explore the structure of our data to extract meaningful information without the guidance of a known outcome variable or reward function. 

Finding subgroups with clustering 

Clustering is an exploratory data analysis or pattern discovery technique that allows us to organize a pile of information into meaningful subgroups (clusters) without having any prior knowledge of their group memberships. Each cluster that arises during the analysis defines a group of objects that share a certain degree of similarity but are more dissimilar to objects in other clusters, which is why clustering is also sometimes called unsupervised classification. Clustering is a great technique for structuring information and deriving meaningful relationships from data. For example, it allows marketers to discover customer groups based on their interests, in order to develop distinct marketing programs. 
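
As a quick illustration in the book's own toolkit, here is a minimal scikit-learn sketch that clusters synthetic, unlabeled points with k-means; the data and parameters are illustrative, not taken from the book.

```python
# Minimal scikit-learn sketch of clustering: group unlabeled points into subgroups
# without any ground-truth labels (synthetic data, illustrative parameters).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # unlabeled 2-D points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster membership discovered for the first 10 points
print(kmeans.cluster_centers_)    # the three cluster centers
```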

Dimensionality reduction for data compression 

Another subfield of unsupervised learning is dimensionality reduction. Often, we are working with data of high dimensionality—each observation comes with a high number of measurements—that can present a challenge for limited storage space and the computational performance of machine learning algorithms. Unsupervised dimensionality reduction is a commonly used approach in feature preprocessing to remove noise from data, which can degrade the predictive performance of certain algorithms. Dimensionality reduction compresses the data onto a lower-dimensional subspace while retaining most of the relevant information. 
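
A minimal scikit-learn sketch (again illustrative, not from the book) of projecting the 64-dimensional digits dataset onto a lower-dimensional subspace with PCA:

```python
# Minimal scikit-learn sketch of unsupervised dimensionality reduction with PCA.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)    # 1797 samples x 64 features
pca = PCA(n_components=0.95)           # keep enough components to explain 95% of the variance
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)  # the data is compressed onto far fewer dimensions
print("Variance retained:", pca.explained_variance_ratio_.sum())
```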

This content is from the book Machine Learning with PyTorch and Scikit-Learn written by Sebastian Raschka, Yuxi (Hayden) Liu, and Vahid Mirjalili (Feb 2022). Start reading a free chapter or access the entire Packt digital library free for 7 days by signing up now. To learn more, click on the button below. 

Read Chapter 1, unlocked here... 

 

🌟 Secret Knowledge: AI/LLM Resources

💡 Enhancing User Experiences with AI-PWAs in Web Development: This article explores the integration of AI and Progressive Web Applications (PWAs) to revolutionize website development. Learn how AI chatbots and generative AI, such as OpenAI's GPT-3, can personalize content and streamline coding. Discover the benefits of combining AI technology with PWAs, including improved user engagement, streamlined content generation, and enhanced scalability.  

💡 Boosting Your AI App with 9 Open Source Tools: From LLM queries to chatbots and AI app quality, explore projects like LLMonitor for cost analytics and user tracking, Guidance for complex agent flows, LiteLLM for easy integration of various LLM APIs, Zep for chat history management, LangChain for building powerful AI apps, DeepEval for LLM application testing, pgVector for embedding storage and similarity search, promptfoo for testing prompts and models, and Model Fusion, a TypeScript library for AI applications. These tools can help you optimize and streamline your AI projects, improving user experiences and productivity. 

💡 Creating Gen AI-Powered Vector Search Applications with Vertex AI Search: Learn how to harness the power of generative AI and vector embeddings to build user experiences and applications. Vector embeddings are a way to represent various types of data in a semantic space, enabling developers to create applications such as finding relevant information in documents, personalized product recommendations, and more. The article introduces vector search, a service within the Vertex AI Search platform, which helps developers find relevant embeddings quickly. It offers scalability, adaptability to changing data, security features, and easy integration with other AI tools. 
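
The core idea is framework-agnostic: embed documents and queries into the same semantic space, then rank by similarity. Here is a generic NumPy sketch; the embed() function is a placeholder for whichever embedding model or API you use (the article itself relies on Vertex AI Search for the retrieval step).

```python
# Generic sketch of vector search: embed documents and a query, then rank by cosine similarity.
import numpy as np

def embed(texts):
    # Placeholder: return one vector per text from your embedding model of choice.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

docs = ["How to return a product", "Shipping times by region", "Warranty policy"]
doc_vecs = embed(docs)
query_vec = embed(["when will my package arrive"])[0]

# Cosine similarity between the query and every document vector.
scores = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
best = int(np.argmax(scores))
print("Most relevant document:", docs[best])
```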

 

🔛 Masterclass: AI/LLM Tutorials

🔑 Integrating Amazon MSK with CockroachDB for Real-Time Data Streams: This guide offers a comprehensive step-by-step process for integrating Amazon Managed Streaming for Apache Kafka (Amazon MSK) with CockroachDB, creating a robust and scalable pipeline for real-time data processing. The integration enables various use cases, such as real-time analytics, event-driven microservices, and audit logging, enhancing businesses' ability to provide immediate, personalized experiences for customers. 
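
For a sense of what the consumer side of such a pipeline can look like, here is a hedged sketch that reads from a Kafka topic (Amazon MSK speaks standard Kafka) and inserts rows into CockroachDB over its PostgreSQL wire protocol; the topic, table, and connection details are illustrative, and MSK-specific authentication/TLS setup is covered in the guide.

```python
# Minimal sketch of the consumer side: Kafka topic -> CockroachDB (PostgreSQL wire protocol).
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                        # hypothetical topic name
    bootstrap_servers=["broker1.example.com:9092"],  # your MSK bootstrap brokers
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = psycopg2.connect("postgresql://app@cockroach.example.com:26257/defaultdb")
conn.autocommit = True

for message in consumer:
    event = message.value
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (id, amount) VALUES (%s, %s)",  # hypothetical table
            (event["id"], event["amount"]),
        )
```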

🔑 Understanding GPU Workload Monitoring on Amazon EKS with AWS Managed Services: As the demand for GPU-accelerated ML workloads grows, this post offers valuable insights into monitoring GPU utilization on Amazon Elastic Kubernetes Service (EKS) using AWS managed open-source services. Amazon EC2 instances with NVIDIA GPUs are crucial for efficient ML training. The article explains how GPU metrics can provide essential information for optimizing resource allocation, identifying anomalies, and enhancing system performance.  

🔑 Unlocking Zero-Shot Adaptive Prompting for LLMs: This study explores LLMs, emphasizing their prowess in solving problems in both few-shot and zero-shot scenarios. It introduces "Consistency-Based Self-Adaptive Prompting (COSP)" and "Universal Self-Adaptive Prompting (USP)" to generate robust prompts for diverse tasks in natural language understanding and generation. 

🔑 Exploring Interactive AI Applications with OpenAI's GPT Assistants and Streamlit: This post unveils a cutting-edge Streamlit app integrating OpenAI's GPT models for interactive Wardley Mapping instruction. It details development, emphasizing GPT-4-1106-preview, covering setup, session management, UI configuration, user input, and real-time content generation, showcasing Streamlit's synergy with OpenAI for dynamic applications. 
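
For a simpler starting point than the full Wardley Mapping app, here is a minimal sketch of a Streamlit chat loop backed by the Chat Completions API (not the Assistants flow used in the post); it assumes streamlit >= 1.24 and an OPENAI_API_KEY environment variable.

```python
# Minimal Streamlit + OpenAI chat loop. Run with: streamlit run app.py
import streamlit as st
from openai import OpenAI

client = OpenAI()
st.title("GPT chat demo")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    reply = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=st.session_state.messages,
    ).choices[0].message.content

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```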
 

🔑 Utilizing GPT-4 Code Interpreter API for CSV Analysis: A Step-by-Step Guide: Learn to analyze CSV files with OpenAI's GPT-4 Code Interpreter API. The guide covers step-by-step processes, from uploading files via Postman to creating an Assistant, forming a thread, and executing a run. Gain insights for efficient CSV analysis, unlocking data-driven insights and automation power. 
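
The flow can be sketched with the OpenAI Python SDK's then-beta Assistants API roughly as follows; the file name and prompt are illustrative, and since the API was in beta at the time, exact fields may have changed.

```python
# Sketch: upload a CSV, create a code_interpreter assistant, open a thread, and run it.
import time

from openai import OpenAI

client = OpenAI()

csv_file = client.files.create(file=open("sales.csv", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="CSV analyst",
    instructions="Analyze the attached CSV and summarize key trends.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
    file_ids=[csv_file.id],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What are the top three products by revenue?"
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "expired"):
    time.sleep(2)  # poll until the run finishes
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, ":", message.content[0].text.value)
```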

🔑 Creating a Python Chat Web App with OpenAI's API and Reflex: In this tutorial, you'll learn how to develop a chat web application in pure Python, utilizing OpenAI's API for intelligent responses. The guide explains how to use the Reflex open-source framework to build both the backend and frontend entirely in Python. The tutorial also covers styling and handling user input, making it easy for those without JavaScript experience to create a chat application with an AI-driven chatbot. By the end, you'll have a functional AI chatbot web app built in Python. 

 

🚀 HackHub: Trending AI Tools

📐 tigerlab-ai/tiger: Build customized AI models and language applications, bridging the gap between general LLMs and domain-specific knowledge. 

📐 langchain-ai/langchain/tree/master/templates: Reference architectures for various LLM use cases, enabling developers to quickly build production-ready LLM applications. 

📐 ggerganov/whisper.cpp/tree/master/examples/talk-llama: Uses the SDL2 library to capture audio from the microphone and combines Whisper and LLaMA models for real-time interactions. 

📐 explosion/thinc: Lightweight deep learning library for model composition, offering type-checked, functional-programming APIs, and support for PyTorch, TensorFlow, and MXNet.