This book covers the fundamentals of ChatGPT, its applications in conversation design, and practical uses in various contexts. The book delves into LangChain, a framework for working with language models, teaching readers about prompt engineering, chatbot memory, vector stores, and response validation. It also explores the creation of ChatGPT-powered chatbots that can interact with custom data sources, and guides readers through building chatbot user interfaces.
Welcome to AI_Distilled. Today, we’ll talk about:
Techwave:
Google launches new Gemini models
Artifacts are now generally available | Anthropic
Salesforce introduces two new AI sales agents
System Prompts Release Notes for Claude.ai and Mobile Apps
Awesome AI:
LM Studio - Discover, download, and run local LLMs
Painless Data Extraction and Web Automation
Fleak AI Serverless API Builder
Listen to Actual Clients' Feedback
Theysaid - Conversational AI Surveys
Masterclass:
Unlocking 7B+ language models in your browser: A deep dive with Google AI Edge's MediaPipe
Deploying Attention-Based Vision Transformers to Apple Neural Engine
Mistral-NeMo: 4.1x Smaller with Quantized Minitron
Connect the Amazon Q Business generative AI coding companion to your GitHub repositories
Augmenting recommendation systems with LLMs
HackHub:
High-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
Multi-Platform Package Manager for Stable Diffusion
Sharpen your low-resolution pictures with the power of AI upscaling
Transform your database into your AI platform
Large language model series developed by the Qwen team at Alibaba Cloud.
Cheers!
Shreyans Singh
Editor-in-Chief, Packt
Google launches new Gemini models
Google has announced updates to its experimental Gemini models, including a smaller, improved variant called Gemini 1.5 Flash-8B and a more powerful version named Gemini 1.5 Pro. These models show significant performance gains in areas like coding and handling complex prompts. The updates aim to gather feedback from developers before a full-scale release, with the models available for free testing via Google AI Studio and the Gemini API. While some praise the rapid improvements, others criticize the models for still struggling with longer tasks and coding reliability.
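For readers who want to try the experimental models, here is a minimal sketch using the Gemini API's Python SDK with a free key from Google AI Studio. The model identifier is an assumption; check AI Studio for the exact experimental names currently exposed.

```python
# Minimal sketch: calling an experimental Gemini model through the Gemini API.
# The model ID below is an assumption -- check Google AI Studio for the names
# available to your account.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # free-tier key from Google AI Studio

model = genai.GenerativeModel("gemini-1.5-flash-8b-exp-0827")  # assumed experimental ID
response = model.generate_content("Summarize the trade-offs of an 8B vs. a 70B model.")
print(response.text)
```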
AI startup Cursor, founded by four MIT friends, has gained popularity for its AI-powered code completion tools, now used by engineers at top AI companies like OpenAI and Midjourney. Recently, Cursor raised $60 million in a Series A funding round, bringing its valuation to $400 million. The software, built on large language models like GPT-4, helps developers automate tedious coding tasks, making it easier to fix bugs and build prototypes. With over 30,000 users, Cursor aims to revolutionize coding by allowing engineers to focus more on creativity and complex problem-solving.
Artifacts are now generally available | Anthropic
Anthropic has made Claude's Artifacts feature available to all users across the Free, Pro, and Team plans, including on the iOS and Android apps. Artifacts let users create, view, and iterate on work products such as code snippets, flowcharts, and interactive dashboards directly within their conversations with Claude. Since its preview launch in June, tens of millions of Artifacts have been created.
Salesforce introduces two new AI sales agents
Salesforce has introduced two new AI-powered sales agents: Einstein SDR Agent and Einstein Sales Coach Agent, both launching in October. Einstein SDR Agent autonomously manages inbound leads, answering questions, handling objections, and scheduling meetings, freeing up sales teams to focus on more complex tasks. Einstein Sales Coach Agent helps sales representatives improve their skills by simulating buyer interactions and providing feedback. These tools, built on Salesforce’s Einstein 1 Agentforce Platform, aim to enhance sales productivity and effectiveness, with companies like Accenture planning to use them to manage complex deals and scale operations.
System Prompts Release Notes for Claude.ai and Mobile Apps
Anthropic has introduced a new section in their documentation to log updates to the default system prompts used in conversations on Claude.ai and its mobile apps. These prompts guide how Claude interacts with users, providing up-to-date information and encouraging specific behaviors, like using Markdown for code snippets. The updates to these system prompts aim to improve Claude’s responses but do not affect the Anthropic API.
LM Studio - Discover, download, and run local LLMs
LM Studio 0.3.0 is a major update to the local LLM desktop application that enhances its offline capabilities with new features. Users can now chat with documents, using either full document context or "Retrieval Augmented Generation" (RAG) for longer texts. The update also introduces an OpenAI-like JSON output API, customizable UI themes, and automatic hardware detection for optimal performance.
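Because LM Studio's local server speaks the OpenAI wire format, you can point the standard openai Python client at it. A minimal sketch is below; the port and placeholder model name are assumptions based on LM Studio's defaults, so use whatever the app's local server tab shows.

```python
# Minimal sketch: querying LM Studio's local, OpenAI-compatible server with the
# standard openai client. Port and model name are assumptions from the defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio routes to whichever model is loaded
    messages=[{"role": "user", "content": "Summarize this document in three bullet points."}],
)
print(resp.choices[0].message.content)
```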
Painless Data Extraction and Web Automation (agentql.com)
AgentQL is a powerful tool for data extraction and web automation that uses AI to reliably find and interact with web elements, even as websites change. Unlike traditional methods that rely on fragile XPath or DOM selectors, AgentQL allows users to locate elements using natural language descriptions, making it easier to automate tasks like filling forms, gathering data, and conducting end-to-end testing.
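The sketch below illustrates the natural-language-selector idea using AgentQL's Python and Playwright integration. Treat the wrap/query_elements names and the query syntax as assumptions drawn from the project's docs rather than a verified recipe.

```python
# Illustrative sketch of locating elements by description instead of XPath.
# The agentql.wrap / query_elements names and the query syntax are assumptions.
import agentql
from playwright.sync_api import sync_playwright

QUERY = """
{
    search_box
    search_button
}
"""

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = agentql.wrap(browser.new_page())   # AgentQL-enabled Playwright page
    page.goto("https://duckduckgo.com")

    elements = page.query_elements(QUERY)     # elements found by natural-language description
    elements.search_box.type("serverless api builders")
    elements.search_button.click()
```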
Fleak AI Workflows. Simplified | Serverless API Builder | fleak.ai
Fleak is a low-code, serverless API builder designed for data teams to quickly and easily create, integrate, and scale AI and data workflows without managing any infrastructure. It allows users to configure and deploy workflows in minutes, seamlessly integrating with tools like large language models, vector databases, and modern storage technologies.
Listen to Actual Clients' Feedback | Seven24 AI
Seven24 helps you capture and act on user feedback with ease. Integrate their tool into your product to collect feedback via text or voice, and their AI transforms this feedback into actionable tasks. With features like sentiment analysis, you can boost positive reviews and address issues quickly.
Theysaid - Conversational AI Surveys
TheySaid offers the world’s first conversational AI survey, designed to significantly increase response rates and improve customer engagement. By integrating seamlessly with your existing tech stack, the AI tool generates personalized survey questions based on your website content and follows up with users through conversational interactions.
Unlocking 7B+ language models in your browser: A deep dive with Google AI Edge's MediaPipe
Google AI Edge's MediaPipe has developed a new system that allows large language models (LLMs) to run directly in web browsers, overcoming memory and performance limitations. By using WebAssembly and WebGPU, MediaPipe can now load and execute models like Gemma 1.1 with 7 billion parameters, which was previously unfeasible in-browser. The approach includes breaking down models into manageable parts and leveraging efficient memory usage techniques to handle the massive size of LLMs.
Deploying Attention-Based Vision Transformers to Apple Neural Engine
The concept of Vision Transformers (ViTs) was introduced to leverage transformer models, which were originally used in natural language processing, for image recognition tasks. Unlike traditional Convolutional Neural Networks (CNNs), Vision Transformers process images by dividing them into smaller patches and applying attention mechanisms. This approach can handle various computer vision tasks such as image classification and object detection more effectively.
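A minimal PyTorch sketch of that patch-and-attend idea follows; the dimensions are illustrative and not Apple's deployed configuration.

```python
# Minimal sketch of the ViT idea: split an image into patches, embed each patch,
# and run self-attention over the resulting token sequence. Sizes are illustrative.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)                          # (batch, channels, H, W)

patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)   # 16x16 patches -> 768-dim tokens
tokens = patch_embed(image).flatten(2).transpose(1, 2)        # (1, 196, 768) sequence of patch tokens

attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
out, _ = attn(tokens, tokens, tokens)                         # attention mixes information across patches
print(out.shape)                                              # torch.Size([1, 196, 768])
```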
Mistral-NeMo: 4.1x Smaller with Quantized Minitron
NVIDIA's Minitron technique makes large language models (LLMs) like Mistral-NeMo smaller and more efficient by pruning less important layers and neurons and then retraining the slimmed-down model to recover accuracy. This process reduces the models' sizes while keeping their performance high. The Minitron version of Mistral-NeMo, for instance, shrinks the model from 12 billion to 8 billion parameters. Combining Minitron with 4-bit quantization further compresses these models, allowing them to run on smaller GPUs and reducing operational costs.
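NVIDIA's post uses its own quantization tooling; as a generic illustration of running a pruned checkpoint in 4-bit, here is a hedged sketch with transformers and bitsandbytes. The Hub model ID is an assumption, so substitute whichever Minitron variant you actually want to run.

```python
# Hedged sketch: loading a pruned Minitron-style checkpoint with 4-bit quantization
# via Hugging Face transformers + bitsandbytes (not NVIDIA's own pipeline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nvidia/Mistral-NeMo-Minitron-8B-Base"  # assumed Hub ID for the 8B pruned model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

inputs = tok("Quantization lets the 8B model fit on", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```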
Connect the Amazon Q Business generative AI coding companion to your GitHub repositories
You can link Amazon Q Business, an AI-powered assistant, to your GitHub repositories using the Amazon Q GitHub (Cloud) connector. This setup allows you to use natural language queries to access information like commits, issues, and pull requests from your GitHub repositories. By integrating this tool, your development team can boost productivity, reduce context switching, and quickly retrieve information from your GitHub data through a conversational interface.
Augmenting recommendation systems with LLMs
Large language models (LLMs), like Google's PaLM, can significantly enhance recommendation systems by integrating advanced AI capabilities. By incorporating LLMs into the recommendation pipeline, you can improve features like conversational recommendations, sequential recommendations based on user activity, and rating predictions. LLMs can interactively suggest items, understand the sequence of user preferences, and predict ratings with high accuracy.
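As a sketch of the rating-prediction pattern, the snippet below simply builds a prompt from a user's history; call_llm is a hypothetical stand-in for whatever model API (PaLM, Gemini, or otherwise) your pipeline actually uses.

```python
# Illustrative sketch: prompting an LLM to predict a rating from a user's history.
# call_llm is a hypothetical placeholder -- wire it to your actual LLM provider.
def call_llm(prompt: str) -> str:
    return "5"  # stubbed response; replace with a real model call

history = [("The Martian", 5), ("Interstellar", 5), ("Twilight", 1)]
candidate = "Arrival"

prompt = (
    "A user rated these movies (1-5):\n"
    + "\n".join(f"- {title}: {stars}" for title, stars in history)
    + f"\n\nPredict, as a single integer from 1 to 5, how they would rate '{candidate}'."
)

predicted_rating = call_llm(prompt)
print(predicted_rating)
```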
Zed is a high-performance, multiplayer code editor developed by the team behind Atom and Tree-sitter. It can be installed on macOS and Linux directly or through package managers, though it’s not yet available for Windows or web platforms.
Stability Matrix is a multi-platform tool designed for managing Stable Diffusion Web UI packages across Windows, Linux, and macOS. It features a customizable interface with a syntax-highlighted terminal, a model browser for importing models from CivitAI and HuggingFace, and a shared model directory for all packages.
SuperImage is an Android app that uses AI to enhance low-resolution images by upscaling them to higher resolutions. Built with the MNN framework and Real-ESRGAN, it processes images in tiles on the device's GPU, merging them into a high-resolution final image. It requires Android 7 or above and support for Vulkan or OpenCL.
This project lets you integrate AI models and machine learning workflows with your database to build custom AI applications without moving your data, including streaming inference, scalable model hosting, training, and vector search.
Qwen2 is a suite of advanced language models available in various sizes, including up to 72 billion parameters. It offers state-of-the-art performance in tasks like coding and math, and supports up to 128K tokens for extended context. The models are pretrained and instruction-tuned, and they are available for use through Hugging Face and ModelScope.
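A minimal sketch of loading an instruction-tuned Qwen2 checkpoint through transformers follows; the 7B variant shown is an assumption, so pick whichever size fits your hardware.

```python
# Minimal sketch: loading and prompting an instruction-tuned Qwen2 model from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-7B-Instruct"  # assumed variant; smaller sizes are also published
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a one-line Python factorial."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```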
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!