CloudPro #78: Kubernetes health checks: Best practices for configuring

Cloud Conversations: A Fireside Chat with Forrest Brazeal and Rubrik
Join us on Jan. 28th at 10 AM PST for a captivating fireside chat where storytelling meets cloud innovation. Forrest Brazeal, acclaimed cloud architect, author, and the creative mind behind cloud computing's most beloved cartoons, teams up with Rubrik's Chief Business Officer, Mike Tornincasa, to explore the evolving challenges of data protection in a multi-cloud world.
Save Your Spot

In this issue:

MasterClass
- Kubernetes health checks: Best practices for configuring
- How to manage secrets with Azure Key Vault in Kubernetes?
- Self-Hosting a Container Registry
- How I tuned my CI/CD pipeline to be done in 60 seconds
- What Karpenter v1.0.0 means for Kubernetes autoscaling

📚 Secret Knowledge
- Five Lessons from a Minor Production Incident
- Making a Postgres Compound Index 50x Faster
- SQLite Index Visualization
- Networking Costs Calculator
- Writing secure Go code

⚡ TechWave
- Datadog Acquires Quickwit
- Azure Storage – A look back and a look forward
- OpenTelemetry and Grafana Labs: what's new and what's next in 2025
- Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
- Introducing the next generation of Amazon SageMaker: The center for all your data, analytics, and AI

🛠️ HackHub
- Goliat Dashboard: Manage, visualize, and optimize Terraform deployments
- pv-migrate: CLI tool to easily migrate Kubernetes persistent volumes
- git-remote-s3: Library that enables using Amazon S3 as a git remote and LFS server
- ToolGit: Git Productivity Toolkit
- Databend: Modern alternative to Snowflake

Cheers,
Shreyans Singh
Editor-in-Chief

World's first 16 Hour LIVE Training to become an AI-Powered human in 2025 🤖
The world of AI is evolving at lightning speed, and the only way to stay relevant is to MASTER AI before it masters you. Join the World's first 2-Day Mastermind Challenge to learn the Tools, Tactics, and Strategies to Automate Your Work Like
Never Before! Best part? It is usually $395, but the first 100 of you get in for free.
Claim your FREE spot now!

MasterClass: Tutorials & Guides

Kubernetes health checks: Best practices for configuring
Kubernetes health checks are essential for maintaining the reliability, performance, and availability of applications. They use probes to monitor container health and take corrective action when necessary. The three main types of probes (liveness, readiness, and startup) serve distinct purposes. Liveness probes verify that the application is still running and restart the container on failure. Readiness probes determine whether a container is ready to handle traffic, temporarily removing it from service if it fails. Startup probes verify successful initialization for slow-starting applications. Probes can perform health checks over HTTP, TCP, or gRPC, or by running a command inside the container.

How to manage secrets with Azure Key Vault in Kubernetes?
To manage secrets with Azure Key Vault in Kubernetes, you can use tools like the External Secrets Operator (ESO) together with a service principal for authentication. Start by creating an Azure Key Vault, adding your sensitive data (e.g., API tokens) as secrets, and assigning the required permissions to a service principal. Install ESO on your Kubernetes cluster to synchronize secrets from Azure Key Vault into Kubernetes Secrets. Then configure a SecretStore resource in Kubernetes that connects to the Key Vault, using the service principal credentials for authentication. With this setup, applications running in Kubernetes can securely access secrets from Azure Key Vault without exposing sensitive data.

Self-Hosting a Container Registry
A self-hosted container registry allows you to store and manage container images on your own infrastructure, giving you full control and independence from third-party services.
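The core of such a setup can be sketched in a few shell commands (a minimal sketch: the user, password, and image names are placeholders, and the Nginx/TLS layer is omitted):

```shell
# Create an htpasswd file with bcrypt-hashed credentials (placeholder user/password)
mkdir -p auth
htpasswd -Bbn myuser 'mypassword' > auth/htpasswd

# Run the official registry image with basic auth enabled
docker run -d --name registry -p 5000:5000 \
  -v "$PWD/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2

# Log in, then push an image through the registry
docker login localhost:5000
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
```

A real deployment would sit behind a reverse proxy with TLS rather than exposing port 5000 directly.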
It involves setting up a server with Docker, configuring a container to run the registry, securing it with user authentication (e.g., via htpasswd), and enabling HTTPS using Nginx and SSL certificates. Once configured, you can push and pull images securely from your registry. While self-hosting ensures privacy and compliance with strict regulations, it requires maintaining and securing the system yourself, making it best suited to enterprises that need tight control over their containerized workflows.

How I tuned my CI/CD pipeline to be done in 60 seconds
Getting my CI/CD pipeline to run in under 60 seconds took strategic improvements in parallelization, caching, and job refinement. Initially, the pipeline was a simple setup that took over five minutes to execute, which hampered my productivity. I split the pipeline into multiple parallel jobs, grouped similar tasks to save cost and debugging time, and leveraged GitHub's caching for dependencies, linting tools, and test data to drastically reduce redundant downloads and processing. By using a Makefile for local testing, I accelerated iteration and kept the GitHub YAML simple and reliable. Further tuning, like combining related jobs and adding task-specific cache keys, helped balance speed and cost. These optimizations reduced the runtime for building, testing, linting, and deploying my Golang app to under a minute, making the pipeline more efficient and developer-friendly.

What Karpenter v1.0.0 means for Kubernetes autoscaling
Karpenter v1.0.0 marks a significant milestone for Kubernetes autoscaling, offering a mature and stable solution for dynamic node lifecycle management. An open-source tool designed to optimize workload placement and reduce costs, Karpenter automatically provisions and deprovisions nodes based on application demand and Kubernetes scheduling constraints.
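To give a flavor of the v1 API, a NodePool can be declared roughly like this (an illustrative sketch for AWS; the names, requirements, and limits are assumptions, not taken from the article):

```shell
kubectl apply -f - <<'EOF'
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:              # cloud-specific node configuration
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:              # constraints Karpenter must satisfy
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
  limits:
    cpu: "100"                   # cap total CPU provisioned by this pool
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized  # workload consolidation
EOF
```

Karpenter then launches nodes that fit pending pods within these constraints and consolidates them when they sit empty or underutilized.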
With its vendor-neutral design and integrations for cloud providers such as AWS, Azure, and GCP, Karpenter enhances scalability, cost-efficiency, and ease of management across diverse cloud environments. The 1.0 release guarantees API stability, supports features like workload consolidation and rolling updates for node images, and integrates seamlessly with other CNCF tools, empowering organizations to build intelligent, scalable cloud-native infrastructure.

📚 Secret Knowledge: Learning Resources

Five Lessons from a Minor Production Incident
A minor production incident on the AWS News platform surfaced five key lessons about software operations. First, investing in observability early paid off: comprehensive dashboards allowed the issue to be identified and resolved within an hour. Second, a robust software architecture and testing regime enabled safe, confident adjustments to the system during the incident. Third, the YAGNI principle (You Aren't Gonna Need It) has trade-offs; simpler designs work initially, but anticipating growth with safeguards like alarms could prevent issues. Fourth, bugs often travel in pairs: one problem frequently uncovers or triggers another, underscoring the need for thorough debugging. Lastly, data lineage simplifies troubleshooting, as stored intermediate data made it easy to pinpoint and fix root causes. Together, these lessons underscore the importance of building resilient systems even for small-scale projects.

Making a Postgres Compound Index 50x Faster
Reordering a compound index reduced query latency by 50x, showcasing the importance of index column order in PostgreSQL. Initially, a query filtering by status and event_type and sorting by occurred_at was slow because the index was ordered by occurred_at first. This structure forced PostgreSQL to scan millions of rows inefficiently.
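The slow arrangement can be reproduced along these lines (the table and column layout are assumed for illustration, not taken from the article):

```shell
psql "$DATABASE_URL" <<'SQL'
-- Index led by the sort column: the filters on status/event_type cannot
-- narrow the scan, so Postgres walks the index in occurred_at order
CREATE INDEX idx_events_by_time ON events (occurred_at, status, event_type);

-- The query: filter on status/event_type, sort by occurred_at
EXPLAIN ANALYZE
SELECT *
FROM events
WHERE status = 'pending'
  AND event_type = 'payment'
ORDER BY occurred_at
LIMIT 100;
SQL
```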
By reordering the index to put the filter columns (status, event_type) before the sort column (occurred_at), the search space narrowed significantly, letting PostgreSQL process only the relevant subset of rows. This simple adjustment improved endpoint latency from ~500ms to under 10ms, highlighting how understanding index design can drastically improve database performance.

SQLite Index Visualization
SQLite uses a B-Tree structure to organize indexes, ensuring efficient data storage and quick searches. A B-Tree consists of nodes, with each node storing cells that contain the indexed data, a row ID, and links to child nodes. The data is saved on fixed-size pages, and every index is structured hierarchically for balance and fast lookups. Using tools like sqlite3_analyzer, you can inspect indexes and visualize their layout, including pages, cells, and relationships. For deeper understanding, visualizations can be created from index data dumps, showing how SQLite handles different kinds of indexes (e.g., ASC/DESC, multi-column, and unique indexes) and optimizations triggered by commands like VACUUM or REINDEX. This approach makes it possible to compare index designs, analyze efficiency, and explore SQLite's inner workings.

Networking Costs Calculator
The Networking Costs Calculator is a self-hosted tool for estimating AWS networking costs. It includes a serverless backend that fetches up-to-date prices for networking services via the AWS Price List Query APIs and stores them in a DynamoDB table, plus a ReactJS frontend hosted on S3 and CloudFront for user interaction. Users can select an AWS region, specify services, and input data transfer details to view estimated monthly costs. Deployment requires a Linux OS, NodeJS, the AWS CLI, and the AWS CDK, with setup guided by a provided script.
The tool helps users calculate costs for features like Data Transfer, NAT Gateways, and Transit Gateway Attachments.

Writing secure Go code
Writing secure Go code means following practices that keep your code robust, secure, and performant. Key steps include staying informed about security updates by subscribing to the Go mailing list, keeping Go versions up to date for security patches, and regularly checking code with tools like go vet, staticcheck, and golangci-lint. It's also important to test for race conditions using Go's built-in race detector and to scan for known vulnerabilities with tools like govulncheck and gosec. Regular fuzz testing and keeping dependencies updated help prevent security issues and improve the overall quality of your code.

⚡ TechWave: Cloud News & Analysis

Datadog Acquires Quickwit
Datadog has acquired Quickwit, an open-source, cloud-native search engine designed for fast, scalable, and cost-effective log management. The acquisition will help Datadog address the needs of organizations in regulated industries, such as finance and healthcare, that must meet strict data residency, privacy, and regulatory requirements. By integrating Quickwit, Datadog aims to provide seamless observability and real-time insights without compromising data ownership or requiring multiple logging tools. Quickwit will continue to support its open-source community with a major update under the Apache License 2.0.

Azure Storage – A look back and a look forward
Azure Storage played a critical role in supporting AI advancements and cloud adoption in 2024, with innovations like Azure Blob Storage enabling large-scale AI model training and Azure Elastic SAN providing cloud-native SAN capabilities. Key highlights include rapid growth in Premium SSD v2 adoption, enhanced Kubernetes support through Azure Container Storage, and improved security measures like Microsoft Defender for Storage.
Looking ahead to 2025, Azure Storage aims to empower businesses with smarter data solutions, including seamless integration of unstructured data with AI services, advanced disaster recovery options, and optimized storage for mission-critical workloads, all while collaborating with key partners to drive innovation.

OpenTelemetry and Grafana Labs: what's new and what's next in 2025
OpenTelemetry, a rapidly growing open-source observability project, achieved major milestones in 2024, including support for profiling, stability for the Spring Boot starter, and updates to the Semantic Conventions for databases, AI, and more. Grafana Labs actively contributed to OpenTelemetry's advancement, integrating it with Prometheus and introducing tools like Grafana Alloy and Beyla for enhanced compatibility and eBPF-based auto-instrumentation. Looking ahead to 2025, the OpenTelemetry Collector is expected to reach stability with its v1 release, signaling long-term support, while innovations like expanded eBPF capabilities and enhanced protocol support aim to simplify trace-to-profile correlation and drive broader adoption across the observability ecosystem.

Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
Amazon Nova is Amazon's latest suite of advanced foundation models, available on Amazon Bedrock and designed for both text and multimodal (text, image, and video) tasks. With models tailored for understanding (text analysis, document processing, and multimodal reasoning) and for creative content generation (producing images and videos), Nova combines top-tier intelligence with cost efficiency.
Models like Nova Micro, Lite, and Pro cater to diverse business needs, from fast, low-cost tasks to complex, high-accuracy workflows, and all support extensive customization for specific industries.

Introducing the next generation of Amazon SageMaker: The center for all your data, analytics, and AI
Amazon SageMaker has launched its next-generation platform, integrating tools for data exploration, analytics, machine learning (ML), and generative AI into a unified environment. The revamped platform features the SageMaker Unified Studio (preview), which consolidates data and AI workflows, enabling users to process data, develop ML models, and create generative AI applications seamlessly. It introduces key capabilities like the SageMaker Lakehouse for unified data access, a visual ETL tool for data transformation, and the Amazon Bedrock IDE for building advanced generative AI solutions.

🛠️ HackHub: Best Tools for Cloud

Goliat Dashboard
Goliat Dashboard is an open-source project built with Astro that provides an interactive interface for managing Terraform Cloud resources. It integrates with the Terraform Cloud API to display real-time metrics and organize projects and workspaces for better resource visibility. The dashboard also supports the DigitalOcean API and plans to add Azure, AWS, and OpenAI integrations for richer insights. With dynamic routes and automatic updates, no additional configuration is needed once the APIs are connected.

pv-migrate
pv-migrate is a command-line tool and kubectl plugin that simplifies migrating Kubernetes PersistentVolumeClaim (PVC) data. It addresses the challenges of renaming, resizing, or moving PVCs between namespaces, clusters, or cloud providers by securely transferring data with rsync over SSH. With support for in-cluster and cross-cluster migrations, customizable manifests, and multiple migration strategies, pv-migrate enables efficient and flexible volume data handling.
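A typical invocation looks something like this (the PVC and namespace names are placeholders, and flag names may vary between releases, so check `pv-migrate --help` for the current syntax):

```shell
# Copy data from a PVC in one namespace to a (possibly resized) PVC in another;
# pv-migrate spins up helper pods and streams the data with rsync over SSH
pv-migrate migrate old-data new-data \
  --source-namespace staging \
  --dest-namespace production
```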
It supports various architectures, including arm64 and amd64, and offers completions for popular shells like bash and zsh.

git-remote-s3
git-remote-s3 is a Python-based tool that enables using Amazon S3 as a Git remote and Git LFS (Large File Storage) server. It provides a seamless way to manage Git repositories and LFS files directly in S3 buckets. Users can push, pull, and manage branches in repositories stored on S3 while ensuring encryption for security. The tool also integrates with AWS services like CodePipeline by allowing zipped repository archives to serve as pipeline source actions. It supports concurrent users, IAM-based access control, and debug logging, making it versatile for managing versioned code or assets on AWS.

ToolGit
ToolGit is a productivity toolkit that extends Git with custom commands and aliases to simplify and automate common tasks. It includes utilities for cleaning up branches, force-pulling remote changes, restoring file modes, managing branch history, and more. Easy to install, ToolGit integrates into your workflow by adding its scripts to your PATH environment variable, which makes them available as Git sub-commands. Each command comes with detailed help text, making the toolkit a practical enhancement for developers seeking efficiency in version control.

Databend
Databend is an open-source cloud data warehouse built in Rust and designed as a cost-effective alternative to Snowflake. It focuses on high-speed query execution and data ingestion, supporting complex analysis of large datasets. Databend offers features such as full ACID compliance, schema flexibility, advanced indexing, and real-time data updates.
It can be deployed in both cloud and on-prem environments, providing enterprise-level performance at reduced cost.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email.

Thanks for reading and have a great day!