Mastering Prometheus

Thanos Query Frontend

Thanos Query Frontend is a service that can be deployed in front of Thanos Query to improve query performance by splitting large-range queries into smaller ones and caching query results. It is based on a similar component implemented by Cortex (https://github.com/cortexproject/cortex), the predecessor to Mimir. You can think of it as a pre-processor of queries, where the majority of the actual work is still done by the downstream queriers.
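
As a rough sketch (hostnames, ports, and cache sizing here are placeholders, and flag names follow the upstream Thanos documentation), a minimal Query Frontend deployment only needs to know where the downstream querier lives and how to cache responses:

# Minimal sketch: Query Frontend forwarding to a single downstream Thanos Query endpoint.
# Hostnames and ports are placeholders for your environment.
thanos query-frontend \
  --http-address="0.0.0.0:9090" \
  --query-frontend.downstream-url="http://thanos-query.example.internal:10902" \
  --query-range.split-interval=24h \
  --query-range.response-cache-config-file=cache.yaml

# cache.yaml - an in-memory response cache (other backends such as Memcached are also supported):
# type: IN-MEMORY
# config:
#   max_size: "256MB"
#   max_size_items: 0
#   validity: 0s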

Query sharding and splitting

Presuming you run multiple top-level Thanos Query instances, you can put Query Frontend in front of them to share the load among them more efficiently than simply load balancing across them with something such as Nginx. This can be accomplished through query splitting based on time ranges and/or vertical sharding, as sketched below.
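
For illustration, one common arrangement (the hostnames are hypothetical) is to point the Frontend's downstream URL at an endpoint that round-robins across the Query replicas, so that each split or sharded subquery can land on a different querier:

# Sketch: Query Frontend in front of two top-level Thanos Query replicas.
# "thanos-query-lb.example.internal" is assumed to be a simple load-balanced endpoint
# (an Nginx upstream, a Kubernetes Service, and so on) covering both replicas.
thanos query-frontend \
  --http-address="0.0.0.0:9090" \
  --query-frontend.downstream-url="http://thanos-query-lb.example.internal:10902" \
  --query-range.split-interval=24h \
  --query-frontend.vertical-shards=4   # vertical sharding, available in recent Thanos releases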

Query splitting

By default, the --query-range.split-interval flag is set to split range queries on a 24h interval. This means that if you query sum(my_metric) over...
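
To make the effect of the default concrete, here is a rough illustration (the timestamps are made up) of how a seven-day range query would be broken up at a 24h split interval:

# Original query:   sum(my_metric)   start=2024-04-01T00:00Z   end=2024-04-08T00:00Z
# Split subqueries: 2024-04-01T00:00Z -> 2024-04-02T00:00Z
#                   2024-04-02T00:00Z -> 2024-04-03T00:00Z
#                   ...
#                   2024-04-07T00:00Z -> 2024-04-08T00:00Z

Each subquery is dispatched to the downstream queriers (or served from the response cache if that day's result was computed previously), and the Frontend stitches the partial results back together before returning the response.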
