Business Intelligence with Databricks SQL

Business Intelligence with Databricks SQL: Concepts, tools, and techniques for scaling business intelligence on the data lakehouse

eBook: ₹799.99 (regular price ₹2800.99)
Paperback: ₹3500.99
Subscription: Free trial available, renews at ₹800/month

What do you get with Print?

  • Instant access to your digital eBook copy while your print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free: read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning

Business Intelligence with Databricks SQL

Introduction to Databricks

Databricks is one of the most recognizable names in the big data industry. It provides the lakehouse platform for data analytics and artificial intelligence (AI). This book is about Databricks SQL, a product within the Databricks Lakehouse platform that powers data analytics and business intelligence.

Databricks SQL is a rapidly evolving product. It is not a traditional data warehouse, yet its users are traditional data warehouse and business intelligence users. It claims to provide all the functionality of a data warehouse on what is essentially a data lake. This concept can be a bit jarring, and it can create resistance to adoption: you might wonder whether your skills are transferable, or whether your work might be disrupted by a new learning curve.

Hence, I am writing this book.

The primary intent of this book is to help you learn the fundamental concepts of Databricks SQL in a fun, follow-along interactive manner. My aim is that by the time you complete this book, you will be confident in your adoption of Databricks SQL as the enabler of your business intelligence.

This book does not intend to be a definitive guide or a complete reference, nor does it intend to be a replacement for the official documentation. It is too early for either of those. This book is your initiation into business intelligence on the data lakehouse, the Databricks SQL way.

Let’s begin!

In this chapter, we’ll cover the following topics:

  • An overview of Databricks, the company
  • An overview of the Lakehouse architecture
  • An overview of the Databricks Lakehouse platform

Technical requirements

There are no technical requirements for this chapter. However, familiarity with the concepts of databases, data warehouses, and data lakes will help.

An overview of Databricks, the company

Databricks was founded in 2013 by seven researchers at the University of California, Berkeley.

This was the time when the world was learning how Meta, Amazon, Netflix, Google, and Apple (the MANGA companies) had built their success by scaling up their use of AI techniques in all aspects of their operations. Of course, they could do this because they invested heavily in the talent and infrastructure to build their data and AI systems. Databricks was founded with the mission to enable everyone else to do the same: use data and AI in service of their business, irrespective of their size, scale, or technological prowess.

The mission was to democratize AI. What started as a simple platform, leveraging the open source technologies that the co-founders of Databricks had created, has now evolved into the lakehouse platform, which unifies data, analytics, and AI in one place.

As an interesting side note, and my opinion: to this day, I meet people and organizations that equate Databricks with Apache Spark. This is not correct. The platform indeed debuted with a cloud service for running Apache Spark. However, it is important to understand that Apache Spark was the enabling technology for the big data processing platform; it was not the product. The product is a simple platform that enables the democratization of data and AI.

Databricks is a strong proponent of the open source community. Many popular open source projects trace their roots to Databricks, including MLflow, Koalas, and Delta Lake. The profile of these innovations demonstrates Databricks's commitment to its mission of democratizing data and AI. MLflow is an open source technology that enables machine learning (ML) operations, or MLOps. Delta Lake is the key innovation that brings reliability, governance, and simplification to data engineering and business intelligence operations on the data lake. It is the key to building the lakehouse on top of cloud storage systems such as Amazon Web Services' Simple Storage Service (S3), Microsoft Azure's Azure Data Lake Storage (ADLS), and Google Cloud Storage (GCS), as well as on-premises HDFS systems.

Within the Databricks platform, these open source technologies are firmed up for enterprise readiness. They are blended with platform innovations for various data personas such as data engineers, data scientists, and data analysts. This means that MLflow within the Databricks Lakehouse platform powers enterprise-grade MLOps. Delta Lake within the Databricks Lakehouse platform powers enterprise-grade data engineering and data governance. With the Databricks SQL product, the Databricks Lakehouse platform can power all the business intelligence needs for the enterprise as well!

Technologies and Trademarks

Throughout this book, we will refer to trademarked technologies and products. Some notable examples are Apache Spark™, Hive™, Delta Lake™, Power BI™, and Tableau™, among others.

All such trademarks are acknowledged whenever we mention them in the book. For the sake of brevity and readability, I will omit the ™ symbol in the rest of the book.

An overview of the Lakehouse architecture

If, at this point, you are a bit confused by so many terms, such as Databricks, lakehouse, Databricks SQL, and more, worry not. We are just at the beginning of our learning journey. We will unpack all of these throughout this book.

First, what is Databricks?

Databricks is a platform that enables enterprises to quickly build their data lakehouse infrastructure and enable all data personas – data engineers, data scientists, and business intelligence personnel – in their organization to extract and deliver insights from the data. The platform provides a curated experience for each data persona, enabling them to execute their daily workflows. The foundational technologies that enable these experiences are open source: Apache Spark, Delta Lake, MLflow, and more.

So, what is the Lakehouse architecture and why do we need it?

The Lakehouse architecture was formally presented at the Conference on Innovative Data Systems Research (CIDR) in January 2021. You can download it from https://databricks.com/research/lakehouse-a-new-generation-of-open-platforms-that-unify-data-warehousing-and-advanced-analytics. This is an easily digestible paper that I encourage you to read for the full details. That said, I will now summarize the salient points from this paper.

Attribution, Where it is Due

In my summary of the research paper, I recreate the images that were originally provided. The originals are the intellectual property of the paper's authors.

According to the paper, most of the present-day data analytics infrastructures look like a two-tier system, as shown in the following diagram:

Figure 1.1 – Two-tier data analytics infrastructures

In this two-tier system, first, data from source systems is brought onto a data lake. Examples of source systems could be your web or mobile application, transactional databases, ERP systems, social media data, and more. The data lake is typically an on-premises HDFS system or cloud object storage. Data lakes allow you to store data in big data-optimized file formats such as Apache Parquet, ORC, and Avro. The use of these open file formats enables flexibility in writing to the data lake (due to schema-on-read semantics). This flexibility enables faster ingestion of data, which, in turn, enables faster access to data for end users. It also enables more advanced analytics use cases in ML and AI.

Of course, this architecture still needs to support the traditional BI workloads and decision support systems. Hence, a second process, typically in the form of Extract, Transform, and Load (ETL), is built to copy data from the data lake to a dedicated data warehouse.
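
As a hedged illustration of this second tier, the following sketch copies one day of data from Parquet files on the lake into a dedicated warehouse table. The table names, columns, and storage path are all hypothetical:

    -- Nightly ETL job (illustrative only): copy curated lake data
    -- into a dedicated warehouse table for BI consumption.
    CREATE TABLE warehouse.daily_sales AS
    SELECT order_id, customer_id, amount, order_date
    FROM parquet.`s3://example-lake/raw/sales/`  -- schema-on-read from the lake
    WHERE order_date = current_date() - INTERVAL 1 DAY;

Every table that serves BI this way exists twice, once on the lake and once in the warehouse, which is exactly the duplication examined in the following list.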

Close inspection of the two-tier architecture reveals several systemic problems:

  • Duplication of data: This architecture requires the same data to be present in two different systems, which increases storage costs. Constant reconciliation between the two systems is of utmost importance, which increases ETL operations and their associated costs.
  • Security and governance: Data lakes and data warehouses take very different approaches to data security. This results in different security mechanisms for the same data, which must always be kept in synchronization to avoid data security violations.
  • Latency in data availability: In the two-tier architecture, data is only moved to the warehouse by a secondary process, which introduces latency. This means analysts do not get access to fresh data, which also makes the system unsuitable for tactical decision support, such as operations.
  • Total cost of ownership: Enterprises end up paying double for the same data. There are two storage systems, two ETL processes, two engineering debts, and more.

As you can see, this is unintuitive and unsustainable.

Hence, the paper presents the Lakehouse architecture as the way forward.

Simply put, the data lakehouse architecture is a data management system that implements all the features of data warehouses on data lakes. This makes the data lakehouse a single unified platform for business intelligence and advanced analytics.

This means that the lakehouse platform will implement data management features such as security controls, ACID transaction guarantees, data versioning, and auditing. It will implement query performance features such as indexing, caching, and query optimization. These features are table stakes for data warehouses. The Lakehouse architecture brings these features to the flexible, open-format data storage of data lakes. In short, a lakehouse provides both data warehousing and advanced analytics capabilities on a single platform, with cloud data lake economics.
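
As a hedged sketch of what this looks like in practice, the following Delta Lake SQL statements exercise ACID updates, auditing, and data versioning directly on lake storage; the table name and path are hypothetical:

    -- Create a Delta table over open-format lake storage (illustrative names)
    CREATE TABLE sales USING DELTA AS
    SELECT * FROM parquet.`s3://example-lake/raw/sales/`;

    -- An ACID-guaranteed update, directly on the data lake
    UPDATE sales SET amount = 0 WHERE order_id = 42;

    -- Auditing and data versioning (time travel)
    DESCRIBE HISTORY sales;
    SELECT * FROM sales VERSION AS OF 0;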

What is the Formal Definition of the Lakehouse?

Section 3 in the CIDR paper officially defines the Lakehouse. Check it out.

The following is a visual depiction of the Lakehouse:

Figure 1.2 – Lakehouse architecture

The idea of the Lakehouse is deceptively simple – as all good things in life are! The Lakehouse architecture immediately solves the problems we highlighted about present-day two-tier architectures:

  • A single storage layer means no duplication of data and no extra effort to reconcile data. Reduced ETL requirements and ACID guarantees translate into stability and reliability for the system.
  • A single storage layer means a single model of security and governance for all data assets. This reduces the risk of security breaches.
  • A single storage layer means the availability of the freshest data possible for the consumers of the data.
  • Cheap cloud storage with elastic, on-demand cloud compute reduces the total cost of ownership.
  • Open source technologies in the storage layer reduce the chances of vendor lock-in and make it easy to integrate with other tools.

Of course, any implementation of the Lakehouse will have to ensure the following:

  • Reliable data management: The Lakehouse proposes to eliminate (or reduce) data warehouses. Hence, the Lakehouse implementation must efficiently implement data management and governance – features that are table stakes in data warehouses.
  • SQL performance: The Lakehouse will have to provide state-of-the-art SQL performance on top of the open-access filesystems and file formats typical of data lakes (see the sketch after this list).
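
As one hedged example of that performance point, Databricks SQL exposes warehouse-style physical tuning directly on lake tables; the statements below reuse the hypothetical sales table from the earlier sketch:

    -- Compact small files and co-locate related rows for faster scans
    OPTIMIZE sales ZORDER BY (customer_id);

    -- Preload frequently queried data into the warehouse's local cache
    CACHE SELECT * FROM sales WHERE order_date >= current_date() - INTERVAL 30 DAYS;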

This is where the Databricks Lakehouse platform, and within it, the Databricks SQL product, comes in.

An overview of the Databricks Lakehouse platform

The Databricks Lakehouse platform enables enterprises to build their Lakehouse by providing simplified data engineering and data management techniques. The Databricks Lakehouse platform also provides one of the best ML experiences for data scientists and ML engineers.

Finally, Databricks SQL brings in the last piece of the puzzle – a home for the business intelligence and data analyst personas with a first-class workbench that allows query editing, building visualizations, and publishing dashboards. It also allows plug-and-play with downstream business intelligence tools such as Power BI, Tableau, Looker, and more. All of this is backed by state-of-the-art SQL query performance.

The following diagram represents the Databricks Lakehouse platform:

Figure 1.3 – The Databricks Lakehouse platform

The lakehouse platform by Databricks is a simple, open, and collaborative platform that combines the reliability, performance, and governance capabilities of data warehouses with the openness, flexibility, and economies of cloud data lakes.

Summary

In this chapter, we learned about Databricks as a company and the Databricks Lakehouse platform as the product of this company, which enables the democratization of data and AI for all organizations. We are now ready to begin exploring Databricks SQL.

In the next chapter, Chapter 2, The Databricks Product Suite – A Visual Tour, we will start with a tour of the Databricks Lakehouse platform.


Key benefits

  • Learn about business intelligence on the lakehouse with features and functions of Databricks SQL
  • Make the most of Databricks SQL by getting to grips with the enablers of its data warehousing capabilities
  • A unique approach to teaching concepts and techniques with follow-along scenarios on real datasets

Description

In this new era of data platform system design, data lakes and data warehouses are giving way to the lakehouse – a new type of data platform system that aims to unify all data analytics into a single platform. Databricks, with its Databricks SQL product suite, is the hottest lakehouse platform out there, harnessing the power of Apache Spark™, Delta Lake, and other innovations to enable data warehousing capabilities on the lakehouse with data lake economics. This book is a comprehensive hands-on guide that helps you explore all the advanced features, use cases, and technology components of Databricks SQL. You’ll start with the lakehouse architecture fundamentals and understand how Databricks SQL fits into it. The book then shows you how to use the platform, from exploring data, executing queries, building reports, and using dashboards through to learning the administrative aspects of the lakehouse – data security, governance, and management of the computational power of the lakehouse. You’ll also delve into the core technology enablers of Databricks SQL – Delta Lake and Photon. Finally, you’ll get hands-on with advanced SQL commands for ingesting data and maintaining the lakehouse. By the end of this book, you’ll have mastered Databricks SQL and be able to deploy and deliver fast, scalable business intelligence on the lakehouse.

Who is this book for?

This book is for business intelligence practitioners, data warehouse administrators, and data engineers who are new to Databricks SQL and want to learn how to deliver high-quality insights unhindered by the scale of data or infrastructure. This book is also for anyone looking to study the advanced technologies that power Databricks SQL. Basic knowledge of data warehouses, SQL-based analytics, and ETL processes is recommended to effectively learn the concepts introduced in this book and appreciate the innovation behind the platform.

What you will learn

  • Understand how Databricks SQL fits into the Databricks Lakehouse Platform
  • Perform everyday analytics with Databricks SQL Workbench and business intelligence tools
  • Organize and catalog your data assets
  • Program the data security model to protect and govern your data
  • Tune SQL warehouses (computing clusters) for optimal query experience
  • Tune the Delta Lake storage format for maximum query performance
  • Deliver extreme performance with the Photon query execution engine
  • Implement advanced data ingestion patterns with Databricks SQL

Estimated delivery fee (to India): ₹630.95
Premium delivery: 5-8 business days (includes tracking information)

Product Details

Publication date: Sep 16, 2022
Length: 348 pages
Edition: 1st
Language: English
ISBN-13: 9781803235332
Vendor: Databricks

Packt Subscriptions

See our plans and pricing:

  • ₹800 billed monthly: Unlimited access to Packt's library of 7,000+ practical books and videos, constantly refreshed with 50+ new titles a month; exclusive early access to books as they're written; advanced search and reference features for solving problems while you work; offline reading on the mobile app; and simple pricing with no contract.
  • ₹4500 billed annually: Everything in the monthly plan, plus a DRM-free eBook or Video to keep every month, as many additional DRM-free eBooks or Videos as you like for just ₹400 each, and exclusive print discounts.
  • ₹5000 billed in 18 months: The same benefits as the annual plan.

Frequently bought together

  • Azure Data Engineering Cookbook – ₹3872.99
  • SQL Server Query Tuning and Optimization – ₹3872.99
  • Business Intelligence with Databricks SQL – ₹3500.99

Total: ₹11,246.97

Table of Contents

Part 1: Databricks SQL on the Lakehouse
Chapter 1: Introduction to Databricks
Chapter 2: The Databricks Product Suite – A Visual Tour
Chapter 3: The Data Catalog
Chapter 4: The Security Model
Chapter 5: The Workbench
Chapter 6: The SQL Warehouses
Chapter 7: Using Business Intelligence Tools with Databricks SQL
Part 2: Internals of Databricks SQL
Chapter 8: The Delta Lake
Chapter 9: The Photon Engine
Chapter 10: Warehouse on the Lakehouse
Part 3: Databricks SQL Commands
Chapter 11: SQL Commands – Part 1
Chapter 12: SQL Commands – Part 2
Part 4: TPC-DS, Experiments, and Frequently Asked Questions
Chapter 13: Playing with the TPC-DS Dataset
Chapter 14: Ask Me Anything
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.6 out of 5 (13 ratings)
5 star: 61.5%
4 star: 38.5%
3 star: 0%
2 star: 0%
1 star: 0%

AL – Oct 25, 2022 – 5 stars
Reading this book is like learning from a private tutor. The book covers the Databricks Lakehouse architecture, data security, and governance. The author uses simple, easy-to-understand diagrams and examples to illustrate how various components such as Delta Lake, Unity Catalog, SQL Warehouse, and Photon fit within the Lakehouse architecture. Whether you are a new or an experienced Databricks user, you will find something useful and refreshing. The lab exercises on Databricks SQL are fun to follow, as they use real-world datasets, and the benchmarking with the TPC-DS dataset is a bonus!
(Amazon Verified review)

amazonVerifiedBuyer – Dec 10, 2022 – 5 stars
It is well written, with good explanations and concrete examples, and it covers a large variety of topics. The author knows what he is talking about. The writing and examples are all very clear, and there are plenty of figures and code snippets to help you follow along, as well as some diagrams to explain certain concepts. What is particularly good about this book is that it is written from experience in the field. It's not an academic book describing how things should work; it actually comes from experience with real projects. I had the opportunity to review the book Business Intelligence with Databricks SQL by Vihag Gupta, a master practitioner of the Databricks platform. This book is great for both newcomers and experienced users of the Databricks platform. It starts with a nice overview that can get anyone up to speed in no time. However, as you progress, Vihag will start to dig deeper into traditional SQL and Databricks SQL (including Photon). If you have been curious about Photon, this book will allow you to better understand the internals and know how to write efficient, optimal queries.
(Amazon Verified review)

Kieran O'Driscoll – Nov 13, 2022 – 5 stars
This book walks through each component of the Databricks Lakehouse platform and covers the basics along with more technical aspects. The section on the Photon engine was extremely helpful in understanding how Databricks SQL is so powerful. A great read for anyone getting started on the Lakehouse journey!
(Amazon Verified review)

vfortier – Oct 27, 2022 – 5 stars
Great book about Databricks SQL: how to set it up, how to use it, and how to make the best of it. Highly recommended.
(Amazon Verified review)

Hales – Sep 20, 2022 – 5 stars
This book provides both high-level context and detailed examples and explanations, which are useful for novice to experienced BI practitioners and data engineers supporting BI workloads. As the book does not cover set-up, it will be most beneficial for those with access to an existing environment and some experience with SQL. The book is written in simple, easy-to-understand explanations and loaded with screenshots, code snippets, and diagrams to support the concepts covered.

The book's first section is helpful if you are new to cloud architecture and jumps into the product with a tour. The data and security chapters cover storing and accessing data and the latest Unity Catalog components. While the book doesn't go deep on user provisioning, it has enough detail to understand privileges to secure your tables, views, and dashboards. The SQL warehouse chapter dives into the architecture in the cloud and includes case studies around how to scale your data and manage performance. The BI section has code and screenshots using built-in datasets that you can follow along with to build a dashboard or connect to an external tool.

The second section goes deep into the nuts and bolts of Databricks SQL, optimizations and performance features, and the underlying engines. It ends with an example implementation of a cloud data warehouse using the lakehouse architecture and is geared towards people familiar with data modeling and warehouse design.

The third section has SQL commands with many code examples, including CDC commands, time travel, and dealing with nested data. The last section has exercises around a TPC-DS dataset, using IDEs, more case studies, and FAQs. Note that you will need the jar file from GitHub. The last two sections are dense with information but are still easy to follow.

One thing worth mentioning is that the features outlined in this book are not available in the free Databricks Community edition, so you will need a standard or enterprise account to follow along with the examples and create the dashboards, alerts, etc.
(Amazon Verified review)

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days. Shipments are not trackable.

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days. Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and Islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P.O. Boxes and private residences. Trackable service with delivery to addresses in Australia only. Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas, and up to 15 business days for remote areas of WA, NT, and QLD.

Premium: Delivery to addresses in Australia only. Trackable delivery to most P.O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, customs duties or localized taxes may be applied by the recipient country. These must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount or dimensions like weight, as well as other criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $50, then to receive your package you will have to pay an additional import tax of 19% ($9.50) to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over €22, then to receive your package you will have to pay an additional import tax of 18% (€3.96) to the courier service.

How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order, and we will replace or refund the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e., during download), then contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple-item order, then we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal