Batching table reads and writes
When performing DML operations such as MERGE and UPDATE on several large tables stored in databases with high concurrency, the transaction log can become a bottleneck and lead to real outages in the data warehouse. SQL statements are atomic, so a long-running modification keeps its data locked for as long as the statement executes, which can be a problem for real-time databases. To reduce the burden of these operations, we can break some of them into smaller, easier-to-handle batches that lock resources only for brief periods.
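As a rough illustration of this idea, a large MERGE into a Delta table can be split by partition value so that each transaction stays small and short-lived. This is only a minimal sketch: the table path, the `event_date` partition column, the `id` join key, and the staged-updates location are all hypothetical, and it assumes a Spark session that is already configured for Delta Lake.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Assumes Delta Lake is already configured on this Spark session
spark = SparkSession.builder.appName("batched-merge").getOrCreate()

# Hypothetical staged changes and target Delta table
updates = spark.read.format("parquet").load("/data/staged_updates")
target = DeltaTable.forPath(spark, "/data/events")

# Process one partition value at a time so each MERGE commits a small,
# short-lived transaction instead of one huge one.
dates = [row.event_date
         for row in updates.select("event_date").distinct().collect()]

for d in dates:
    batch = updates.where(updates.event_date == d)
    (target.alias("t")
        .merge(batch.alias("s"), "t.id = s.id AND t.event_date = s.event_date")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
```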
Let's see how we can implement batch reads and writes in Delta Lake, thanks to the options provided by the Apache Spark API.
Creating a table
We can create Delta Lake tables either by using the Apache Spark DataFrameWriter or by using DDL commands such as CREATE TABLE. Let's take a look at both approaches:
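The following is a minimal sketch of the two options; the `people` table name, its columns, and the /tmp/delta/people path are hypothetical placeholders, and a Delta-enabled Spark session is assumed.

```python
from pyspark.sql import SparkSession

# Assumes Delta Lake is already configured on this Spark session
spark = SparkSession.builder.appName("create-delta-table").getOrCreate()

# Hypothetical sample data
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Option 1: DataFrameWriter API, writing Delta files to a path
df.write.format("delta").mode("overwrite").save("/tmp/delta/people")

# Option 2: SQL DDL
spark.sql("""
    CREATE TABLE IF NOT EXISTS people (
        id INT,
        name STRING
    ) USING DELTA
""")
```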
- Delta Lake tables are created in the metastore...