Building SQL queries via database metadata
When working with source database systems, developers have to stay alert to schema changes as they happen. Using the metadata stored inside the database itself, you can generate SQL statements that are always up to date, so that source data can still be captured even if later stages of an ETL process break because of those changes.
In this recipe you will create a dynamic extraction transformation that pulls data from the books database created earlier in the chapter.
Getting ready
For this recipe you will need the books database, which can be created and populated by running the books.sql script found in the code for this chapter.
Because the recipe reads metadata from the books database, explore the database's metadata repository before proceeding to see what is inside. In particular, run these statements and pay attention to the results:
SELECT TABLE_NAME, TABLE_TYPE, ENGINE, VERSION, ROW_FORMAT,
       TABLE_ROWS, AVG_ROW_LENGTH, DATA_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'books';

+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| TABLE_NAME   | TABLE_TYPE | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| authors      | BASE TABLE | InnoDB |      10 | Compact    |         13 |           1260 |       16384 |
| books        | BASE TABLE | InnoDB |      10 | Compact    |         34 |            481 |       16384 |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
2 rows in set (0.00 sec)

SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION, COLUMN_DEFAULT,
       IS_NULLABLE, DATA_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books';

+--------------+-------------+------------------+----------------+-------------+-----------+
| TABLE_NAME   | COLUMN_NAME | ORDINAL_POSITION | COLUMN_DEFAULT | IS_NULLABLE | DATA_TYPE |
+--------------+-------------+------------------+----------------+-------------+-----------+
| authors      | lastname    |                1 | NULL           | NO          | tinytext  |
| authors      | firstname   |                2 | NULL           | NO          | tinytext  |
| authors      | nationality |                3 | unknown        | YES         | varchar   |
| authors      | birthyear   |                4 | 1900           | YES         | int       |
| authors      | id_author   |                5 | NULL           | NO          | char      |
| books        | id_title    |                1 | NULL           | NO          | char      |
| books        | title       |                2 | NULL           | NO          | tinytext  |
| books        | id_author   |                3 | NULL           | NO          | tinytext  |
| books        | price       |                4 | 0              | YES         | double    |
| books        | genre       |                5 | unknown        | YES         | varchar   |
+--------------+-------------+------------------+----------------+-------------+-----------+
10 rows in set (0.00 sec)
Note
The preceding statements are written in MySQL syntax. Review and adjust them as needed if you are using a different DBMS.
Compare how the data in the information_schema database matches the CREATE DDL statements found in the books.sql file. Notice how the parameters of the statements used to create the tables translate into rows of the TABLES and COLUMNS tables.
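To make that comparison concrete, here is a rough reconstruction of what the authors definition might look like, pieced together only from the metadata shown above. The column lengths and keys are assumptions, so refer to books.sql for the real DDL:

-- Illustrative reconstruction from the information_schema output above.
-- Column lengths and key definitions are assumptions; the real DDL is in books.sql.
CREATE TABLE authors (
  lastname    TINYTEXT    NOT NULL,
  firstname   TINYTEXT    NOT NULL,
  nationality VARCHAR(50) DEFAULT 'unknown',  -- length assumed
  birthyear   INT         DEFAULT 1900,
  id_author   CHAR(10)    NOT NULL            -- length assumed
) ENGINE=InnoDB;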
How to do it...
1. Create a new transformation and add a Table Input step that can connect to the information_schema database.
2. Create a query that selects the TABLE_NAME and COLUMN_NAME columns from the COLUMNS table, making sure to filter on the books TABLE_SCHEMA only (a sample query is shown after these steps).
3. Add a constant value using the Add constants step found under the Transform category. The value should be named grouper, with type Integer and a value of 1. Create a hop from the Table Input step to the Add constants step.
4. Add a Denormaliser step, also found under the Transform category. The Key field should be the grouper column created in the previous step, and the Group field should be TABLE_NAME. Fill in the Target fields grid so that the column names are concatenated into a single comma-separated field.
5. Preview the Denormaliser step. For each table in the books database, you should see one record with a comma-separated list of its column names.
6. Now finish this transformation by adding a Copy rows to result step and creating a hop from the Row denormaliser step to the Copy rows to result step.
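A sample query for step 2 might look like this; the ORDER BY is optional, but sorting by ORDINAL_POSITION keeps the columns in their table-defined order, which makes the generated SELECT statements easier to read:

-- Sample metadata query for step 2 (MySQL):
-- returns one row per column of each table in the books schema.
SELECT TABLE_NAME,
       COLUMN_NAME
FROM   information_schema.COLUMNS
WHERE  TABLE_SCHEMA = 'books'
ORDER  BY TABLE_NAME, ORDINAL_POSITION;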
Since we will be building a SQL query from these columns, the simplest approach is to pass them along as variables. Variables cannot be used in the same transformation in which they are set, and we will have multiple sets of variables, so we need a sub job and a parent job. Sub jobs are simply jobs within other jobs. Let's continue building the transformations needed, and then we will build the two jobs required to run the process.
7. Create a second transformation. Add a Table Input step that will use the variables we will be creating from the data in the first transformation. Be sure to select the Replace variables in script? checkbox. The query should look like the following (an example of what it resolves to is shown after these steps):
SELECT ${column_list_par} FROM ${table_name_par}
8. Add a Text file output step. For the Filename field, point to a location where the database table extracts can be dumped. The Filename can also use parameters: use table_name_par as the file's name. The Text file output step will store whatever data is in the stream without any fields being declared, so leave the Fields tab empty.
9. Create one last transformation that uses a Get rows from result step and loads the variables column_list_par and table_name_par with a Set Variables step, mapping the incoming fields from the result rows to those variable names.
10. Create a job. This will be the sub job that takes each record and executes a query. Bring over a START step, two Transformation steps, and a Success step. The first Transformation step should point to the transformation that sets the parameters used in the query; the second should point to the transformation that uses the parameters and extracts the query output into a text file.
11. Create another job. This will be the parent job for the whole process. Bring over a START step, a Transformation step, a Job step, and a Success step. The Transformation step should point to the transformation that extracts data from the information_schema database. The Job step should point to the sub job created previously. Make sure that the Execute for every input row? checkbox is checked under the Advanced tab of the Job step.
12. Now execute the parent job. You should end up with one text file per table in the books database.
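To see what the variable substitution produces, consider the row for the books table: the Set Variables step loads table_name_par and column_list_par from that row, and the Table Input query in the second transformation then resolves to something like the following (column list taken from the metadata shown in the Getting ready section):

-- What SELECT ${column_list_par} FROM ${table_name_par} resolves to for the
-- books table, given the metadata shown earlier; the authors row resolves
-- in the same way with its own column list.
SELECT id_title, title, id_author, price, genre FROM books;

Because the Text file output filename also uses table_name_par, each resolved query is written to its own file, one per table.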
How it works...
Most databases have a metadata repository that describes their tables and columns, and it can be used to build dynamic queries that extract data for further processing. In this instance, once the books database was created, the database engine stored its metadata in the information_schema database. We then queried that database and used a Row denormaliser step to merge the column names into a single field so that our dynamic query could be built and executed properly.
See also
Getting data from a database by providing parameters
Getting data from a database by running a query built at runtime
Performing repetitive database design tasks from PDI
The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations