Building SQL queries via database metadata


When working with source database systems, developers must stay alert to schema changes as they happen. Using the metadata that the database system itself maintains, you can generate SQL statements that are always up to date. This allows source data to be captured even if the rest of an ETL transformation fails because of those changes.

In this recipe, you will create a dynamic data extraction transformation that extracts data from the books database created earlier in this chapter.

Getting ready

For this recipe you will need the books database, which can be created and populated by running the books.sql script found in the code for this chapter.

As the recipe will read metadata from the books database, before proceeding, explore the database's metadata repository to see what is inside. In particular, run these statements and pay attention to the results:

SELECT
    TABLE_NAME
    , TABLE_TYPE
    , ENGINE
    , VERSION
    , ROW_FORMAT
    , TABLE_ROWS
    , AVG_ROW_LENGTH
    , DATA_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'books';

+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| TABLE_NAME   | TABLE_TYPE | ENGINE | VERSION | ROW_FORMAT | TABLE_ROWS | AVG_ROW_LENGTH | DATA_LENGTH |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
| authors      | BASE TABLE | InnoDB |      10 | Compact    |         13 |           1260 |       16384 |
| books        | BASE TABLE | InnoDB |      10 | Compact    |         34 |            481 |       16384 |
+--------------+------------+--------+---------+------------+------------+----------------+-------------+
2 rows in set (0.00 sec)

SELECT
    TABLE_NAME
    , COLUMN_NAME
    , ORDINAL_POSITION
    , COLUMN_DEFAULT
    , IS_NULLABLE
    , DATA_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books';

+--------------+-------------+------------------+----------------+-------------+-----------+
| TABLE_NAME   | COLUMN_NAME | ORDINAL_POSITION | COLUMN_DEFAULT | IS_NULLABLE | DATA_TYPE |
+--------------+-------------+------------------+----------------+-------------+-----------+
| authors      | lastname    |                1 | NULL           | NO          | tinytext  |
| authors      | firstname   |                2 | NULL           | NO          | tinytext  |
| authors      | nationality |                3 | unknown        | YES         | varchar   |
| authors      | birthyear   |                4 | 1900           | YES         | int       |
| authors      | id_author   |                5 | NULL           | NO          | char      |
| books        | id_title    |                1 | NULL           | NO          | char      |
| books        | title       |                2 | NULL           | NO          | tinytext  |
| books        | id_author   |                3 | NULL           | NO          | tinytext  |
| books        | price       |                4 | 0              | YES         | double    |
| books        | genre       |                5 | unknown        | YES         | varchar   |
+--------------+-------------+------------------+----------------+-------------+-----------+
10 rows in set (0.00 sec)

Note

The preceding statements use MySQL syntax. If you are using a different DBMS, review and adjust them as needed.

Compare the data in the information_schema database with the CREATE TABLE DDL statements found in the books.sql file. Notice how the parameters in the statements used to create the tables translate into rows in the TABLES and COLUMNS tables.
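
To make the comparison concrete, here is a rough sketch of what the authors table definition in books.sql might look like, reconstructed from the COLUMNS output shown previously. The column lengths and the primary key are assumptions, so check the actual script:

-- Sketch only: column names, defaults, and nullability come from the
-- COLUMNS output above; the lengths and the primary key are assumptions.
CREATE TABLE authors (
    lastname    TINYTEXT    NOT NULL,
    firstname   TINYTEXT    NOT NULL,
    nationality VARCHAR(45) DEFAULT 'unknown',
    birthyear   INT         DEFAULT 1900,
    id_author   CHAR(5)     NOT NULL,
    PRIMARY KEY (id_author)
) ENGINE=InnoDB;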

How to do it...

  1. Create a new transformation and add a Table Input step that can connect to the information_schema database.

  2. Create a query that selects the TABLE_NAME and COLUMN_NAME columns from the COLUMNS table, making sure to filter on TABLE_SCHEMA = 'books' only (see the sample queries after these steps).

  3. Add a constant value using the Add constants step found under the Transform category. The value should be named grouper, with the type Integer and a value of 1. Create a hop from the Table Input step to the Add constants step.

  4. Add a Row Denormaliser step, found under the Transform category. The Key field should be the grouper column created in the previous step, and the Group field should be TABLE_NAME. Fill in the Target fields: section so that, for each table, the COLUMN_NAME values are combined into a single comma-separated field.

  5. Preview the Row Denormaliser step. For each table in the books database, you should see one record with a comma-separated list of that table's column names.

  6. Now finish this transformation by adding a Copy rows to result step and creating a hop from the Row Denormaliser step to the Copy rows to result step.

  7. Since we will be building a SQL query from these columns, the simplest approach is to use them as variables. Variables cannot be used in the same transformation in which they are set, and we will have multiple sets of variables, so we need to create a sub job and a parent job. Sub jobs are simply jobs within other jobs. Let's continue building the transformations needed, and then we will build the two jobs required to run this process.

  8. Create a second transformation. Add a Table Input step that will use the variables we will be creating from the data in the first transformation. Be sure to select the Replace variables in script? checkbox. The query should look like the following:

    SELECT ${column_list_par}
    FROM ${table_name_par}
  9. Add a Text file output step. For the Filename field, point to a location where the database table extracts can be written. The Filename can also use variables, so use ${table_name_par} as the file name. The Text file output step will store whatever data is in the stream without declaring any Fields, so leave the Fields tab empty.

  10. Create one last transformation that uses the Get rows from result step and loads the column_list_par and table_name_par variables using a Set Variables step, mapping each incoming field to its corresponding variable.

  11. Create a job. This will be the sub job that takes each record and executes a query. Bring over a START step, two Transformation steps, and a Success step. The first Transformation step should point to the transformation that sets the parameters used in the query; the second should point to the transformation that uses those parameters and extracts the query output into a text file. Create hops linking the steps in that order.

  12. Create another job. This will be the parent job for the whole process. Bring over a START step, a Transformation step, a Job step, and a Success step. The Transformation step should point to the transformation that extracts data from the information_schema database, and the Job step should point to the sub job created previously. Make sure that the Execute for every input row? checkbox is checked under the Advanced tab of the Job step. Create hops linking the steps in that order.

  13. Now execute the parent job. You should end up with one text file per table in the books database, each containing the data extracted from that table.
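
To make steps 2 and 8 concrete, here is a sketch of the two queries involved: the metadata query from step 2, and an example of what the parameterized query from step 8 resolves to once the variables have been set for the authors table. The resolved query is only an illustration, derived from the COLUMNS output in the Getting ready section:

SELECT
    TABLE_NAME
    , COLUMN_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books';

-- With ${table_name_par} = authors and
-- ${column_list_par} = lastname, firstname, nationality, birthyear, id_author,
-- the query from step 8 resolves to:
SELECT lastname, firstname, nationality, birthyear, id_author
FROM authors;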

How it works...

Most databases maintain a metadata repository that describes their tables and columns. This can be used to build dynamic queries that extract data for further processing. In this instance, when the books database was created, MySQL stored its metadata in the information_schema database. We then queried that database and used a Row Denormaliser step to merge the column details into a single comma-separated field, so that the generated query would execute properly.
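
As an aside, MySQL can build the same comma-separated column list directly with GROUP_CONCAT. The following query is not part of the recipe, but it illustrates exactly what the Row Denormaliser step produces for each table:

-- Equivalent of the Row Denormaliser output, for illustration only
SELECT
    TABLE_NAME
    , GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION) AS column_list
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'books'
GROUP BY TABLE_NAME;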

See also

  • Getting data from a database by providing parameters

  • Getting data from a database by running a query built at runtime

  • Performing repetitive database design tasks from PDI

  • The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations
