Pentaho Data Integration 4 Cookbook

Over 70 recipes to solve ETL problems using Pentaho Kettle

Inserting new rows where the primary key has to be generated based on stored values


There are tables where the primary key is not a database sequence or a consecutive integer, but a column built according to a rule or pattern that depends on the keys already inserted. For example, imagine a table where the values for the primary key are A00001, A00002, and A00003. In this case you can guess the rule: an A followed by a zero-padded sequential number. The next key in the sequence would be A00004. This looks simple, but doing it in PDI is not trivial. This recipe teaches you how to load a table where the primary key has to be generated based on the existing rows, as in that example.
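
Just as an illustration of that rule (a minimal Python sketch, not part of the recipe), the next key can be derived from a stored key such as A00003 by stripping the prefix, incrementing the number, and zero-padding it again:

last_key = "A00003"                                   # current maximum stored in the table
next_key = "A" + str(int(last_key[1:]) + 1).zfill(5)  # -> 'A00004'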

Suppose that you have to load author data into the book's database. You have the main data for the authors, and you have to generate the primary key as in the example above.

Getting ready

Run the script that creates and loads data into the book's database. You'll find it at http://packtpub.com/support.

Before proceeding, verify the current values for the primary keys in the table where you will insert data:

SELECT MAX(id_author) FROM authors;
+----------------+
| MAX(id_author) |
+----------------+
| A00009         |
+----------------+
1 row in set (0.00 sec)

How to do it...

  1. Create a transformation and create a connection to the book's database.

  2. Use a Text file input step to read the authors.txt file.

    Note

    For simplicity, the authors.txt file only has new authors, that is, authors who are not in the table.

  3. To generate the next primary key, you need to know the current maximum, so use a Table Input step to get it. In this case, the following statement will give you that number:

    SELECT
    cast(max(right(id_author,5)) as unsigned) max_id
    FROM authors

    Note

    Alternatively, you can simply read the id_author field and transform it with Kettle steps until you get the current maximum. You will have a simpler, clearer query, but it will take several Kettle steps to do it.

  4. By using a Join Rows (cartesian product) step, join both streams. Your transformation should look like this:

  5. Add an Add sequence step. Replace the default value name valuename with delta_value. For the rest of the fields in the settings window, leave the default values.

  6. Add a Calculator step to build the keys. You do it by filling in the settings window as shown (a sketch of the equivalent key-building logic appears after this list):

  7. In order to insert the rows, add a Table output step, double-click it, and select the connection to the book's database.

  8. As Target table, type authors.

  9. Check the option Specify database fields.

  10. Select the Database fields tab and fill the grid as follows:

  11. Save and run the transformation.

  12. Explore the authors table. You should see the new authors:

    SELECT * FROM authors ORDER BY id_author;
    +----------+-----------+-------------+-----------+----------+
    | lastname | firstname | nationality | birthyear | id_author|
    +----------+-----------+-------------+-----------+----------+
    | Larsson  | Stieg     | Swedish     |      1954 | A00001   |
    | King     | Stephen   | American    |      1947 | A00002   |
    | Hiaasen  | Carl      | American    |      1953 | A00003   |
    | Handler  | Chelsea   | American    |      1975 | A00004   |
    | Ingraham | Laura     | American    |      1964 | A00005   |
    | Ramsey   | Dave      | American    |      1960 | A00006   |
    | Kiyosaki | Robert    | American    |      1947 | A00007   |
    | Rowling  | Joanne    | English     |      1965 | A00008   |
    | Riordan  | Rick      | American    |      1964 | A00009   |
    | Gilbert  | Elizabeth | unknown     |      1900 | A00010   |
    | Franzen  | Jonathan  | unknown     |      1900 | A00011   |
    | Collins  | Suzanne   | unknown     |      1900 | A00012   |
    | Blair    | Tony      | unknown     |      1900 | A00013   |
    +----------+-----------+-------------+-----------+----------+
    13 rows in set (0.00 sec)
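
If it helps to see the whole flow outside of PDI, here is a minimal Python sketch of what the transformation does, offered only as an illustration. It assumes a few things that are not part of the recipe: conn is an already open DB-API connection to the book's database created with a MySQL driver (hence the %s placeholders), authors.txt is a tab-separated file with the columns lastname, firstname, nationality, and birthyear, and the helper name load_new_authors is invented for the sketch.

import csv

def load_new_authors(conn, path="authors.txt"):
    cur = conn.cursor()

    # Step 3: numeric part of the current maximum key (9 for A00009).
    cur.execute(
        "SELECT CAST(MAX(RIGHT(id_author, 5)) AS UNSIGNED) FROM authors"
    )
    max_id = cur.fetchone()[0] or 0

    # Steps 4 to 6: add a sequence (1, 2, 3, ...) to max_id and format the
    # result as 'A' followed by a five-digit, zero-padded number.
    rows = []
    with open(path, newline="") as f:
        for delta, author in enumerate(csv.DictReader(f, delimiter="\t"), start=1):
            new_key = "A" + str(max_id + delta).zfill(5)
            rows.append((new_key, author["lastname"], author["firstname"],
                         author["nationality"], author["birthyear"]))

    # Steps 7 to 11: insert the new rows into the authors table.
    cur.executemany(
        "INSERT INTO authors (id_author, lastname, firstname, nationality, birthyear) "
        "VALUES (%s, %s, %s, %s, %s)",
        rows,
    )
    conn.commit()

The key-building line mirrors the Add sequence and Calculator steps; everything else is plain reading and inserting.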

How it works...

When you have to generate a primary key based on the existing primary keys, there is no direct way to do it in Kettle, unless the new key can be obtained simply by adding one to the current maximum. One possible solution is the one shown in this recipe: get the last primary key in the table, combine it with your main stream, and use those two sources to generate the new primary keys. This is how it worked in this example.

First, by using a Table Input step, you found out the last primary key in the table. In fact, you got only the numeric part needed to build the new key. In this exercise, the value was 9. With the Join Rows (cartesian product) step, you added that value as a new column in your main stream.

Taking that number as a starting point, you needed to build the new primary keys as A00010, A00011, and so on. You did this by generating a sequence (1, 2, 3, and so on), adding this sequence to max_id (which led to the values 10, 11, 12, and so on), and finally formatting the key with the Calculator step.

Note that in the Calculator step the first A+B performs an arithmetic calculation: it adds max_id and the delta_value sequence. It then converts the result to a String, formatting it with the mask 00000. This led to the values 00010, 00011, and so on.

The second A+B is a string concatenation. It concatenates the literal A with the previously calculated ID.
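
Expressed outside of Kettle, and purely as an illustration of those two Calculator rows (the variable names just mirror the recipe's field names), the computation for the first new author looks like this:

max_id = 9        # numeric part of A00009, coming from the Table Input step
delta_value = 1   # first value produced by the Add sequence step

numeric_id = max_id + delta_value    # first A + B: 9 + 1 = 10
padded_id = "%05d" % numeric_id      # conversion to String with the 00000 mask -> '00010'
id_author = "A" + padded_id          # second A + B: concatenation -> 'A00010'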

Note that this approach works only in a single-user scenario. If you run multiple instances of the transformation at the same time, they may read the same maximum value and then try to insert rows with the same primary key, causing a primary key constraint violation.

There's more...

The key in this exercise is to get the last or maximum primary key in the table, join it to your main stream, and use that data to build the new key. After the join, the mechanism for building the final key would depend on your particular case.

See also

  • Inserting new rows where a simple primary key has to be generated: if the primary key to be generated is simply a sequence, refer to that recipe instead.
