Pentaho Data Integration Cookbook - Second Edition


Changing the database connection at runtime


Sometimes you have several databases with exactly the same structure, each serving a different purpose. Here are some typical situations:

  • A database for the information that is being updated daily and one or more databases for historical data.

  • A different database for each branch of your business.

  • A database for your sandbox, a second database for the staging area, and a third database serving as the production environment.

In any of those situations, it's likely that you need access to one or the other depending on certain conditions, or you may even have to access all of them one after the other. Not only that, the number of databases may not be fixed; it may change over time (for example, when a new branch is opened).

Suppose you face the second scenario: your company has several branches, and the sales for each branch are stored in a different database. The database structure is the same for all branches; the only difference is that each of them holds different data. Now you want to generate a file with the total sales for the current year in every branch.

Getting ready

Download the material for this recipe. You will find a sample file with database connections to three branches. It looks like the following:

branch,host,database
0001 (headquarters),localhost,sales2010
0002,183.43.2.33,sales
0003,233.22.1.97,sales

If you intend to run the transformation, modify the file so it points to real databases.

How to do it...

Perform the following steps to dynamically change database connections:

  1. Create a transformation with a Text file input step that reads the file containing the connection data.

  2. Add a Copy rows to results step to the transformation. Create a hop going from Text file input to Copy rows to results.

  3. Create a second transformation and define the following named parameters: BRANCH, HOST_NAME, and DATABASE_NAME. Named parameters can be created by right-clicking on the transformation and selecting Transformation settings. Switch to the Parameters tab and enter the named parameters.

  4. Create a database connection. Choose the proper Connection Type and fill in the Settings data: type values for the Port Number, User Name, and Password fields. For Host Name, type ${HOST_NAME}, and for Database Name, type ${DATABASE_NAME}.

  5. Use a Table Input step for getting the total sales from the database. Use the connection just defined.

  6. Use a Text file output step for sending the sales summary to a text file. Don't forget to check the Append option under the Content tab of the settings window.

  7. Create a job with two Transformation job entries, linked one after the other.

  8. Use the first entry to call the first transformation you created and the second entry to call the second transformation, so that the job runs them in sequence.

  9. Double-click on the second transformation entry, select the Advanced tab, and check the Copy previous results to parameters? and the Execute for every input row? checkboxes.

  10. Select the Parameters tab and map each named parameter to the corresponding field coming from the first transformation: BRANCH to branch, HOST_NAME to host, and DATABASE_NAME to database.

  11. Save both transformations. Save the job and run it.

  12. Open the generated text file. It should contain one line of sales information for each database listed in the connection file.

How it works...

If you have to connect to several databases, and you don't know in advance which or how many databases you will have to connect to, you can't rely on a connection with fixed values, nor on variables defined in a single place such as the kettle.properties file (located in the Kettle home directory). In those situations, the best you can do is define a connection with variables and set the values of the variables at runtime.
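If the set of databases were fixed, you could simply define the variables once in kettle.properties; the following lines are only illustrative values:

HOST_NAME=localhost
DATABASE_NAME=sales2010

Since the list of branches can grow or change over time, the recipe instead sets those values on every execution, one row of the connection file at a time.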

In the recipe, you created a text file with a summary sales line for each database in a list.

The transformation that wrote the sales line used a connection with variables defined as named parameters. This means that whoever calls the transformation has to provide the proper values.

The main job loops on the list of database connections. For each row in that list, it calls the transformation copying the values from the file to the parameters in the transformation. In other words, each time the transformation runs, the named parameters are instantiated with the values coming from the file.
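To make the looping pattern concrete, here is a minimal sketch of the same logic in Python, assuming the branch databases are PostgreSQL servers reachable through the psycopg2 driver, and that each holds a sales table with amount and sale_date columns; the file names, credentials, and table layout are all assumptions for illustration:

import csv
import psycopg2  # assumed driver; any DB-API-compliant module works the same way

# Read the list of branch databases (same layout as the recipe's sample file).
with open('branches.csv') as f:
    branches = list(csv.DictReader(f))  # columns: branch, host, database

with open('total_sales.txt', 'a') as out:  # append, like the Text file output step
    for row in branches:
        # One connection per row: host and database change at runtime,
        # just as ${HOST_NAME} and ${DATABASE_NAME} do in the Kettle connection.
        conn = psycopg2.connect(host=row['host'], dbname=row['database'],
                                user='etl_user', password='secret')  # hypothetical credentials
        with conn, conn.cursor() as cur:
            # Roughly what the Table Input step's query might look like.
            cur.execute("SELECT COALESCE(SUM(amount), 0) FROM sales "
                        "WHERE EXTRACT(YEAR FROM sale_date) = "
                        "EXTRACT(YEAR FROM CURRENT_DATE)")
            total = cur.fetchone()[0]
        out.write('%s,%s\n' % (row['branch'], total))
        conn.close()

Each pass through the loop plays the role of one run of the second transformation: the connection values come from the current row, and one summary line is appended to the output file.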

There's more...

In the recipe, you changed the host and the name of the database. You could have parameterized any of the values that make up a database connection, for example, the username and the password.

See also

  • Connecting to a database

  • The Executing part of a job once for every row in a dataset recipe in Chapter 8, Executing and Re-using Jobs and Transformations

You have been reading a chapter from
Pentaho Data Integration Cookbook - Second Edition
Published in: Dec 2013
Publisher: Packt
ISBN-13: 9781783280674