Pentaho Data Integration Cookbook - Second Edition

The premier open source ETL tool is at your command with this recipe-packed cookbook. Learn to use data sources in Kettle, avoid pitfalls, and dig out the advanced features of Pentaho Data Integration the easy way.

eBook: €8.99 (RRP €32.99)
Paperback: €41.99
Subscription: free trial; renews at €18.99 p/m

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free: read whenever, wherever, and however you want

Chapter 2. Reading and Writing Files

In this chapter we will cover:

  • Reading a simple file

  • Reading several files at the same time

  • Reading semi-structured files

  • Reading files having one field per row

  • Reading files having some fields occupying two or more rows

  • Writing a simple file

  • Writing a semi-structured file

  • Providing the name of a file (for reading or writing) dynamically

  • Using the name of a file (or part of it) as a field

  • Reading an Excel file

  • Getting the value of specific cells in an Excel file

  • Writing an Excel file with several sheets

  • Writing an Excel file with a dynamic number of sheets

  • Reading data from an AWS S3 Instance

Introduction


Files are the most primitive, but also the most widely used, format for storing and interchanging data. PDI can read data from files of many kinds and formats, and it can also write back to files in different formats.

Reading and writing simple files is a very straightforward task. There are several steps under the Input and Output categories that allow you to do it: you pick the step, configure it quickly, and you are done. However, when the files you have to read or create are not simple, which happens most of the time, the task can become tedious if you don't know the tricks. In this chapter, you will learn not only the basics of reading and writing files, but also all the how-tos for dealing with them.

Note

This chapter covers plain files (.txt, .csv, and fixed width) and Excel files. For recipes on reading and writing XML files, refer to Chapter 4, Manipulating XML Structures.

Reading a simple file


In this recipe, you will learn the use of the Text file input step. In the example, you have to read a simple file with a list of authors' information like the following:

"lastname","firstname","country","birthyear"
"Larsson","Stieg","Swedish",1954
"King","Stephen","American",1947
"Hiaasen","Carl ","American",1953
"Handler","Chelsea ","American",1975
"Ingraham","Laura ","American",1964

Getting ready

In order to continue with the exercise, you must have a file named authors.txt similar to the one shown in the introduction section of this recipe.

How to do it...

Carry out the following steps:

  1. Create a new transformation.

  2. Drop a Text file input step to the canvas.

  3. Now, you have to type the name of the file (authors.txt) with its complete path. You do it in the File or directory textbox.

    Tip

    Alternatively, you can select the file by clicking on the Browse button and looking for the file. The textbox will be populated with the complete path of the file.

  4. Click on the Add button. The...
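
As a quick cross-check outside Kettle, here is a minimal Python sketch (not part of PDI) that parses the same authors.txt, assuming the comma-separated, quoted format shown above:

    import csv

    # authors.txt: quoted, comma-separated values; the first row is the header.
    with open("authors.txt", newline="") as f:
        for row in csv.DictReader(f):
            print(row["lastname"], row["firstname"], row["country"], row["birthyear"])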

Reading several files at the same time


Sometimes, you have several files to read, all with the same structure, but different data. In this recipe, you will see how to read those files in a single step. The example uses a list of files containing names of museums in Italy.

Getting ready

You must have a group of text files in a directory, all with the same format. In this recipe, the names of these files start with museums_italy_, for example, museums_italy_1, museums_italy_2, museums_italy_roma, museums_italy_genova, and so on.

Each file has a list of names of museums, one museum on each line.

How to do it...

Carry out the following steps:

  1. Create a new transformation.

  2. Drop a Text file input step onto the work area.

  3. Under the File or directory tab, type the directory where the files are.

  4. In the Regular Expression textbox, type museums_italy_.*\.txt.

  5. Then, click on the Add button. The grid will be populated, as shown in the following screenshot:

    Note

    ${Internal.Transformation.Filename.Directory} is a variable...
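
To see what the regular expression will match, here is a minimal Python sketch (outside Kettle) that applies the same pattern to a directory listing and concatenates the matching files; the directory path is a placeholder:

    import os
    import re

    pattern = re.compile(r"museums_italy_.*\.txt")   # same pattern as in the step
    directory = "."                                  # placeholder directory

    museums = []
    for name in sorted(os.listdir(directory)):
        if pattern.fullmatch(name):                  # note the escaped dot in the pattern
            with open(os.path.join(directory, name)) as f:
                museums.extend(line.rstrip("\n") for line in f)
    print(len(museums), "museums read")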

Reading semi-structured files


The simplest files to read are those where all rows follow the same pattern: each row has a fixed number of columns, and every column holds the same kind of data in every row. However, it is common to have files where the information does not follow that format. On many occasions, files have little or no structure; this is also called "semi-structured" formatting. Suppose you have a file with roller coaster descriptions that looks like the following:

JOURNEY TO ATLANTIS
SeaWorld Orlando

Journey to Atlantis is a unique thrill ride since it is ...
Roller Coaster Stats
Drop: 60 feet
Trains: 8 passenger boats
Train Mfg: Mack

KRAKEN
SeaWorld Orlando

Named after a legendary sea monster, Kraken is a ...
Kraken begins with a plunge from a height of 15-stories ...
Roller Coaster Stats
Height: 151 feet
Drop: 144 feet
Top Speed: 65 mph
Length: 4,177 feet
Inversions: 7
Trains: 3 - 32 passenger
Ride Time: 2 minutes, 2 seconds

KUMBA
Busch Gardens Tampa
.....
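
Before looking at the Kettle solution, it may help to see the parsing logic spelled out. This Python sketch (an illustration only, with a hypothetical filename) treats all-uppercase lines as record starts, the next line as the park, and "Key: value" lines as stats; a description sentence containing a colon would be mistaken for a stat, which is exactly the kind of fragility the recipe works around:

    coasters = []
    current = None
    with open("rollercoasters.txt") as f:       # hypothetical filename
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if line.isupper():                  # names like "KRAKEN" are all caps
                current = {"name": line}
                coasters.append(current)
            elif current and "park" not in current:
                current["park"] = line          # the line right after the name
            elif current and ":" in line:       # stat lines such as "Drop: 60 feet"
                key, value = line.split(":", 1)
                current[key.strip()] = value.strip()

    for coaster in coasters:
        print(coaster)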

Reading files having one field per row


When you use one of the Kettle steps meant for reading files, Kettle expects the data to be organized in rows, where the columns are the fields. Suppose that instead of having a file with that structure, your file has one attribute per row, as in the following example:

Mastering Joomla! 1.5 Extension and Framework Development
Published: November 2007
Our price: $30.99

CakePHP 1.3 Application Development Cookbook: RAW
Expected: December 2010
Our price: $24.99

Firebug 1.5: Editing, Debugging, and Monitoring Web Pages
Published: April 2010
Our price: $21.99

jQuery Reference Guide
...

This file contains book information. In the file, each book is described in three rows: one for the title, one for the published or expected publishing date, and one row for the price.

There is no direct way to tell Kettle how to interpret these rows, but a simple transformation can do the trick.
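
The trick is to group every three non-blank rows into one record. Here is a sketch of that logic in plain Python (hypothetical filename; the recipe itself does this with Kettle steps):

    books = []
    record = []
    with open("books.txt") as f:                # hypothetical filename
        for raw in f:
            line = raw.strip()
            if line:
                record.append(line)
            if len(record) == 3:                # title, date line, price line
                title, date_line, price_line = record
                status, pub_date = (p.strip() for p in date_line.split(":", 1))
                books.append({"title": title,
                              "status": status,         # "Published" or "Expected"
                              "date": pub_date,
                              "price": price_line.split("$", 1)[1]})
                record = []

    for book in books:
        print(book)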

Getting ready

Create a file containing the preceding text or download the sample...

Reading files having some fields occupying two or more rows


When you use one of the Kettle steps devoted to reading files, Kettle expects one entity per row. For example, if you are reading a file with a list of customers, Kettle expects one customer per row. Suppose instead that you have a file organized by rows, where the fields are in different columns, but some of the fields span several rows, as in the following example containing data about roller coasters:

Roller Coaster      Speed     Location                   Year
Kingda Ka           128 mph   Six Flags Great Adventure
                              Jackson, New Jersey        2005
Top Thrill Dragster 120 mph   Cedar Point
                              Sandusky, Ohio             2003
Dodonpa             106.8 mph Fuji-Q Highland
                              FujiYoshida-shi            2001
                              Japan
Steel Dragon 2000   95 mph    Nagashima Spa Land
                              Mie                        2000...
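
The logic needed here is a "fill down and merge": a row with a non-empty Roller Coaster column starts a new record, and the following rows only complete its Location (and eventually the Year). A minimal Python sketch of that idea, assuming the fixed-width offsets of the sample above (name 0-19, speed 20-29, location 30-56, year from 57) and a hypothetical filename:

    rows = []
    with open("coasters_fixed.txt") as f:       # hypothetical filename
        next(f)                                 # skip the header line
        for line in f:
            name = line[0:20].strip()
            speed = line[20:30].strip()
            location = line[30:57].strip()
            year = line[57:].strip()
            if name:                            # a new coaster starts here
                rows.append({"name": name, "speed": speed,
                             "location": location, "year": year})
            else:                               # continuation row: extend the last record
                rows[-1]["location"] += ", " + location
                if year:
                    rows[-1]["year"] = year

    for row in rows:
        print(row)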

Writing a simple file


In this recipe, you will learn the use of the Text file output step for writing text files.

Let's assume that you have a database with outdoor products and you want to export a catalog of products to a text file.

Getting ready

For this recipe, you will need a database with outdoor products with the structure explained in Appendix A, Data Structures.

How to do it...

Carry out the following steps:

  1. Create a new transformation.

  2. Drop a Table input step into the canvas. Enter the following SQL statement:

    SELECT innerj.desc_product, categories.category, innerj.price FROM products innerj
    INNER JOIN categories
    ON innerj.id_category = categories.id_category
  3. From the Output category, add a Text file output step.

  4. In the Filename textbox under the File tab, type or browse to the name of the destination file.

  5. In the Extension textbox, leave the default value txt.

  6. Check the Do not create file at start checkbox. This checkbox prevents the creation of the file when there is no data to write to...
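
For orientation, here is a rough equivalent of the Table input to Text file output flow in plain Python, using sqlite3 and csv only so the sketch is self-contained; the database filename is hypothetical, and the recipe itself runs against the book's sample database, not necessarily SQLite:

    import csv
    import sqlite3

    conn = sqlite3.connect("outdoor.db")        # hypothetical database file
    cursor = conn.execute("""
        SELECT innerj.desc_product, categories.category, innerj.price
        FROM products innerj
        INNER JOIN categories ON innerj.id_category = categories.id_category
    """)

    with open("catalog.txt", "w", newline="") as f:
        writer = csv.writer(f, delimiter=";")   # separator choice is arbitrary here
        writer.writerow(["desc_product", "category", "price"])
        writer.writerows(cursor)
    conn.close()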

Writing a semi-structured file


A standard file generated with Kettle has several columns (which may vary according to how you configured the Fields tab of the output step) and one row for each row in your dataset, all with the same structure. If you want the file to have a header, the header is automatically created from the names of the fields. What if you want to generate a file that is somehow different from that? Suppose that you have a file with a list of topics for a writing examination. When a student has to take the examination, you take that list of topics and generate a sheet like the following:

Student name: Mary Williams
-------------------------------------------------------------
Choose one of the following topics and write a paragraph about it
(write at least 300 words)

1. Should animals be used for medical research?
2. What do you think about the amount of violence on TV?
3. What does your country mean to you?
4. What would happen if there were no televisions?
5. What would...
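
The sheet mixes fixed header lines with one generated row per topic, which is what makes it semi-structured. A minimal Python sketch of that layout (filenames and data are placeholders; the recipe builds the same output with Kettle steps):

    student = "Mary Williams"                   # placeholder input
    topics = [
        "Should animals be used for medical research?",
        "What do you think about the amount of violence on TV?",
        "What does your country mean to you?",
        "What would happen if there were no televisions?",
    ]

    with open("exam_sheet.txt", "w") as f:      # hypothetical output filename
        f.write(f"Student name: {student}\n")
        f.write("-" * 60 + "\n")
        f.write("Choose one of the following topics and write a paragraph about it\n")
        f.write("(write at least 300 words)\n\n")
        for number, topic in enumerate(topics, start=1):
            f.write(f"{number}. {topic}\n")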

Providing the name of a file (for reading or writing) dynamically


Sometimes, you don't have the complete name of the file that you intend to read or write in your transformation. That can be because the name of the file depends on a field or on external information. Suppose you receive a text file with information about new books to process. This file is sent to you on a daily basis and the date is part of its name (for example, newBooks_20100927.txt).
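
The core of the recipe is turning today's date into the expected filename. As a short sketch of that logic (in the transformation itself the date comes from a Get System Info step, not from Python):

    from datetime import date

    filename = "newBooks_{}.txt".format(date.today().strftime("%Y%m%d"))
    print(filename)    # e.g. newBooks_20100927.txt on that date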

Getting ready

In order to follow this recipe, you must have a text file named newBooks_20100927.txt with sample book information such as the following:

"Title","Author","Price","Genre"
"The Da Vinci Code","Dan Brown","25.00","Fiction"
"Breaking Dawn","Stephenie Meyer","21.00","Children"
"Foundation","Isaac Asimov","38.50","Fiction"
"I, Robot","Isaac Asimov","39.99","Fiction"

How to do it...

Carry out the following steps:

  1. Create a new transformation.

  2. Drop a Get System Info step from the Input category into the canvas. Add a new field named today,...

Using the name of a file (or part of it) as a field


There are some occasions where you need to include the name of a file as a column in your dataset for further processing. With Kettle, you can do it in a very simple way.

In this example, you have several text files about camping products. Each file belongs to a different category and you know the category from the filename. For example, tents.txt contains tent products. You want to obtain a single dataset with all the products from these files including a field indicating the category of every product.

Getting ready

In order to run this exercise, you need a directory (campingProducts) with text files named kitchen.txt, lights.txt, sleeping_bags.txt, tents.txt, and tools.txt. Each file contains descriptions of the products and their price separated with a |. Consider the following example:

Swedish Firesteel - Army Model|$19.97
Mountain House #10 Can Freeze-Dried Food|$53.50
Coleman 70-Quart Xtreme Cooler (Blue)|$59.99
Kelsyus Floating Cooler...
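
The logic the recipe implements is: read every file in the directory, and attach the filename (minus its extension) to each row as the category. A minimal Python sketch of the same idea, assuming the pipe-separated layout shown above:

    import glob
    import os

    dataset = []
    for path in sorted(glob.glob("campingProducts/*.txt")):
        category = os.path.splitext(os.path.basename(path))[0]   # e.g. "tents"
        with open(path) as f:
            for line in f:
                description, price = line.rstrip("\n").split("|")
                dataset.append((description, price, category))

    print(dataset[:3])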

Reading an Excel file


Kettle provides the Excel input step in order to read data from Excel files. In this recipe, you will use this step to read an Excel file about museums in Italy. The file has a sheet with one column for the name of the museum and another for the city where it is located. The data starts in cell C3 (as shown in the screenshot in the next section).

Getting ready

For this example, you need an Excel file named museumsItaly.xls with a museums sheet, as shown in the following screenshot:

You can download a sample file from Packt's website.

How to do it...

Carry out the following steps:

  1. Create a new transformation.

  2. Drop an Excel input step from the Input category.

  3. Under the Files tab, browse to the museumsItaly.xls file and click on the Add button. This will cause the name of the file to be moved to the grid below.

  4. Under the Sheet tab, fill in the first row as follows: type museums in the Sheet name column, 2 in the Start row, and 2 in the Start column.

    Note

    The rows and columns...

Getting the value of specific cells in an Excel file


One of the good things about Excel files is that they give you the freedom to write anywhere on the sheets, which is sometimes good if you want to prioritize the look and feel. However, that can cause trouble when it's time to process the data in those files automatically. Suppose that you have an Excel file with values for a couple of variables you'd like to set, as shown in the following screenshot:

In this example, you want to set values for three variables: Year, ProductLine, and Origin. The problem is that you don't know where in the sheet the table is; it can be anywhere near the upper-left corner. As you cannot ask Kettle to scan "somewhere near the upper-left corner", in this recipe you will learn how to get that data with a simple transformation.
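
To make the scanning idea concrete, here is a minimal sketch with openpyxl (a Python library, not part of PDI). It assumes an .xlsx copy of the sheet (openpyxl does not read the older .xls format), a hypothetical filename, and that each value sits immediately to the right of its label, wherever the table happens to be:

    from openpyxl import load_workbook

    ws = load_workbook("variables.xlsx", data_only=True).active   # hypothetical file
    labels = {"Year", "ProductLine", "Origin"}
    found = {}
    for row in ws.iter_rows():
        for i, cell in enumerate(row[:-1]):
            if cell.value in labels:
                found[cell.value] = row[i + 1].value              # cell to the right

    print(found)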

Getting ready

Create an Excel file with the preceding table. Feel free to write the values anywhere within the first rows and columns, as long as the labels and values are...

Writing an Excel file with several sheets


Writing an Excel file with Kettle has a lot in common with writing a text file. Except for a couple of settings specific to Excel files, configuring an Excel Output step is quite similar to configuring a Text file output step. One of the differences is that when you write an Excel file, you add a sheet to the file. What if you want to write more than one sheet in the same file?

Suppose you have a data source containing books and their authors and you want to create an Excel file with two sheets. In the first sheet, you want the authors and in the second, the books' titles. This recipe teaches you how to do this.
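
For comparison, here is what the two-sheet result looks like when produced with openpyxl (outside PDI); the rows are placeholders standing in for the database described below:

    from openpyxl import Workbook

    authors = [("Larsson", "Stieg"), ("King", "Stephen")]         # placeholder rows
    titles = [("The Girl with the Dragon Tattoo",), ("Carrie",)]  # placeholder rows

    wb = Workbook()
    ws_authors = wb.active
    ws_authors.title = "Authors"
    ws_authors.append(("lastname", "firstname"))
    for row in authors:
        ws_authors.append(row)

    ws_titles = wb.create_sheet("Titles")
    ws_titles.append(("title",))
    for row in titles:
        ws_titles.append(row)

    wb.save("books.xlsx")                       # hypothetical output filename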

Getting ready

In order to run this recipe, you will need a database with books and authors with the structure described in Appendix A, Data Structures.

How to do it...

Carry out the following steps, in order to create the sheet with the authors' details:

  1. Create a new transformation.

  2. Drop a Table Input step into the canvas, in order to read the author...

Writing an Excel file with a dynamic number of sheets


When you generate an Excel file, you usually generate it with a single sheet. With PDI, however, you can generate a file with several sheets, even if you don't know in advance how many sheets you will generate, or the names of those sheets.

In this recipe, you will create such an Excel file. Your file will have book title information separated in different sheets depending on the genre of the books.
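
The essence of "dynamic" here is that sheets are created on demand as new genres appear in the data. A minimal sketch of that idea with openpyxl (outside PDI; rows and filename are placeholders):

    from openpyxl import Workbook

    books = [                                   # placeholder rows
        ("The Da Vinci Code", "Fiction"),
        ("Breaking Dawn", "Children"),
        ("Foundation", "Fiction"),
    ]

    wb = Workbook()
    wb.remove(wb.active)                        # drop the default empty sheet
    for title, genre in books:
        if genre not in wb.sheetnames:          # first time we see this genre
            wb.create_sheet(genre).append(("title",))
        wb[genre].append((title,))

    wb.save("booksByGenre.xlsx")                # hypothetical output filename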

Getting ready

You will need a database containing books and authors with the structure described in Appendix A, Data Structures.

How to do it...

Carry out the following steps:

  1. Create a new job.

  2. From the File Management category, drop a Delete file job entry into the work area.

  3. In the File name textbox, type the path and name of the Excel file you will create, in order to remove the file if it exists.

  4. Then, you have to add the two Transformation entries: one for selecting the book's categories (Transf_Categories...

Reading data from an AWS S3 Instance


Amazon Web Services has helped to reshape server management by providing enormous flexibility, with virtual machines that can be spun up or shut down almost as fast as a simple command. S3 is a scalable storage service that can be shared across virtual instances and is a common location for files to be processed. In this recipe, we will read information out of a file stored in S3.
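
For orientation, reading an object from S3 outside PDI takes only a few lines with boto3 (the AWS SDK for Python). The bucket and key names below are placeholders for whatever you create in the steps that follow, and credentials are assumed to be configured already:

    import boto3

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket="my-pdi-recipes", Key="authors.txt")   # placeholders
    for line in obj["Body"].read().decode("utf-8").splitlines():
        print(line)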

Note

This recipe requires access to AWS, which offers a free tier for new users. If you have used AWS in the past and no longer have access to the free tier, note that the recipe does not involve large transfers of data, so the expense will be minimal.

Getting ready

You will need to have access to the files for this recipe, which are available on Packt's website. You will also need to create an S3 bucket to upload the files to.

  1. Go to http://aws.amazon.com and create an account or log in.

  2. Once logged in, you should now see the AWS Console. Click on S3 under the Storage...


Key benefits

  • Integrate Kettle with other components of the Pentaho Business Intelligence Suite to build and publish Mondrian schemas, create reports, and populate dashboards
  • This book contains an organized sequence of recipes packed with screenshots, tables, and tips so you can complete the tasks as efficiently as possible
  • Manipulate your data by exploring, transforming, validating, integrating, and performing data analysis

Description

Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you.

Pentaho Data Integration Cookbook Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curve ball to even the most experienced developers. It provides updates to the material covered in the first edition as well as new recipes that show you how to use some of the key features of PDI that have been released since the publication of the first edition. You will learn how to work with various data sources: relational and NoSQL databases, flat files, XML files, and more. The book also covers best practices that you can take advantage of immediately within your own solutions, like building reusable code, data quality, and plugins that can add even more functionality.

Pentaho Data Integration Cookbook Second Edition provides recipes that cover the common pitfalls even seasoned developers can find themselves facing. You will also learn how to use various data sources in Kettle as well as advanced features.

Who is this book for?

Pentaho Data Integration Cookbook Second Edition is designed for developers who are familiar with the basics of Kettle but who wish to move up to the next level. It is also aimed at advanced users who want to learn how to use the new features of PDI as well as best practices for working with Kettle.

What you will learn

  • Configure Kettle to connect to relational and NoSQL databases and web applications like Salesforce, explore them, and perform CRUD operations
  • Utilize plugins to get even more functionality into your Kettle jobs
  • Embed Java code in your transformations to gain performance and flexibility
  • Execute and reuse transformations and jobs in different ways
  • Integrate Kettle with Pentaho Reporting, Pentaho Dashboards, Community Data Access, and the Pentaho BI Platform
  • Interface Kettle with cloud-based applications
  • Learn how to control and manipulate data flows
  • Utilize Kettle to create datasets for analytics
Product Details

Publication date: Dec 02, 2013
Length: 462 pages
Edition: 2nd
Language: English
ISBN-13: 9781783280674



Table of Contents

12 Chapters
  1. Working with Databases
  2. Reading and Writing Files
  3. Working with Big Data and Cloud Sources
  4. Manipulating XML Structures
  5. File Management
  6. Looking for Data
  7. Understanding and Optimizing Data Flows
  8. Executing and Re-using Jobs and Transformations
  9. Integrating Kettle and the Pentaho Suite
  10. Getting the Most Out of Kettle
  11. Utilizing Visualization Tools in Kettle
  12. Data Analytics

Customer reviews

Rating distribution: 3.8 out of 5 (8 ratings)

5 star: 37.5%
4 star: 25%
3 star: 12.5%
2 star: 25%
1 star: 0%

Russ, May 15, 2015 (5 stars)
This book builds on its predecessor. It picks up where the previous book left off and adds to the foundation. This book has already helped me several times.
(Amazon verified review)
Fábio de Salles, Mar 18, 2014 (5 stars)

A cookbook builds upon the reader's knowledge of some subject to teach her new recipes. For instance, have you ever seen a TV cook teach how to break eggs? You just learn it from seeing it, and then it's recipe time! Packt's new book, Pentaho Data Integration Cookbook 2nd Edition [...] (which they presented me for reviewing), released a couple of weeks ago, is the quintessential cookbook: it will not teach you how to install, run, or create new transformations and jobs (that is a job for the great PDI Beginner's Guide, also from Packt). Instead, it is going to teach you recipes, little HOW-TOs, on over a hundred subjects.

There are recipes about connecting to a database, how to parameterize it (and why to do it), how to read data from relational or NoSQL (like MongoDB) databases, how to feed it to a sub-transformation, then handling the streams and their metadata, up to finally writing everything to a Hadoop cluster, and so on. It does not stop at PDI: there are recipes on how to read and write data to the cloud (Amazon and Salesforce), how to run jobs and transformations from inside BI Server (effectively turning it into a data integration resource), and how to generate fully formatted Pentaho Reports from inside a PDI transformation OR with a transformation as a data source from inside PRD.

I've been reading the book for the last two weeks and, frankly, it seems endless: data analysis, optimization, sampling data for data mining, Instaview, Agile BI, reading SAS files, treating JSON and XML files, creating new functionalities (requires Java understanding but NO compilation or program building), sending e-mail, adding extra log messages, etc. Besides the sheer number of recipes, each one comes with some variation on the theme, occasionally with a discussion of the pros and cons of the possible options and alternatives (like Database Lookup vs. Database Join). So each recipe not only tells you how to do something, but also adds a lot of value by explaining how PDI works. When applicable, there are comments on performance issues and how to get the best out of PDI (my favorite one is at location 400: how to use an in-memory database to speed up lookups!). You can take a look at the complete recipe list on Packt's site (at [...]).

Every recipe builds on the reader's knowledge, but there is enough detail that even a novice user can run the recipe, without boring the more experienced reader. Figures are used sparingly, only when the authors feel one will better explain some setup. This helps to keep the book shorter (a 400+ page hulk already) but, on the other hand, demands more attention while reading. As with a lot of Packt books, this one has been written by non-native English speakers, which renders the language a bit "unfluid"; it does not hinder the reading but makes it uncomfortable at times.

This book (you can buy it here [...]) is a treasure chest, with great value for either new or seasoned PDI users. If you already know how to use PDI and want to get better with it, this book is a good read. If you want to learn how to use PDI, look for Packt's "PDI Beginner's Guide" and then read this book. It is a must-have!
(Amazon verified review)
Carlos Viana, Aug 31, 2014 (5 stars)
Very good. It's worth it.
(Amazon verified review)
Atul Madan, Mar 17, 2017 (4 stars)
The book is a great help and very practical.
(Amazon verified review)
Josep Curto, Mar 03, 2014 (4 stars)

In 2011, the first edition of "Pentaho Data Integration Cookbook" was published. At that moment in time, the book was interesting enough for a PDI (Pentaho Data Integration) developer, as it provided relevant answers for many of the common tasks that have to be carried out in data warehousing processes. After two years, the data market has greatly evolved. Among other trends, Big Data is a major one, and nowadays PDI includes numerous new features to connect to and use Hadoop and NoSQL databases.

The idea behind the second edition is to include some of the brand-new tasks required to tame Big Data using PDI and to update the content of the previous edition. Alex Meadows, from Red Hat, has joined the previous authors (María Carina Roldan and Adrián Sergio Pulvirenti) in this second edition. María is the author of four books about Pentaho Data Integration.

What is Pentaho Data Integration? I'm sure that many of you already know; for those who don't, PDI is an open source Swiss army knife of tools to extract, move, transform, and load data.

What is this book? To put it simply, it includes practical, handy recipes for many of the everyday situations a PDI developer faces. All recipes follow the same schema: state the problem; create a transformation or job to solve the problem; explain it in detail and point out potential pitfalls.

What is new? One question a potential reader may ask is: if I already have the previous edition, is it worth reading this one too? If you are a Pentaho Data Integration developer, the easy answer is yes, mainly because the book includes new chapters and sections on Big Data and Business Analytics, technologies that are becoming crucial core corporate capabilities in the information age. So, in my humble opinion, the most interesting chapters are Chapter 3, where the reader will have the chance to learn how to load and get data into Hadoop, HBase, and MongoDB, and Chapter 12, where the reader will be given the opportunity to read data from a SAS data file, create statistics from a data stream, and build a random data sample for Weka.

What is missing or could be improved? More screenshots; some readers will probably think the same. Being honest, while I'm happy with chapters 3 and 12, it would be interesting to have more content related to these topics. So, let's put it this way: I am counting down the days until the following edition.

In summary, an interesting book for PDI and data warehousing practitioners that gives some information about how to use PDI for Big Data and Analytics.
(Amazon verified review)

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing the second-to-next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order. (List of EU27 countries: www.gov.uk/eu-eea.)

For shipments to recipient countries outside the EU27, customs duty or localized taxes may apply. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions like weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $50, then to receive the package you will have to pay an additional import tax of 19% ($9.50) to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over €22, then to receive the package you will have to pay an additional import tax of 18% (€3.96) to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered (eBook, Video, or Print Book) incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e., during download), then you should contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, who will be able to resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multiple-item order, then we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal