Getting Started with Talend Open Studio for Data Integration

This is the complete course for anybody who wants to get to grips with Talend Open Studio for Data Integration. From the basics of transferring data to complex integration processes, it will give you a head start.

Product type: Paperback
Published: Nov 2012
Publisher: Packt
ISBN-13: 9781849514729
Length: 320 pages
Edition: 1st
Author: Jonathan Bowen
Table of Contents

Getting Started with Talend Open Studio for Data Integration
Credits
Foreword
About the Author
Acknowledgement
About the Reviewers
www.PacktPub.com
Preface
1. Knowing Talend Open Studio
2. Working with Talend Open Studio
3. Transforming Files
4. Working with Databases
5. Filtering, Sorting, and Other Processing Techniques
6. Managing Files
7. Job Orchestration
8. Managing Jobs
9. Global Variables and Contexts
10. Worked Examples
Installing Sample Jobs and Data
Resources
Index

Duplicating and merging dataflows


Our final section in this chapter will look at how we can duplicate and merge dataflows. Duplicating dataflows is particularly useful as it allows us to undertake different processing on the same data without having to read a file twice or query a database twice. Merging dataflows allows us to take data from different sources and rationalize it into a single dataflow.
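As a rough plain-Python analogy of the merging idea (in Talend this is the job of a component such as tUnite, once both inputs share a schema; the field names and sample rows below are invented for illustration):

```python
# Plain-Python analogy of merging dataflows: rows from two different
# sources are consolidated into a single flow with a common schema.
# All field names and sample rows here are invented for illustration.

csv_rows = [{"id": 1, "source": "csv"}]
db_rows = [{"id": 2, "source": "db"}]

# Merge the two flows into one (conceptually what tUnite does after
# both inputs have been mapped to the same schema).
merged = csv_rows + db_rows

print(len(merged))  # 2 rows in the single merged dataflow
```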

Duplicating data

Open the job DuplicatingData from the Resources directory.

It starts with a simple database query. The resulting dataflow is replicated using a tReplicate component and then passed to two processing streams. In this case the processing is very simple: each stream is filtered for rows from region1 or region3 respectively. As noted previously, the processing on each stream could be completely different, for example, one flow being extracted to a CSV file while the other is transformed and imported into a different database.
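The replicate-then-filter pattern can be sketched in plain Python (this is only a conceptual analogy of what tReplicate and the two filters do in the job, not Talend-generated code; the field names and sample rows are invented for illustration):

```python
# Conceptual sketch of the DuplicatingData job: tReplicate hands one
# dataflow to several identical streams, so the source is read once;
# each stream is then filtered independently.
# Field names and sample rows are invented for illustration.

rows = [
    {"id": 1, "region": "region1", "value": 100},
    {"id": 2, "region": "region2", "value": 200},
    {"id": 3, "region": "region3", "value": 300},
]

# "tReplicate": the same data is passed to two processing streams.
stream_a, stream_b = rows, rows

# Independent filters on each replicated stream.
region1_rows = [r for r in stream_a if r["region"] == "region1"]
region3_rows = [r for r in stream_b if r["region"] == "region3"]

print(region1_rows)  # only the region1 rows
print(region3_rows)  # only the region3 rows
```

Because the filters operate on replicated copies of the same flow, the database is queried only once no matter how many downstream streams there are.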

Tip

The tReplicate...
