Programming MapReduce with Scalding

Programming MapReduce with Scalding: A practical guide to designing, testing, and implementing complex MapReduce applications in Scala

Antonios Chalkiopoulos
4.3 (6 Ratings)
Paperback | Jun 2014 | 148 pages | 1st Edition


Programming MapReduce with Scalding

Chapter 1. Introduction to MapReduce

In this first chapter, we will take a look at the core technologies used in the distributed model of Hadoop; more specifically, we cover the following:

  • The Hadoop platform and the framework it provides
  • The MapReduce programming model
  • Technologies built on top of MapReduce that provide an abstraction layer and an API that is easier to understand and work with

In the following diagram, Hadoop stands at the base, and MapReduce as a design pattern enables the execution of distributed jobs. MapReduce is a low-level programming model. Thus, a number of libraries such as Cascading, Pig, and Hive provide alternative APIs and are compiled into MapReduce. Cascading, which is a Java application framework, has a number of extensions in functional programming languages, with Scalding being the one presented in this book.

[Figure: Introduction to MapReduce]

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are Hadoop Distributed File System (HDFS) and MapReduce.

HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all the file blocks that exist in the filesystem, tracking where the file data is kept across the cluster. The actual data of the files is stored on multiple DataNodes, the second service.
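
As a small illustration of the NameNode's role, the following Scala sketch asks it for the block locations of a file. It assumes the Hadoop client libraries are on the classpath and uses a hypothetical /logs/file.logs path:

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.{FileSystem, Path}

  object BlockLocations {
    def main(args: Array[String]): Unit = {
      // Picks up core-site.xml and hdfs-site.xml from the classpath
      val fs = FileSystem.get(new Configuration())
      val status = fs.getFileStatus(new Path("/logs/file.logs"))
      // Answered from NameNode metadata: one entry per block,
      // listing the DataNodes that hold a replica of it
      val blocks = fs.getFileBlockLocations(status, 0, status.getLen)
      blocks.foreach { b =>
        println(s"offset=${b.getOffset} length=${b.getLength} hosts=${b.getHosts.mkString(",")}")
      }
    }
  }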

MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm on a cluster. The most prominent trait of Hadoop is that it brings processing to the data; MapReduce executes tasks as close to the data as possible, as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data and then orchestrates the execution of the map and reduce tasks. The actual tasks are executed on multiple TaskTracker nodes.

Hadoop automatically handles infrastructure failures such as network issues and node or disk failures. Overall, it provides a framework for distributed storage within its distributed filesystem and for the execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization.

Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects exist, ranging from batch to hybrid and real-time execution.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies it by providing a design pattern that requires algorithms to be expressed as map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations.

By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or between reduce tasks of the same phase. Any required communication happens at the end of each phase.

The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks executed in parallel, thus achieving scalable performance.

Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.
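
This contract can be written down as plain Scala function signatures; the following is a minimal sketch, and the type aliases and hash partitioner are illustrative rather than Hadoop's actual API:

  object MapReduceModel {
    // A map task turns each input pair into zero or more new pairs...
    type MapFn[K1, V1, K2, V2] = (K1, V1) => Seq[(K2, V2)]

    // ...and a reduce task sees one key together with all of the values emitted for it.
    type ReduceFn[K2, V2, K3, V3] = (K2, Seq[V2]) => Seq[(K3, V3)]

    // Partitioning decides which reduce task receives a given key,
    // for example by hashing it.
    def partition[K](key: K, numReducers: Int): Int =
      (key.hashCode & Int.MaxValue) % numReducers
  }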

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format:

INFO      MyApp  - Entering application.
WARNING   com.foo.Bar - Timeout accessing DB - Retrying
ERROR     com.foo.Bar  - Did it again!
INFO      MyApp  - Exiting application

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed across multiple Hadoop nodes. In order to build a MapReduce job to count the number of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases.

In one map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After the map phase completes, data is partitioned based on the key and transmitted to the reduce tasks.

MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 on the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store results.
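
To make the data flow concrete, here is a tiny, purely local Scala simulation of the same counting job; no Hadoop is involved, and the input is just the four sample lines shown earlier:

  object LocalLogLevelCount {
    val lines = Seq(
      "INFO      MyApp  - Entering application.",
      "WARNING   com.foo.Bar - Timeout accessing DB - Retrying",
      "ERROR     com.foo.Bar  - Did it again!",
      "INFO      MyApp  - Exiting application"
    )

    def main(args: Array[String]): Unit = {
      // Map phase: every line becomes a (level, 1) pair
      val mapped = lines.map(line => (line.trim.split("\\s+")(0), 1))
      // Shuffle: group the pairs by key, as the framework does before reducing
      val shuffled = mapped.groupBy { case (level, _) => level }
      // Reduce phase: sum the 1s for every level
      val counts = shuffled.map { case (level, pairs) => (level, pairs.map(_._2).sum) }
      counts.foreach { case (level, count) => println(s"$level\t$count") }
    }
  }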

In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message:

[Figure: A MapReduce example]

Implementing the preceding MapReduce algorithm in Java requires the following three classes (the first two are sketched after the list):

  • A Map class to map lines into <key,value> pairs; for example, <"INFO",1>
  • A Reduce class to aggregate counters
  • A Job configuration class to define input and output types for all <key,value> pairs and the input and output files
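
A sketch of the first two classes follows, written here in Scala against Hadoop's Java MapReduce API; the class and field names are illustrative, and the Job configuration class is omitted:

  import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
  import org.apache.hadoop.mapreduce.{Mapper, Reducer}

  // Map: emit a ("LEVEL", 1) pair for every log line
  class LogLevelMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
    private val one = new IntWritable(1)
    private val level = new Text()

    override def map(key: LongWritable, value: Text,
                     context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit = {
      val tokens = value.toString.trim.split("\\s+")
      if (tokens.nonEmpty) {
        level.set(tokens(0)) // the first token is the log level
        context.write(level, one)
      }
    }
  }

  // Reduce: sum the 1s emitted for each level
  class LogLevelReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
    override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                        context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
      var sum = 0
      val it = values.iterator()
      while (it.hasNext) sum += it.next().get()
      context.write(key, new IntWritable(sum))
    }
  }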

MapReduce abstractions

This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following:

SELECT level, COUNT(*) FROM table GROUP BY level

Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone.

However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that are awkward to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream.

Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn.

However, Pig is a purpose-built language; it excels at simple data flows, but it is inefficient for implementing non-trivial algorithms.

In Pig, the same example can be implemented as follows:

LogLine    = load 'file.logs' as (level, message);
LevelGroup = group LogLine by level;
Result     = foreach LevelGroup generate group, COUNT(LogLine);
store Result into 'Results.txt';

Both Pig and Hive support extra functionality through loadable user-defined functions (UDFs) implemented in Java classes.

Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired by the original chain-of-responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows.

Developers with functional programming backgrounds quickly introduced new domain-specific languages that leverage its capabilities. Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, implemented in Scala, Clojure, and Python, respectively.

Introducing Cascading

Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think at a higher level and follow Behavior-Driven Development (BDD) and Test-Driven Development (TDD) to provide more value and quality to the business.

Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with.

In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: a source, where the input data comes from, and a sink, where the data gets stored.

[Figure: Introducing Cascading - pipes, source taps, and a sink tap forming a flow]

In the preceding image, three pipes are connected into a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade:

[Figure: Introducing Cascading - three flows forming a cascade]

The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planners ensure that no flow or cascade is executed until all of its dependencies are satisfied.

The preceding abstraction makes it easy to use a whiteboard to design and discuss data processing logic. We can now work at a productive, higher-level abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and extract, transform, and load (ETL) jobs.

By abstracting away the complexity of key-value pairs and the map and reduce phases of MapReduce, Cascading provides an API that many other technologies are built on.

What happens inside a pipe

Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed-size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects of different types.

Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time; at one point in a pipe, a tuple of size one can receive an operation and be transformed into a tuple of size three.

To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with the schema 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and the pipe then contains tuples of size four: 'epoch, 'time, 'user, and 'action.

[Figure: What happens inside a pipe]
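
In Scalding's fields-based API, that schema evolution might look like the following sketch; parseJson and toEpoch are hypothetical helpers standing in for a JSON library and a date parser:

  import com.twitter.scalding._

  class JsonEventsJob(args: Args) extends Job(args) {
    // Hypothetical helpers, not part of Scalding
    def parseJson(json: String): (String, String, String) = ???
    def toEpoch(time: String): Long = ???

    TextLine(args("input"))
      // the single input field is renamed to 'jsonLine; tuples have size one here
      .rename('line -> 'jsonLine)
      // mapTo replaces the tuple: size one -> size three
      .mapTo('jsonLine -> ('time, 'user, 'action)) { json: String => parseJson(json) }
      // map keeps the existing fields and appends 'epoch: size three -> size four
      .map('time -> 'epoch) { time: String => toEpoch(time) }
      .write(Tsv(args("output")))
  }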

Pipe assemblies

Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies:

  • Each: To apply a function or a filter to each tuple
  • GroupBy: To create a group of tuples by defining which element to use and to merge pipes that contain tuples with similar schemas
  • Every: To perform aggregations (count, sum) and buffer operations to every group of tuples
  • CoGroup: To apply SQL-type joins, for example, Inner, Outer, Left, or Right joins (a join is sketched after the following figure)
  • SubAssembly: To chain multiple pipe assemblies into a pipe
[Figure: Pipe assemblies]
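
CoGroup is the only assembly not used by the logfile example that follows, so here is a small sketch of an inner join of two pipes in Scalding; the field names and inputs are illustrative, and joinWithSmaller compiles down to a Cascading CoGroup:

  import com.twitter.scalding._

  class JoinLevelsJob(args: Args) extends Job(args) {
    val counts   = Tsv(args("counts"), ('level, 'count)).read
    val severity = Tsv(args("severity"), ('lvl, 'weight)).read

    counts
      // inner join on the level; put the smaller data set on the right-hand side
      .joinWithSmaller('level -> 'lvl, severity)
      .write(Tsv(args("output")))
  }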

To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: the Each assembly generates a tuple with two elements (level/message), the GroupBy assembly groups on the level, and then the Every assembly performs the count aggregation.

We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires 20 lines of code; in Scala/Scalding, the boilerplate is reduced to just the following:

  TextLine(inputFile)
  .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
  .groupBy('level) { _.size }
  .write(Tsv(outputFile))

Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed.
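
For context, the preceding fragment might sit inside a complete Scalding job along the following lines; the class name, the tokenize helper, and the argument names are assumptions made for this sketch:

  import com.twitter.scalding._

  class LogLevelCountJob(args: Args) extends Job(args) {
    // Split a raw log line into (level, message); a simple whitespace split is assumed
    def tokenize(line: String): (String, String) = {
      val trimmed = line.trim
      val idx = trimmed.indexOf(' ')
      if (idx < 0) (trimmed, "") else (trimmed.take(idx), trimmed.drop(idx).trim)
    }

    TextLine(args("input"))                     // source tap
      .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
      .groupBy('level) { _.size }               // count the lines in every level group
      .write(Tsv(args("output")))               // sink tap
  }

Such a job is typically launched through com.twitter.scalding.Tool, passing a --local or --hdfs flag along with the --input and --output arguments, although the exact invocation depends on your setup.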

Cascading extensions

Cascading offers multiple extensions that can be used as taps to read data from or write data to SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm.

A data processing application, for example, can use taps to collect data from a SQL database and some more data from the Hadoop filesystem. It can then process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some of the resulting data in another SQL database and update a mem-cache application.

[Figure: Cascading extensions]

Summary

The pipelining abstraction works really well with the Hadoop ecosystem and other state-of-the-art messaging technologies. Cascading provides the blueprint for pipelining MapReduce. As a framework, it offers a frame in which to build applications: it comes with several decisions already made, and it provides a foundation, including support structures, that allows us to get started and deliver results quickly.

Unlike Hive and Pig, where user-defined functionality is separated from the query language, Cascading integrates everything into a single language. Functional and scalable languages follow lightweight, modular, high-performance, and testable principles. Scalding combines functional programming with Cascading and brings the best of both worlds, providing an unmatched way of developing distributed applications.

In the next chapter, we will introduce Scala, set up our environment, and demonstrate the power and expressiveness of Scalding when building MapReduce applications.


Description

This book is an easy-to-understand, practical guide to designing, testing, and implementing complex MapReduce applications in Scala using the Scalding framework. It is packed with examples featuring log processing, ad targeting, and machine learning. This book is for developers who want to learn how to develop MapReduce applications effectively. Prior knowledge of Hadoop or Scala is not required; however, investing some time in those topics would certainly be beneficial.

What you will learn

  • Set up an environment to execute jobs in local and Hadoop mode
  • Preview the complete Scalding API through examples and illustrations
  • Learn about Scalding capabilities, testing, and pipelining jobs
  • Understand the concepts of MapReduce patterns and the applications of its ecosystem
  • Implement logfile analysis and ad-targeting applications using best practices
  • Apply a test-driven development (TDD) methodology and structure Scalding applications in a modular and testable way
  • Interact with external NoSQL and SQL data stores from Scalding
  • Deploy, schedule, monitor, and maintain production systems

Product Details

Publication date : Jun 25, 2014
Length: 148 pages
Edition : 1st
Language : English
ISBN-13 : 9781783287017



Table of Contents

1. Introduction to MapReduce
2. Get Ready for Scalding
3. Scalding by Example
4. Intermediate Examples
5. Scalding Design Patterns
6. Testing and TDD
7. Running Scalding in Production
8. Using External Data Stores
9. Matrix Calculations and Machine Learning
Index

Customer reviews

Rating distribution: 4.3 (6 Ratings)
5 star: 50%
4 star: 33.3%
3 star: 16.7%
2 star: 0%
1 star: 0%

soulmachine Feb 23, 2015
5 stars
This book is very easy to understand because it has many tiny examples that are very detailed to let you understand core APIs quickly. Chapter 4, "Intermediate Examples", elaborates on two complete examples. Chapter 5, "Scalding Design Patterns", introduces three kinds of design patterns that are very practical and insightful. The only weakness of this book is that it uses the fields-based API, but in my opinion, type-safe APIs are more modern and elegant; however, all the knowledge of the fields-based API can apply to the type-safe API seamlessly.
Amazon Verified review
Sujit Pal Jul 17, 2014
5 stars
Scalding is a small but very powerful and expressive Scala DSL built on top of Cascading, itself a Java API that exposes relational algebra constructs that expand to Map and Reduce operators in the backend. Scalding was developed at Twitter and open sourced - it has reasonably good documentation on GitHub and support is available on their Google Groups mailing list. There is also Paco Nathan's Cascading book where Scalding gets a chapter. However, this is the first book devoted completely to Scalding, and it does a great job making the Scalding API accessible to a broader audience. The target audience for this book is someone who is somewhat familiar with Scala and Hadoop, though not necessarily an expert at either. The author describes the behavior of various Scalding operations using before and after diagrams on small datasets which I thought was very helpful in understanding the API. The book covers the original fields based API and the typed API, and finally the Matrix API, all through case examples that increase in complexity as more advanced features are explained. There is also some coverage on making Scalding work with various NoSQL databases using custom Taps rather than just files. I am not an expert at Scalding, but I have used it in the past so I was quite familiar with some features of the DSL. But having read this book, I have a much better idea of Scalding's capabilities and how I can use them. DISCLAIMER - I did not buy this book, I requested a copy from a PackT representative because I was interested in learning more about Scalding and thought this book may help (I was right), and I thought my perspective as someone somewhat familiar with Scalding would be useful for other readers.
Amazon Verified review
Si Dunn Jul 29, 2014
5 stars
Programming MapReduce with Scalding offers clear, well-illustrated, smoothly paced how-to steps, as well as easy-to-digest definitions and descriptions. It takes the reader from setting up and running a Hadoop mini-cluster and local-development environment to applying Scalding to real-use cases, as well as developing good test and test-driven development methodologies, running Scalding in production, using external data stores, and applying matrix calculations and machine learning. The book is written for developers who have at least "a basic understanding" of Hadoop and MapReduce, but is also intended for experienced Hadoop developers who may be "enlightened by this alternative methodology of developing MapReduce applications with Scalding." It does help to be somewhat familiar with MapReduce, Scalding, Scala, Hadoop, Maven, Eclipse and the Linux environment. But Antonios Chalkiopoulos does a good job of keeping the examples accessible even when readers are new to some of the packages.
Amazon Verified review
Tushar Kapila Oct 28, 2015
4 stars
A few more examples with mixed joins, and how to refer to the elements in a fold, would be useful for everyday work.
Amazon Verified review
tom Oct 22, 2015
4 stars
It's got good coverage and examples to get you decent with the fields API, but it's missing a bit on explaining how the code would translate into mappers and reducers and what you should watch for to optimize your code. It's also outdated since people are using the typed API now.
Amazon Verified review
