Data Engineering with Scala and Spark

You're reading from Data Engineering with Scala and Spark: Build streaming and batch pipelines that process massive amounts of data using Scala

Product type: Paperback
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781804612583
Length: 300 pages
Edition: 1st Edition
Authors (3): Rupam Bhattacharjee, David Radford, Eric Tome
Table of Contents

Preface
Part 1 – Introduction to Data Engineering, Scala, and an Environment Setup
Chapter 1: Scala Essentials for Data Engineers
Chapter 2: Environment Setup
Part 2 – Data Ingestion, Transformation, Cleansing, and Profiling Using Scala and Spark
Chapter 3: An Introduction to Apache Spark and Its APIs – DataFrame, Dataset, and Spark SQL
Chapter 4: Working with Databases
Chapter 5: Object Stores and Data Lakes
Chapter 6: Understanding Data Transformation
Chapter 7: Data Profiling and Data Quality
Part 3 – Software Engineering Best Practices for Data Engineering in Scala
Chapter 8: Test-Driven Development, Code Health, and Maintainability
Chapter 9: CI/CD with GitHub
Part 4 – Productionalizing Data Engineering Pipelines – Orchestration and Tuning
Chapter 10: Data Pipeline Orchestration
Chapter 11: Performance Tuning
Part 5 – End-to-End Data Pipelines
Chapter 12: Building Batch Pipelines Using Spark and Scala
Chapter 13: Building Streaming Pipelines Using Spark and Scala
Index
Other Books You May Enjoy

Understanding functional programming

Functional programming is based on the principle that programs are constructed using only pure functions. A pure function has no side effects and only returns a result. Some examples of side effects are modifying a variable, modifying a data structure in place, and performing I/O. We can think of a pure function as being just like a regular algebraic function: its result is determined entirely by its inputs.

An example of a pure function is the length function on a string object. It only returns the length of the string and does nothing else, such as mutating a variable. Similarly, an integer addition function that takes two integers and returns an integer is a pure function.
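
To make this concrete, here is a small sketch (not taken from the book's examples) contrasting a pure addition function with an impure variant that mutates a variable outside its own scope:

// Pure: the result depends only on the inputs, and nothing else is affected
def add(a: Int, b: Int): Int = a + b

// Impure: every call updates external state (a side effect)
var callCount: Int = 0
def addAndCount(a: Int, b: Int): Int = {
  callCount += 1 // side effect: mutates a variable defined outside the function
  a + b
}

Calling add any number of times with the same arguments always yields the same result and leaves the rest of the program untouched, whereas each call to addAndCount also changes callCount.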

Two important aspects of functional programming are referential transparency (RT) and the substitution model. An expression is referentially transparent if all of its occurrences can be substituted by the result of the expression without altering the meaning of the program.

In the following example, Example 1.1, we set x and then use it to set r1 and r2, both of which have the same value:

scala> val x: String = "hello"
x: String = hello
scala> val r1 = x + " world!"
r1: String = hello world!
scala> val r2 = x + " world!"
r2: String = hello world!

Example 1.1

Now, if we replace x with the expression it refers to, r1 and r2 remain the same. In other words, x (whose value is "hello") is referentially transparent.

Example 1.2 shows the output from a Scala interpreter:

scala> val r1 = "hello" + " world!"
r1: String = hello world!
scala> val r2 = "hello" + " world!"
r2: String = hello world!

Example 1.2

Let’s now look at the following example, Example 1.3, where x is an instance of StringBuilder instead of String:

scala> val x = new StringBuilder("who")
x: StringBuilder = who
scala> val y = x.append(" am i?")
y: StringBuilder = who am i?
scala> val r1 = y.toString
r1: String = who am i?
scala> val r2 = y.toString
r2: String = who am i?

Example 1.3

If we substitute y with the expression it refers to, x.append(" am i?"), r1 and r2 will no longer be equal:

scala> val x = new StringBuilder("who")
x: StringBuilder = who
scala> val r1 = x.append(" am i?").toString
r1: String = who am i?
scala> val r2 = x.append(" am i?").toString
r2: String = who am i? am i?

Example 1.4

So, the expression x.append(" am i?") is not referentially transparent.

One of the advantages of the functional programming style is that it allows you to reason about code locally, without having to worry about whether a function updates any globally accessible mutable state. Also, since no shared mutable state is updated, it considerably simplifies building multi-threaded applications.
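
As a small illustration (a sketch, not code from the book), running a pure function concurrently with Scala's Future is safe precisely because it reads and writes no shared state, so the results are deterministic regardless of how the threads are scheduled:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// square is pure: no shared mutable state is read or written
def square(n: Int): Int = n * n

// Evaluating it from many threads cannot cause race conditions
val futures = (1 to 5).map(n => Future(square(n)))
val results = Await.result(Future.sequence(futures), 5.seconds)
// results: Vector(1, 4, 9, 16, 25), independent of scheduling order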

Another advantage is that pure functions are easier to test, as they do not depend on any state apart from the inputs supplied, and they produce the same output for the same input values.
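
For example, a pure function such as integer addition can be tested with plain assertions and no setup or teardown of state (the add function below is only illustrative):

def add(a: Int, b: Int): Int = a + b

// No fixtures needed: supply inputs and check the returned value
assert(add(2, 3) == 5)
assert(add(-1, 1) == 0)
// The same inputs always produce the same output
assert(add(2, 3) == add(2, 3))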

We won’t delve deeply into functional programming, as it is outside the scope of this book. Please refer to the Further reading section for additional material on functional programming. In the rest of this chapter, we will provide a high-level tour of some of the important language features that the subsequent chapters build upon.

In this section, we provided a very high-level introduction to functional programming. Starting with the next section, we will look at Scala language features that enable both functional and object-oriented programming styles.
