To get the most out of this book
This book assumes a foundational understanding of data sources and types, including relational databases, NoSQL stores, flat files, and APIs, as well as familiarity with common data formats such as CSV, JSON, and XML. It builds on these basics to explore data integration models, architectures, and patterns, with practical applications across various industries. Prior experience with SQL and an understanding of its role in data transformation will be beneficial, as will knowledge of data storage technologies and architectures.
| Software/hardware covered in the book | Operating system requirements |
| --- | --- |
| SQL and data transformation | Windows, macOS, or Linux |
| Massively parallel processing systems | Windows, macOS, or Linux |
| Spark for data transformation | Windows, macOS, or Linux |
| Data storage technologies (data warehouses, data lakes, and object storage) | Windows, macOS, or Linux |
| Data modeling techniques | Windows, macOS, or Linux |
| Data integration models (ETL and ELT) | Windows, macOS, or Linux |
| Data exposition technologies (streams, REST APIs, and GraphQL) | Windows, macOS, or Linux |
If you are using the digital version of this book, we advise you to type the code yourself or access it from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid potential errors related to copying and pasting code.
The following are some additional installation instructions and information:
- You should have a stable internet connection to access the online resources and repositories mentioned in the book.
- Familiarize yourself with basic command-line operations, as they are commonly used to set up and manage data environments.
- Installation of a database system that supports SQL, such as MySQL or PostgreSQL, may be required to follow the practical examples; a minimal stand-in is sketched in the first code block after this list.
- For massively parallel processing systems and Spark, ensure that Java is installed on your system, as it is required to run Spark-based applications; a quick smoke test is sketched in the second code block after this list.
- It’s recommended to have a code editor or an Integrated Development Environment (IDE) that supports database management and big data processing, such as PyCharm, Jupyter, or Visual Studio Code, to facilitate writing and testing code.
- The software versions used in the examples are current as of the book’s publication. You should always check for the latest versions to ensure compatibility and access to the latest features.
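
If you don’t yet have a database server installed, the following minimal sketch uses Python’s built-in `sqlite3` module as a lightweight stand-in for MySQL or PostgreSQL, just to exercise a SQL transformation end to end. The `orders` table and its columns are illustrative only and are not taken from the book’s examples:

```python
# A minimal sketch of a SQL-based transformation, assuming only the
# Python standard library. SQLite stands in for a server-based database
# such as MySQL or PostgreSQL; the table and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # In-memory database; nothing touches disk.
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 45.5)],
)

# A simple aggregation, the kind of transformation the book's SQL examples build on.
for region, total in conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
):
    print(region, total)

conn.close()
```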
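
To confirm that Spark and Java are wired up correctly, a smoke test along these lines can help. It assumes the `pyspark` package is installed (for example, via `pip install pyspark`) and a compatible Java runtime is on your `PATH`; the two-row DataFrame is a throwaway stand-in, not a dataset from the book:

```python
# A minimal sketch to verify a local Spark setup. Assumes `pyspark`
# is installed and Java is available on the PATH.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("smoke-test")
    .master("local[*]")  # Run Spark locally, using all available cores.
    .getOrCreate()
)

# A tiny in-memory DataFrame stands in for a real data source.
df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
df.show()  # If this prints both rows, Spark and Java are working together.

spark.stop()
```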