Serverless Design Patterns and Best Practices

You're reading from Serverless Design Patterns and Best Practices: Build, secure, and deploy enterprise-ready serverless applications with AWS to improve developer productivity.

Product type: Paperback
Published: Apr 2018
Publisher: Packt
ISBN-13: 9781788620642
Length: 260 pages
Edition: 1st
Author: Brian Zambrano
Table of Contents (12 chapters)

Preface
1. Introduction
2. A Three-Tier Web Application Using REST
3. A Three-Tier Web Application Pattern with GraphQL
4. Integrating Legacy APIs with the Proxy Pattern
5. Scaling Out with the Fan-Out Pattern
6. Asynchronous Processing with the Messaging Pattern
7. Data Processing Using the Lambda Pattern
8. The MapReduce Pattern
9. Deployment and CI/CD Patterns
10. Error Handling and Best Practices
11. Other Books You May Enjoy

Processing Enron emails with serverless MapReduce


I've based our example application on the Enron email corpus, which is publicly available on Kaggle. The dataset contains roughly 500,000 emails from the Enron Corporation and is approximately 1.5 GB in total. Our job is to count From-To email pairs: for each person who sent an email, we generate a count of how many times they emailed each particular recipient.
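As a concrete sketch of that counting logic, separate from any of the serverless plumbing, the map step can parse each raw message's From and To headers and tally (sender, recipient) pairs, while the reduce step simply merges the per-chunk tallies. The function names here are illustrative, not part of the book's framework:

```python
from collections import Counter
from email.parser import Parser


def map_messages(raw_messages):
    """Map step: count (sender, recipient) pairs in one chunk of raw emails."""
    counts = Counter()
    parser = Parser()
    for raw in raw_messages:
        msg = parser.parsestr(raw, headersonly=True)
        sender = (msg.get("From") or "").strip().lower()
        for recipient in (msg.get("To") or "").split(","):
            recipient = recipient.strip().lower()
            if sender and recipient:
                counts[(sender, recipient)] += 1
    return counts


def reduce_counts(partial_counts):
    """Reduce step: merge per-chunk counters into one overall From-To count."""
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total
```

Because the map output is just a counter keyed by (sender, recipient), the reduce step is a straightforward merge, which is what makes this problem a natural fit for the MapReduce pattern.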

Note

Anyone may download and work with this dataset: https://www.kaggle.com/wcukierski/enron-email-dataset. The original data from Kaggle comes as a single CSV file. To make it work with this example MapReduce program, I broke the single ~1.4 GB file into roughly 100 MB chunks. Throughout this example, keep in mind that we start from 14 separate files on S3.
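If you want to reproduce that preprocessing step yourself, here is a rough sketch. It assumes a local copy of the Kaggle CSV and an S3 bucket you control; the bucket name and key prefix are made up for illustration. It splits on row boundaries rather than raw bytes, because the message column can span many lines:

```python
import csv
import io

import boto3

CHUNK_BYTES = 100 * 1024 * 1024         # target chunk size, ~100 MB
BUCKET = "my-enron-mapreduce-input"     # illustrative bucket name

csv.field_size_limit(10 * 1024 * 1024)  # raw messages overflow the default field limit
s3 = boto3.client("s3")


def split_and_upload(csv_path):
    """Split the ~1.4 GB Kaggle CSV into ~100 MB row-aligned chunks on S3."""
    with open(csv_path, newline="", encoding="utf-8") as source:
        reader = csv.reader(source)
        header = next(reader)
        part, rows_in_chunk = 0, 0
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        writer.writerow(header)
        for row in reader:
            writer.writerow(row)
            rows_in_chunk += 1
            if buffer.tell() >= CHUNK_BYTES:
                s3.put_object(Bucket=BUCKET,
                              Key=f"input/enron-part-{part:02d}.csv",
                              Body=buffer.getvalue())
                part, rows_in_chunk = part + 1, 0
                buffer = io.StringIO()
                writer = csv.writer(buffer)
                writer.writerow(header)
        if rows_in_chunk:
            # Upload whatever is left as the final, smaller chunk.
            s3.put_object(Bucket=BUCKET,
                          Key=f"input/enron-part-{part:02d}.csv",
                          Body=buffer.getvalue())
```

Repeating the header row in every chunk keeps each S3 object independently parseable, which is exactly what the mapper functions need.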

Our dataset is a CSV with two columns, the first being the email message's location (on the mail server, presumably) and the second...
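To make that concrete, here is a hedged sketch of what the mapper side might look like as a Lambda handler: it pulls one chunk from S3, walks the CSV rows, and reuses the map_messages counting function from the earlier sketch (shown here as an import from an illustrative module). The event shape (bucket/key) is an assumption for illustration, as is the premise that the second CSV column holds the raw message text:

```python
import csv
import io

import boto3

from enron_counts import map_messages  # the counting sketch above, saved as enron_counts.py (illustrative)

csv.field_size_limit(10 * 1024 * 1024)  # raw messages overflow the default field limit
s3 = boto3.client("s3")


def mapper_handler(event, context):
    """Lambda mapper sketch: read one CSV chunk from S3 and count From-To pairs.

    Assumes the invoking event carries the chunk's bucket and key, and that the
    CSV's second column holds the raw email message text.
    """
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    rows = csv.reader(io.StringIO(obj["Body"].read().decode("utf-8")))
    next(rows)  # skip the header row
    raw_messages = (message for _location, message in rows)
    counts = map_messages(raw_messages)
    # Flatten tuple keys so the partial result can be passed on as JSON to the reducer.
    return {f"{sender}->{recipient}": n for (sender, recipient), n in counts.items()}
```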
