Web scraping is a key component of data mining, and Julia provides a powerful toolbox for handling these tasks. In this chapter, we covered the fundamentals of building a web crawler. We learned how to request web pages with a Julia web client and read the responses, how to use Julia's powerful Dict data structure to work with HTTP information, how to make our software more resilient by handling errors, how to organize our code better by writing and documenting functions, and how to use conditional logic to make decisions.
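The techniques listed above can be recapped in one short sketch. This is a minimal illustration, assuming the third-party HTTP.jl package is installed; the function name `fetchpage` is our own choice for the example, not a library API.

```julia
using HTTP

"""
    fetchpage(url)

Fetch the page at `url` and return its body as a `String`,
or an empty string if the request fails or the page is not HTML.
"""
function fetchpage(url)
    # Resilience through error handling: a failed request
    # is caught rather than crashing the crawler.
    response = try
        HTTP.get(url)
    catch e
        @warn "Could not fetch $url"
        return ""
    end

    # Reading HTTP information with a Dict: the response headers
    # are key-value pairs we can collect into a Dict for lookup.
    headers = Dict(response.headers)
    contenttype = get(headers, "Content-Type", "")

    # Conditional logic: only keep HTML responses.
    if startswith(contenttype, "text/html")
        return String(response.body)
    end

    return ""
end
```

Calling `fetchpage("https://example.com")` would return the page's HTML, while an unreachable URL would log a warning and return an empty string.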
Armed with this knowledge, we built the first version of our web crawler. In the next chapter, we will improve it and use it to extract the data for our upcoming Wiki game. In the process, we'll dive deeper into the language, learning about types, methods, and modules, and how to interact with relational databases.
...