Now that you have a good understanding of how web pages are accessed on the internet through client-server interactions, let's see how we can do this with Julia.
The most common web clients are web browsers—apps such as Chrome or Firefox. However, these are meant to be used by humans, rendering web pages with fancy styled UIs and sophisticated interactions. Web scraping can certainly be done manually through a web browser, but the most efficient and scalable way is through a fully automated, software-driven process. Although web browsers can be automated (with something like Selenium from https://www.seleniumhq.org), doing so is more difficult, error-prone, and resource-intensive. For most use cases, the preferred approach is to use a dedicated HTTP client.
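As a taste of what a dedicated HTTP client looks like in Julia, here is a minimal sketch using the HTTP.jl package (an assumption on my part—it would need to be installed first with `pkg> add HTTP`). It fetches a page and inspects the response; the URL is just an illustrative choice:

```julia
using HTTP

# Perform an HTTP GET request, just like a browser does under the hood
resp = HTTP.get("https://julialang.org")

println(resp.status)        # the HTTP status code, e.g. 200 on success
body = String(resp.body)    # the raw response body, converted to a String
println(first(body, 100))   # peek at the first 100 characters of the HTML
```

Unlike a browser, the client simply returns the raw HTML—no rendering, no JavaScript execution—which is exactly what we want for scraping.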