So far we have focused on simple web pages where all of the information we need is present in the initial HTML document. This is not the case for many modern websites, which rely on JavaScript to load additional information after the initial page loads. On such sites, performing a search might first render a page with an empty table and then, in the background, issue a second request to fetch the actual results. This behavior is implemented in custom JavaScript code that your web browser executes. The standard HTTP client is not sufficient here; you need to drive an external browser that supports JavaScript execution.
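To see the limitation concretely, consider fetching such a page with the standard library's HTTP client. The following is a minimal sketch (the URL is a placeholder for any JavaScript-driven search page): the response contains only the initial HTML, so markup that JavaScript would fill in afterwards is still empty.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Fetch the search page with the standard HTTP client.
	// The URL is a placeholder for a JavaScript-driven page.
	resp, err := http.Get("https://example.com/search?q=golang")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// The response holds only the initial HTML. Content that is
	// loaded later by JavaScript (e.g. the rows of a results table)
	// is missing, because no script has been executed.
	fmt.Println("contains a results table:",
		strings.Contains(string(body), "<table>"))
}
```

Running this against a JavaScript-heavy page typically prints the static shell of the document, which is why the rest of this section turns to browser automation.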
In Go, there are several options for integrating scraper code with web browsers, thanks to a few standard protocols. The WebDriver protocol is the original standard developed by...