Now we know that Python can programmatically download a website as long as we have the URL. When we need to download multiple pages that differ only in the query string, we can write the script so that it downloads everything we need in a single run, instead of being rerun manually for each page.
Dynamic GET requests
How to do it...
Check out this URL: https://www.packtpub.com/all?search=&offset=12&rows=&sort=. Here, the query string variable that defines the page number, offset, takes values that are multiples of 12 (0, 12, 24, and so on), one value per page.
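As a quick illustration (not part of the recipe's original code), the following sketch shows one way to generate these paginated URLs by increasing offset in steps of 12; the five-page limit is an arbitrary assumption:

import urllib.parse

base_url = 'https://www.packtpub.com/all'

# Assume five pages of results for illustration only.
for page in range(5):
    query = urllib.parse.urlencode(
        {'search': '', 'offset': page * 12, 'rows': '', 'sort': ''})
    print(base_url + '?' + query)

Each printed URL matches the pattern shown above, with offset set to 0, 12, 24, 36, and 48.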
To download all the images on all of these pages, we can rewrite the previous recipe as follows:
- Import the required modules:
import urllib.request
import urllib.parse
import re
from os.path import basename
...
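For orientation, here is a minimal end-to-end sketch of the idea using only the modules imported above; it is an assumption-laden illustration, not the recipe's exact code. The regex for locating img tags, the five-page limit, and the helper names find_image_urls and download_image are all hypothetical.

import re
import urllib.parse
import urllib.request
from os.path import basename

BASE_URL = 'https://www.packtpub.com/all'

def find_image_urls(html):
    # Assumed regex: pull the src attribute out of each img tag in the raw HTML.
    return re.findall(r'<img[^>]+src="([^"]+)"', html)

def download_image(url):
    # Save the image under its own file name, taken from the URL path.
    file_name = basename(urllib.parse.urlsplit(url).path)
    if file_name:
        urllib.request.urlretrieve(url, file_name)

for page in range(5):  # Assume five pages for illustration.
    query = urllib.parse.urlencode(
        {'search': '', 'offset': page * 12, 'rows': '', 'sort': ''})
    with urllib.request.urlopen(BASE_URL + '?' + query) as response:
        html = response.read().decode('utf-8', errors='ignore')
    for image_url in find_image_urls(html):
        # Resolve relative image paths against the site before downloading.
        download_image(urllib.parse.urljoin(BASE_URL, image_url))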