Writing an HTTP fail-over client with resume and partial downloading
You would like to create a fail-over client that resumes downloading a file if the download fails for any reason during the first attempt.
How to do it...
Let us download the Python 2.7.4 source archive from http://www.python.org. A resume_download()
function will resume any unfinished download of that file.
Listing 4.9 explains resuming a download as follows:
#!/usr/bin/env python
# Python Network Programming Cookbook -- Chapter - 4
# This program requires Python 3.5.2 or any later version
# It may run on any other version with/without modifications.
#
# Follow the comments inline to make it run on Python 2.7.x.

import urllib.request, urllib.parse, urllib.error
# Comment out the above line and uncomment the below for Python 2.7.x.
#import urllib

import os

TARGET_URL = 'http://python.org/ftp/python/2.7.4/'
TARGET_FILE = 'Python-2.7.4.tgz'

class CustomURLOpener(urllib.request.FancyURLopener):
    # Comment out the above line and uncomment...
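The listing above is cut off after the CustomURLOpener class definition, which subclasses urllib.request.FancyURLopener (deprecated in Python 3). Since the rest of Listing 4.9 is not reproduced here, the following is a minimal, hypothetical sketch of the same resume-and-partial-download idea using a plain urllib.request request with an HTTP Range header instead of FancyURLopener; the resume_download() name follows the text above, but its body is an assumption rather than the book's code.

#!/usr/bin/env python
# A minimal sketch of resume/partial downloading with a Range header.
# This is NOT the book's Listing 4.9; the function body is an assumption.
import os
import urllib.request
import urllib.error

# URL and file name taken from the listing above.
TARGET_URL = 'http://python.org/ftp/python/2.7.4/'
TARGET_FILE = 'Python-2.7.4.tgz'

def resume_download(url=TARGET_URL, out_file=TARGET_FILE, chunk_size=8192):
    """Download out_file from url, resuming from any existing partial file."""
    existing_size = os.path.getsize(out_file) if os.path.exists(out_file) else 0
    request = urllib.request.Request(url + out_file)
    if existing_size:
        # Ask the server only for the bytes we do not have yet.
        request.add_header('Range', 'bytes=%d-' % existing_size)
    try:
        response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as err:
        if err.code == 416:
            # Requested Range Not Satisfiable: the local file is already complete.
            print('%s is already fully downloaded.' % out_file)
            return
        raise
    # HTTP 206 (Partial Content) means the server honoured the Range header,
    # so append; HTTP 200 means it did not, so start over from the beginning.
    mode = 'ab' if response.getcode() == 206 else 'wb'
    with open(out_file, mode) as fh:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            fh.write(chunk)
    print('Download of %s finished (HTTP %d).' % (out_file, response.getcode()))

if __name__ == '__main__':
    resume_download()

If the script is interrupted and run again, only the missing bytes are fetched: the server answers a satisfiable Range request with 206 Partial Content, and the sketch appends those bytes to the existing partial file.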