Playing with Distributed TensorFlow: multiple servers
In this recipe, we will learn how to distribute a TensorFlow computation across multiple servers. The key assumption is that the code is the same for both the workers and the parameter servers. Therefore, the role of each computation node is passed as a command-line argument.
Getting ready
Again, this recipe is inspired by a good blog post written by Neil Tenenholtz, available online at https://clindatsci.com/blog/2017/5/31/distributed-tensorflow
How to do it...
We proceed with the recipe as follows:
- Consider this...
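As a minimal sketch of the pattern described above, the same script can be launched on every node, with the node's role selected by command-line flags. The cluster layout (one parameter server and two workers on localhost, and the specific ports) is a hypothetical choice for illustration; this uses the TensorFlow 1.x distributed API (`tf.train.ClusterSpec`, `tf.train.Server`, `tf.train.replica_device_setter`):

```python
import argparse

# Hypothetical cluster layout: one parameter server and two workers,
# all on localhost. In a real deployment these would be different hosts.
CLUSTER = {
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
}


def parse_role(argv=None):
    """Parse the flags that tell this node which role it plays."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--job_name", choices=["ps", "worker"],
                        required=True, help="role of this node")
    parser.add_argument("--task_index", type=int, default=0,
                        help="index within the job")
    return parser.parse_args(argv)


def main(argv=None):
    args = parse_role(argv)
    import tensorflow as tf  # TF 1.x distributed API

    cluster = tf.train.ClusterSpec(CLUSTER)
    server = tf.train.Server(cluster,
                             job_name=args.job_name,
                             task_index=args.task_index)

    if args.job_name == "ps":
        # Parameter servers block here, serving variables to the workers.
        server.join()
    else:
        # Workers build the graph; replica_device_setter places variables
        # on the parameter server and ops on this worker automatically.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % args.task_index,
                cluster=cluster)):
            pass  # model definition and training loop go here


if __name__ == "__main__":
    main()
```

Each node would then be started with a command such as `python script.py --job_name=worker --task_index=1`, so the single script covers every role in the cluster.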