One of the most useful emerging techniques in the ML domain today is transfer learning; it lets a model trained on one problem be reused on another, and pretrained models offer high portability across different frameworks and platforms.
Once you've trained a neural network, what you get is a set of trained parameter (weight) values. For example, LeNet-5 has about 60k parameters, AlexNet has about 60 million, and VGG-16 has about 138 million. These architectures are trained on anywhere from thousands to millions of images and are typically very deep, stacking many layers, which is what gives rise to so many parameters.
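As a quick check of these numbers, here is a minimal sketch, assuming a TensorFlow/Keras setup, that restores the ImageNet-pretrained VGG-16 and counts its parameters:

```python
from tensorflow.keras.applications import VGG16

# Download VGG-16 together with its ImageNet-trained weights
# (the first run fetches the weight file from the Keras servers).
model = VGG16(weights="imagenet")

# Prints 138,357,544 -- the roughly 138 million parameters cited above.
print(model.count_params())
```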
Many open source contributors, and even the tech giants, have made such pretrained models publicly available for research (and industry) use, so that they can be restored and reused to solve similar problems. For example, suppose...
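To illustrate what restoring and reusing such a model can look like, here is a minimal sketch, again assuming TensorFlow/Keras, with a hypothetical 10-class target task: the pretrained VGG-16 convolutional base is frozen, and only a small new classifier head is trained.

```python
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG16

# Restore only the convolutional base, dropping the ImageNet classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained parameters

# Attach a new head for the hypothetical 10-class problem.
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(10, activation="softmax")(x)
model = Model(base.input, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, ...)  # trains only the new head
```

Because the base is frozen, training updates only the new head's weights, which is why a few thousand labeled images can suffice where training from scratch would need millions.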