One of the most essential steps in any complex deep learning system that consumes large amounts of data is building an efficient dataset generator. This is especially relevant in our system because we will be dealing with both image and text data. On top of that, we will be working with sequence models, where the same data has to be passed to the model multiple times during training. Unpacking all the data into lists and pre-building every dataset up front would be the most inefficient way to tackle this problem, as it wastes memory on batches that could be produced on demand. Hence, we will leverage the power of Python generators for our system.
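Before we load the actual data, here is a minimal sketch of what such a generator looks like. All names here (`data_generator`, the toy `features` and `captions` lists) are illustrative placeholders, not the actual pipeline we build later; the point is only the pattern of yielding batches lazily and looping over the data indefinitely so a sequence model can consume it for many epochs:

```python
import numpy as np

def data_generator(image_features, captions, batch_size=2):
    """Yield (features, captions) batches indefinitely, so a sequence
    model can revisit the data every epoch without materializing all
    batches in memory up front."""
    num_samples = len(image_features)
    while True:  # loop forever; the training loop decides how many steps per epoch
        for start in range(0, num_samples, batch_size):
            end = start + batch_size
            yield (np.asarray(image_features[start:end]),
                   np.asarray(captions[start:end]))

# Toy stand-ins for transfer-learned image features and encoded captions
features = [np.random.rand(4) for _ in range(5)]
captions = [[1, 2, 3] for _ in range(5)]

gen = data_generator(features, captions, batch_size=2)
batch_x, batch_y = next(gen)  # first batch of 2 samples
```

Because the generator never terminates, the training loop (rather than the data) controls how many batches constitute an epoch, which is exactly the contract generator-based training APIs expect.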
To start with, we will load the image features learned via transfer learning, along with our vocabulary metadata, using the following code:
from sklearn.externals import joblib

tl_img_feature_map = joblib.load('transfer_learn_img_features.pkl')
vocab_metadata...