The movie review text needs to be preprocessed and converted into numerical tokens, each corresponding to a word in the corpus. The Keras Tokenizer will be used to map words to numerical indices, or tokens, keeping the 50,000 most frequent words. Each movie review is restricted to a maximum of 1,000 word tokens; if a review has fewer than 1,000 tokens, it is padded with zeros at the beginning. After preprocessing, the data is split into train, validation, and test sets, and the Keras Tokenizer object is saved for use during inference.
The detailed code (preprocess.py) for preprocessing the movie reviews is as follows:
# -*- coding: utf-8 -*-
"""
Created on Sun Jun 17 22:36:00 2018
@author: santanu
"""
import numpy as np
import pandas as pd
import os
import re
from keras.preprocessing...
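Since the listing above is cut off, the core preprocessing steps can be illustrated with a minimal pure-Python sketch. This is not the author's preprocess.py: the function names `build_vocab` and `texts_to_padded_sequences` are hypothetical, and the code only mimics what the Keras Tokenizer and `pad_sequences` (with their default 'pre' padding and truncation) do, i.e., index the most frequent words and left-pad shorter reviews with zeros.

```python
import re
from collections import Counter

def build_vocab(texts, num_words=50000):
    # Count word frequencies across the corpus and keep the
    # num_words most frequent; index 0 is reserved for padding.
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    vocab = {}
    for i, (word, _) in enumerate(counts.most_common(num_words), start=1):
        vocab[word] = i
    return vocab

def texts_to_padded_sequences(texts, vocab, maxlen=1000):
    # Convert each review to token indices, keep at most the last
    # maxlen tokens, and left-pad shorter reviews with zeros, as
    # described in the text above.
    sequences = []
    for text in texts:
        tokens = [vocab[w] for w in re.findall(r"[a-z']+", text.lower())
                  if w in vocab]
        tokens = tokens[-maxlen:]
        padded = [0] * (maxlen - len(tokens)) + tokens
        sequences.append(padded)
    return sequences
```

In practice the Keras Tokenizer handles the vocabulary building and `pad_sequences` the zero-padding, but the sketch makes explicit why every processed review ends up as a fixed-length integer vector suitable for batching.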