Summary
Summarizing text has long been considered a uniquely human skill, yet deep learning NLP models have made great strides in this area over the past two to three years, and summarization remains a very active area of research with many applications. In this chapter, we built a seq2seq model from scratch that summarizes sentences from news articles and generates a headline. Despite its simplicity, this model obtains fairly good results. Learning rate annealing allowed us to train the model over a long period, and checkpointing made training resilient, since it could be restarted from the last checkpoint in case of failure. Post-training, we improved the generated summaries through a custom implementation of beam search. Because beam search tends to favor short summaries, length normalization techniques were used to make the summaries even better.
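To make the length-normalization idea concrete, here is a minimal sketch of one common way to rescore beam search hypotheses. It uses a GNMT-style length penalty with a hypothetical `alpha` parameter; the chapter's own implementation may use a different formulation, such as dividing by the raw sequence length.

```python
import math

def length_normalized_score(token_log_probs, alpha=0.7):
    """Rescore a beam hypothesis given per-token log-probabilities.

    Raw beam search sums log-probabilities, so every extra token adds a
    negative term and longer summaries are systematically penalized.
    Dividing by a length penalty (here the GNMT-style
    ((5 + length) / 6) ** alpha) shrinks that penalty, letting longer
    hypotheses compete. alpha is an illustrative tuning knob, not a
    value taken from the chapter.
    """
    total = sum(token_log_probs)
    penalty = ((5 + len(token_log_probs)) / 6) ** alpha
    return total / penalty

# A longer hypothesis scores higher after normalization than its raw sum,
# narrowing the gap against a shorter competitor.
short = [-0.1, -0.2]
long = [-0.1, -0.2, -0.3, -0.2]
print(sum(long), length_normalized_score(long))
```

In practice, candidates on the final beam are ranked by this normalized score instead of the raw log-probability sum before the top summary is selected.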
Measuring the quality of generated summaries is a challenge in abstractive summarization. Here is a random example from the...