NLP model serving
Now, we will discuss NLP model serving. We will assume that the model has already been trained successfully on your own custom data:
- First, we define our questions and answers, as follows:
string1 = "Packt is a publisher"
string2 = "Who is Packt ?"
index_tokens = tokenizer.encode(string1, string2, add_special_tokens=True)
Basically, we define a question-and-answer pair as string1 and string2, and then tokenize both strings together into a single sequence of token IDs.
- Then, we convert the preceding tokens to torch tensors, as follows:
tokens_tensors = torch.tensor([index_tokens])
- Then, we can conduct NLP model serving for question answering, as follows:
with torch.no_grad():
    out = model(tokens_tensors, token_type_ids=segments_tensors)
    ans = tokenizer.decode(index_tokens [torch.argmax(out...
Here, segments_tensors holds the token type IDs that tell the model which tokens belong to the first string and which belong to the second. Note that the last line of this snippet is truncated in the source.
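The decoding step above relies on the fact that a question-answering head returns a start score and an end score for every token position, and the answer span runs from the argmax of the start scores to the argmax of the end scores. The following is a minimal sketch of that span-extraction logic using hypothetical, hand-written tokens and logits in place of a real tokenizer and model output (in the real pipeline, the scores come from model(tokens_tensors, token_type_ids=segments_tensors)):

```python
import torch

# Hypothetical tokenized sequence for the pair
# ("Packt is a publisher", "Who is Packt ?") -- illustrative only.
tokens = ["[CLS]", "packt", "is", "a", "publisher", "[SEP]",
          "who", "is", "packt", "?", "[SEP]"]

# Hypothetical start/end logits, one score per token position.
# A real model would produce these; here they are hand-written so that
# the answer span covers "packt is a publisher".
start_scores = torch.tensor([0.1, 6.0, 0.2, 0.1, 0.3, 0.0,
                             0.1, 0.1, 0.2, 0.1, 0.0])
end_scores   = torch.tensor([0.1, 0.2, 0.1, 0.3, 7.0, 0.0,
                             0.1, 0.1, 0.2, 0.1, 0.0])

# The answer span is [argmax(start), argmax(end)], inclusive.
start = torch.argmax(start_scores).item()
end = torch.argmax(end_scores).item()
answer = " ".join(tokens[start:end + 1])
print(answer)  # packt is a publisher
```

A real serving path would replace the hand-written tensors with the model's output and use tokenizer.decode on the selected token IDs, but the argmax-based span selection is the same.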