LSTM (Long Short-Term Memory) is a particular recurrent neural network architecture, originally proposed by Hochreiter and Schmidhuber in 1997. It has recently been rediscovered in the context of deep learning because it largely avoids the vanishing gradient problem, and in practice it offers excellent results and performance.
LSTM-based networks are well suited to the prediction and classification of time series, and are supplanting many classical machine learning approaches. This is because LSTM networks can capture long-term dependencies in the data; in speech recognition, for example, this means exploiting the context within a sentence to improve recognition accuracy.
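To make the mechanism behind these properties concrete, the following is a minimal from-scratch sketch (not taken from the source) of a single LSTM cell step in NumPy. The key point is the additive cell-state update c = f * c_prev + i * g: because the cell state is carried forward by elementwise gating rather than repeated matrix multiplication, gradients can flow across many time steps, which is what mitigates the vanishing gradient problem. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step (illustrative sketch).

    x: input vector (n_in,)
    h_prev, c_prev: previous hidden and cell state (n_hid,)
    W: stacked gate weights (4*n_hid, n_in + n_hid)
    b: stacked gate biases (4*n_hid,)
    """
    n_hid = h_prev.shape[0]
    # All four gates computed from the concatenated input and hidden state.
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0 * n_hid:1 * n_hid])   # forget gate: what to keep from c_prev
    i = sigmoid(z[1 * n_hid:2 * n_hid])   # input gate: how much new info to write
    g = np.tanh(z[2 * n_hid:3 * n_hid])   # candidate cell update
    o = sigmoid(z[3 * n_hid:4 * n_hid])   # output gate: what to expose as h
    c = f * c_prev + i * g                # additive update: long-term memory path
    h = o * np.tanh(c)                    # new hidden state (short-term output)
    return h, c

# Toy usage: run a random 10-step sequence through the cell.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```

In a real application one would use a tested implementation (e.g. the LSTM layers in Keras or PyTorch) rather than this sketch, but the gating structure shown here is what those layers compute internally.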