The first practical speaker-independent, large-vocabulary, continuous speech recognition systems emerged in the 1990s. In the early 2000s, recognition engines from the leading startups Nuance and SpeechWorks powered many first-generation web-based voice services, such as TellMe, AOL by Phone, and BeVocal. The systems built in that era were based mainly on traditional Hidden Markov Models (HMMs) and required manually written grammars and quiet environments for the recognition engine to work accurately.
Modern speech recognition engines can recognize nearly any utterance spoken by people, even in noisy environments. They are based on end-to-end deep learning, in particular a type of deep neural network better suited to natural language processing called the recurrent neural network (RNN). Unlike traditional...
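To make concrete why RNNs fit continuous speech, the following is a minimal sketch, in plain NumPy, of the recurrence at an RNN's core: each incoming frame of audio features updates a hidden state that carries context from all earlier frames. Every name and dimension here is illustrative, not taken from any particular engine.

```python
import numpy as np

rng = np.random.default_rng(0)

feat_dim, hidden_dim = 40, 128  # per-frame feature size and state size (illustrative)
W_xh = rng.normal(scale=0.1, size=(hidden_dim, feat_dim))    # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

def rnn_forward(frames):
    """Run a plain (Elman) RNN over a sequence of feature frames.

    frames: array of shape (T, feat_dim); returns (T, hidden_dim) hidden states.
    """
    h = np.zeros(hidden_dim)
    states = []
    for x in frames:
        # The new state depends on the current frame AND the previous state,
        # so information from earlier in the utterance propagates forward.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

# A 2-second utterance at a typical 10 ms frame rate is about 200 frames.
utterance = rng.normal(size=(200, feat_dim))
print(rnn_forward(utterance).shape)  # (200, 128)
```

In a full end-to-end recognizer, the hidden states would feed further layers that emit character or word probabilities; this sketch shows only the sequential state update that lets the network model variable-length, context-dependent speech without hand-written grammars.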