Introduction
The process of data indexing can be divided into several parts, and one of the crucial parts is data analysis. Analysis defines how the text sent to a field will be divided into terms and what type the field will have. In Solr, this behavior is defined by field types. A type's analysis behavior can be defined in the context of the indexing process, the query process, or both. Furthermore, the type definition is composed of an analyzer (or two analyzers, one for querying and one for indexing), and each analyzer consists of a tokenizer and filters (both token and character filters). An analyzer specifies how the whole data sent to a field will be processed; the tokenizer then splits that data into a stream of objects called tokens. An analyzer can have only one tokenizer.
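To make this concrete, here is a minimal sketch of a field type definition as it might appear in a Solr schema. The type name `text_example` and the synonyms file are illustrative, but the factory classes shown (`solr.StandardTokenizerFactory`, `solr.LowerCaseFilterFactory`, `solr.SynonymGraphFilterFactory`) are standard Solr components. Note the two analyzers, one for indexing and one for querying, each with exactly one tokenizer:

```xml
<!-- Illustrative field type; the name and synonyms file are assumptions -->
<fieldType name="text_example" class="solr.TextField">
  <!-- Analyzer used when documents are indexed -->
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- Analyzer used when queries are processed -->
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
  </analyzer>
</fieldType>
```

Defining separate index-time and query-time analyzers is useful when a filter, such as synonym expansion, should apply only on one side of the process.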
Next in the analysis chain are the filters, which operate on the tokens in the token stream. They can do almost anything with those tokens: changing them, removing them,...