Building a vector quantizer
Vector Quantization is a quantization technique where the input data is represented by a fixed number of representative points. It is the N-dimensional equivalent of rounding off a number. This technique is commonly used in fields such as voice/image recognition, semantic analysis, and image/voice compression. The history of optimal vector quantization theory goes back to the 1950s at Bell Labs, where research was carried out to optimize signal transmission using discretization procedures. One advantage of vector quantizer neural networks is their high interpretability. Let's see how we can build a vector quantizer.
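Before building the network, the core idea can be illustrated with a short, self-contained sketch: each input vector is simply replaced by the nearest entry in a fixed codebook of representative points. The codebook and test points below are hypothetical values chosen only for illustration; they are not part of this recipe's dataset.

import numpy as np

# Hypothetical codebook: four representative points in 2-D space
codebook = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])

def quantize(points, codebook):
    # Distance from every input point to every codebook entry
    distances = np.linalg.norm(points[:, None, :] - codebook[None, :, :], axis=2)
    # Replace each point by its nearest representative point
    nearest = np.argmin(distances, axis=1)
    return codebook[nearest]

points = np.array([[0.3, 0.2], [3.7, 4.1]])
print(quantize(points, codebook))
# [[0. 0.]
#  [4. 4.]]

In other words, the quantizer "rounds" each input vector to the closest of the representative points, which is exactly the N-dimensional analogue of rounding a number.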
Due to some issues with the current version of NeuroLab (v. 0.3.5), running the following code will throw an error. Fortunately, there is a fix for this, but it involves making a change in the NeuroLab package. Changing line 179 of the net.py file in the NeuroLab package from layer_out.np['w'][n][st:i].fill(1.0) so that the slice indices are cast to integers, that is, layer_out.np['w'][n][int(st):int(i)].fill(1.0), should fix the issue.
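With that fix in place, a minimal learning vector quantization (LVQ) network can be put together with NeuroLab's newlvq function. The sketch below is an assumption-laden illustration rather than the recipe's exact script: it generates a small synthetic dataset of four 2-D clusters, and the cluster centers, sample counts, number of competitive-layer neurons, and class fractions are all illustrative choices.

import numpy as np
import neurolab as nl

# Synthetic example data: four Gaussian clusters in 2-D, one per class
# (centers, spread, and sample counts are arbitrary illustrative values)
np.random.seed(0)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
data = np.vstack([c + 0.5 * np.random.randn(50, 2) for c in centers])

# One-hot class labels, in the same order as the stacked clusters
labels = np.repeat(np.eye(4), 50, axis=0)

# LVQ network: 10 neurons in the competitive layer, 4 output classes,
# each class assumed to hold an equal fraction of the data
net = nl.net.newlvq(nl.tool.minmax(data), 10, [0.25, 0.25, 0.25, 0.25])

# Train the network; goal=-1 disables the error goal so training runs
# for the full number of epochs
error = net.train(data, labels, epochs=500, goal=-1)

# Quantize new points: each output row is the one-hot code of the
# representative class the input point is mapped to
print(net.sim(np.array([[0.2, 0.1], [3.8, 4.2]])))

When simulated on new inputs, the network maps each point to one of the learned representative regions, so nearby points collapse onto the same class code, which is the quantization behavior described above.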