SOFM, VQ and LVQ
- Feature Maps
- Geometric order in the brain
- sensory mapping
- Hubel and Wiesel - visual hypercolumns
- Kohonen's SOFM
- Hebbian learning with forgetting
- topological neighborhood
- algorithm description
- activity bubble formation
- relation to LBG
- Experimental results
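The SOFM outline above can be made concrete with a minimal sketch. The function name, grid size, Gaussian neighborhood, and linear decay schedules below are illustrative assumptions, not Kohonen's exact parameters; the key ingredients are the best-matching unit, the shrinking topological neighborhood, and the Hebbian-style weight update.

```python
import numpy as np

def train_sofm(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Train a 2-D self-organizing feature map (Kohonen SOFM).

    Each weight vector moves toward the current input, scaled by a
    Gaussian neighborhood over grid distance; both the learning rate
    and the neighborhood width shrink over time, so global order
    forms first and is then refined locally.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    if sigma0 is None:
        sigma0 = max(rows, cols) / 2.0
    w = rng.uniform(data.min(), data.max(), (rows * cols, data.shape[1]))
    # grid coordinates of each unit, used for neighborhood distances
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            lr = lr0 * (1 - frac)                  # decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3     # decaying neighborhood width
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))     # topological neighborhood
            w += lr * h[:, None] * (x - w)         # move neighbors toward input
            t += 1
    return w
```

After training on 2-D data, plotting the weight vectors connected by grid adjacency shows the familiar unfolding mesh.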
Vector Quantization (VQ)
- Principal Components Analysis (PCA)
- Karhunen-Loève transform
- use in pattern recognition
- Oja's neural PCA
- Bottlenecking MLPs
- Sammon's nonlinear mapping (NLM)
- Exploratory projection pursuit (EPP)
- Why VQ?
- underlying probability distribution approximation
- feature extraction
- Application to speech signals
- Kohonen's LVQ1
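A minimal sketch of LVQ1, the last topic above: the nearest codebook vector is moved toward the input when its class label matches, and away when it does not. The function name, decay schedule, and learning rate are illustrative assumptions.

```python
import numpy as np

def train_lvq1(data, labels, codebook, codebook_labels,
               epochs=20, lr0=0.1, seed=0):
    """Kohonen's LVQ1: supervised fine-tuning of a labeled codebook.

    For each input, find the nearest code vector; attract it if the
    labels agree, repel it if they disagree.  The learning rate
    decays linearly to zero.
    """
    rng = np.random.default_rng(seed)
    w = codebook.astype(float).copy()
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x, y = data[i], labels[i]
            lr = lr0 * (1 - t / n_steps)
            k = np.argmin(((w - x) ** 2).sum(axis=1))   # nearest code vector
            sign = 1.0 if codebook_labels[k] == y else -1.0
            w[k] += sign * lr * (x - w[k])              # attract or repel
            t += 1
    return w
```

Classification afterwards is nearest-neighbor against the trained codebook, using the codebook labels.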
- Perform Kohonen's SOFM on the Iris or
  Peterson-Barney data (whichever you used for the BP assignment).
  Compare the quantization error with the LBG results.
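For the LBG comparison in the exercise above, a baseline quantizer can be sketched as follows. The splitting offset and stopping tolerance are illustrative assumptions, and the sketch assumes the codebook size is a power of two (the standard splitting schedule).

```python
import numpy as np

def lbg(data, n_codes=4, tol=1e-6, max_iter=100):
    """LBG (generalized Lloyd) codebook design.

    Start from the global centroid, repeatedly split every code
    vector in two, and refine each codebook with nearest-neighbor
    assignment / centroid updates until the mean distortion stops
    improving.  Assumes n_codes is a power of two.
    """
    codebook = data.mean(axis=0, keepdims=True)
    delta = 1e-2 * (data.std(axis=0) + 1e-12)     # small splitting offset
    while len(codebook) < n_codes:
        codebook = np.vstack([codebook + delta, codebook - delta])
        prev = np.inf
        for _ in range(max_iter):
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            distortion = d[np.arange(len(data)), assign].mean()
            if prev - distortion <= tol:
                break
            prev = distortion
            for k in range(len(codebook)):        # centroid update per cell
                members = data[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```

The mean squared distance from each point to its nearest code vector is the quantization error to compare against the SOFM weights.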
- Train a bottlenecking MLP using BP.
  Start with the 4-2-4 encoder problem, using all possible patterns as
  the training set.
  Train an 8-3-8 network with a random 50% subset of the
  patterns. How good is the generalization?
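A starting point for the bottleneck exercise might look like the sketch below: an n-hidden-n MLP with sigmoid units trained by plain batch backprop on squared error, reproducing one-hot patterns at the output. The function name, epoch count, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bottleneck(n=4, hidden=2, epochs=20000, lr=1.0, seed=0):
    """Train an n-hidden-n bottleneck MLP (autoencoder) with batch
    backprop to reproduce the n one-hot patterns at its output; the
    narrow hidden layer forces a compact code."""
    rng = np.random.default_rng(seed)
    X = np.eye(n)                                   # all one-hot patterns
    W1 = rng.normal(0, 0.5, (n, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                    # encode
        y = sigmoid(h @ W2 + b2)                    # decode
        dy = (y - X) * y * (1 - y)                  # backprop through output sigmoid
        dh = (dy @ W2.T) * h * (1 - h)              # ...and through the hidden layer
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)
    return W1, b1, W2, b2
```

For the 8-3-8 generalization question, train on a random half of the eight patterns (`n=8, hidden=3`) and test the reconstruction of the held-out half.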