CS667
Backprop Training of Multilayer Perceptrons
- Backwards Propagation of Error (Backprop)
- Derivation of simple backprop
- Local minima of the error
- Momentum
- Simple Examples (a code sketch follows this list)
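
As a simple example of the delta-rule derivation and of the momentum term, here is a minimal sketch in Python with NumPy: one online backprop step for a one-hidden-layer sigmoid MLP. The names epsilon (learning rate) and alpha (momentum factor) are assumptions, and bias weights are omitted for brevity.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def backprop_step(x, t, W1, W2, V1, V2, epsilon=0.1, alpha=0.9):
        # Forward pass through one hidden layer of sigmoid units.
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # Delta rules: output-layer deltas, then the errors
        # backpropagated through W2 to the hidden layer.
        delta_out = (t - y) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        # Momentum: each update blends the fresh gradient step with the
        # previous update held in V1, V2.
        V2 = epsilon * np.outer(delta_out, h) + alpha * V2
        V1 = epsilon * np.outer(delta_hid, x) + alpha * V1
        return W1 + V1, W2 + V2, V1, V2

Because the momentum term reuses the previous update, the weights keep moving across flat regions of the error surface and can roll past shallow local minima.
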
- Setting (and changing) the Parameters
- Batch vs. online mode (contrasted in the sketch after this list)
- Why batch is always better
- Why online is always better
- Setting epsilon (and bepsilon)
- Setting the momentum
- When to stop
- How to monitor progress
- General Rules
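
To make the batch vs. online contrast concrete, here is a hedged sketch on a toy least-squares problem; the linear model, the learning rate of 0.01, and the epoch counts are illustrative assumptions, and with an MLP the per-example gradient would come from backprop instead.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # toy inputs
    w_true = np.array([1.0, -2.0, 0.5])
    T = X @ w_true                         # toy linear targets

    def grad(w, x, t):
        # Gradient of the squared error 0.5 * (w.x - t)^2 for one example.
        return (w @ x - t) * x

    epsilon = 0.01

    # Batch mode: sum the gradients over ALL examples, one update per epoch.
    w_batch = np.zeros(3)
    for epoch in range(100):
        g = sum(grad(w_batch, x, t) for x, t in zip(X, T))
        w_batch -= epsilon * g / len(X)

    # Online mode: update after EVERY example, in a fresh random order
    # each epoch.
    w_online = np.zeros(3)
    for epoch in range(100):
        for i in rng.permutation(len(X)):
            w_online -= epsilon * grad(w_online, X[i], T[i])

    print("batch: ", w_batch)
    print("online:", w_online)

Batch mode follows the true gradient of the total error but takes only one step per epoch; online mode takes many noisier steps, and the random presentation order keeps that noise from biasing the descent.
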
- Training an MLP is NP-complete
- How to use the training examples
- Training set and test set
- Calibration set (see the split sketch after this list)
- How many examples are needed?
- Online training
- Optimal presentation order
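
A minimal sketch of one way to divide the examples (the 60/20/20 fractions are an assumption): the calibration set is held out of training and used only to monitor generalization and decide when to stop, while the test set is touched once, for the final report.

    import numpy as np

    def split_examples(X, T, rng, frac_train=0.6, frac_cal=0.2):
        # Shuffle once, then carve off training, calibration, and test sets.
        idx = rng.permutation(len(X))
        n_tr = int(frac_train * len(X))
        n_cal = int(frac_cal * len(X))
        tr = idx[:n_tr]
        cal = idx[n_tr:n_tr + n_cal]
        te = idx[n_tr + n_cal:]
        return (X[tr], T[tr]), (X[cal], T[cal]), (X[te], T[te])

    rng = np.random.default_rng(1)
    X, T = rng.normal(size=(50, 4)), rng.normal(size=50)
    train, calib, test = split_examples(X, T, rng)
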
- Special Cases
- Pattern recognition
- Why BP doesn't work with the selector representation
- How to make it work (illustrated after this list)
- Function approximation
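
The fix is not spelled out in this outline; a common remedy, assumed here, is to soften the selector's hard 0/1 targets (e.g. to 0.1/0.9). A sigmoid output saturated at the wrong end has derivative y(1 - y) near zero, so its backprop delta, and hence its weight update, all but vanishes.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # A unit saturated toward 1 barely learns, even when its target is 0:
    for a in (2.0, 5.0, 10.0):
        y = sigmoid(a)
        print(f"y = {y:.5f}   delta toward t=0: {(0 - y) * y * (1 - y):+.6f}")

    # Softened targets (0.9 for the selected class, 0.1 otherwise) keep the
    # outputs out of the flat tails of the sigmoid:
    selector = np.array([0, 0, 1, 0])
    soft_targets = np.where(selector == 1, 0.9, 0.1)
    print(soft_targets)
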
Assignment
- Backprop Assignment