CS661 Artificial Intelligence
Lecture 10 - Neural Networks
- why is AI so hard for Turing machines?
- where do we think?
- where people thought they thought
- thinking takes place in the brain
- brains are made of neurons
- brains easily accomplish tasks that are hard for computers
- speech recognition
- natural language understanding
- reading handwriting
- level mixing
- intuition and discovery
- computers easily accomplish tasks that are hard for brains
- accurate numeric calculation
- complex logical reasoning
- large memory
- classical (simplistic) neuron (see the code sketch after this list)
- 10 billion in cortex
- destruction of individual neurons shouldn't overly affect processing
- soma is processing unit
- axon is output (electric pulses)
- dendrites are inputs (about 100K inputs per neuron)
- synapses are connections between presynaptic axon and postsynaptic dendrite
- in 1 square mm there are about
- 60,000 neurons
- 3M synapses
- 1.5 km of dendrites
- individual neurons don't perform the computation
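A minimal sketch of the classical neuron described above, assuming a simple threshold model: the dendrites deliver weighted inputs, the soma sums them, and the axon emits a pulse when the sum crosses a threshold. The weights and threshold are illustrative assumptions, not values from the lecture.

```python
# Sketch of the classical (threshold) neuron: weighted dendritic inputs are
# summed in the soma; the axon fires a pulse (1) when a threshold is crossed.
# Weights and threshold are illustrative assumptions, not lecture data.

def classical_neuron(inputs, weights, threshold):
    """Return 1 (axon fires) if the weighted sum of inputs reaches threshold."""
    soma_potential = sum(w * x for w, x in zip(weights, inputs))
    return 1 if soma_potential >= threshold else 0

# Three "dendrites" with synaptic weights; this input pattern makes it fire.
print(classical_neuron([1, 0, 1], weights=[0.6, -0.2, 0.5], threshold=1.0))  # 1
```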
- neural network is (illustrated by the learning sketch after this list)
- large number of
- richly interconnected
- simple processing units
- exhibiting collective behavior
- after learning to perform some task
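A minimal sketch of "learning to perform some task", assuming a single threshold unit trained with the classic perceptron rule on a toy AND task; the task, learning rate, and epoch count are illustrative assumptions. The point is that the behavior comes from adjusted connection weights, not from an explicit program.

```python
# A unit acquires its behavior by adjusting connection weights from examples
# (perceptron learning rule) rather than by explicit programming.
# Task (logical AND), learning rate, and epoch count are illustrative.

def train(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b so the threshold unit reproduces samples."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_samples)
for x, target in and_samples:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
    print(x, "->", out, "target", target)
```

A single unit like this is far from the richly interconnected networks the lecture has in mind; it only illustrates learning taking the place of programming.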
- performance comparison of a brain and a computer (arithmetic checked in the sketch below)
- real-time emulation of a brain using a super-computer
- ( 10^10 neurons * 10^5 synapses / 5 millisec ) / ( 1 operation per 5 nanosec clock ) = 10^9 super-computers to emulate one brain
- real-time emulation of a computer using a brain
- 5 seconds per arithmetic operation / 5 nanosecond per clock
= 10^9 people to emulate one super-computer
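A quick check of the two figures above, using the lecture's numbers (10^10 neurons, 10^5 synapses each, 5 ms per neural update, 5 ns per machine operation, 5 s per human arithmetic operation):

```python
# Back-of-the-envelope check of the two emulation estimates above.
NEURONS = 1e10        # neurons in the cortex
SYNAPSES = 1e5        # synaptic inputs per neuron
NEURON_TIME = 5e-3    # seconds per neural update
CLOCK = 5e-9          # seconds per super-computer operation
HUMAN_OP = 5.0        # seconds per human arithmetic operation

brain_ops_per_sec = NEURONS * SYNAPSES / NEURON_TIME   # ~2e17
computer_ops_per_sec = 1 / CLOCK                       # ~2e8

print("super-computers per brain:", brain_ops_per_sec / computer_ops_per_sec)  # ~1e9
print("people per super-computer:", HUMAN_OP / CLOCK)                          # ~1e9
```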
- computer architecture
- term coined at IBM for the System/360 line (and used by DEC for the VAX line)
- computers with identical architecture can run the same machine code
- computers with identical architecture are unambiguously comparable
- can only directly compare two computers of sufficiently similar architecture
- only indirectly compare computers by cross emulation
- computer and brain architectures are so radically different that no meaningful direct comparison is possible
- single or multiple processor vs. massively parallel
- fast processing but slow interprocessor communications vs. the opposite
- highly precise and accurate vs. faulty low-precision
- serial computation vs. distributed (eigenstates)
- programming vs. learning
- memory organization: location-addressed (LAM) vs. content-addressed / associative (CAM) (see the sketch after this list)
- before trying to solve a problem choose correct architecture
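A sketch of the last contrast in the list above (location-addressed vs. content-addressed memory), with made-up stored patterns and a simple word-overlap matching rule as illustrative assumptions:

```python
# LAM (location-addressed memory): an item is retrieved by its address/index.
# CAM (content-addressed / associative memory): an item is retrieved by a
# partial-content cue that matches the stored pattern itself.
# Stored patterns and the matching rule are illustrative assumptions.

memory = ["red apple", "green pear", "yellow banana"]

# LAM: you must already know *where* the item is stored.
print(memory[2])                         # -> yellow banana

# CAM: recall the best-matching item from a partial cue.
def cam_recall(cue_words, memory):
    """Return the stored pattern sharing the most words with the cue."""
    return max(memory, key=lambda item: len(cue_words & set(item.split())))

print(cam_recall({"yellow"}, memory))    # -> yellow banana
```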