From Autoencoders to VQ-VAEs: A Mathematical Timeline
January 2026
30 min read
Generative Models, Math Concepts, Autoencoders, VQ-VAE
What You'll Learn
- Evolution of autoencoder architectures over time
- Mathematical foundations of variational inference
- The shift from continuous to discrete latent spaces
- Vector quantization mechanics and codebook learning
- Addressing posterior collapse and training stability
Key Concepts Covered
- Autoencoder: a neural network that learns efficient data codings in an unsupervised manner
- Variational Inference: a method to approximate complex distributions in latent variable models
- Vector Quantization: a technique to map continuous vectors to a finite set of codebook vectors
- Codebook Collapse: a training failure where only a subset of embedding vectors is used
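The vector quantization step listed above can be sketched in a few lines of NumPy: each continuous latent vector is replaced by its nearest codebook entry under squared Euclidean distance. This is a minimal illustration, not the full VQ-VAE training procedure; the function name and array shapes here are illustrative.

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous latent to its nearest codebook vector.

    latents:  (N, D) array of continuous encoder outputs
    codebook: (K, D) array of learned embedding vectors
    Returns the chosen indices (N,) and the quantized vectors (N, D).
    """
    # Squared Euclidean distance between every latent and every code.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)  # index of the nearest code per latent
    return indices, codebook[indices]

# Illustrative usage with random data.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K = 8 codes of dimension D = 4
latents = rng.normal(size=(3, 4))    # a batch of 3 continuous latents
idx, quantized = quantize(latents, codebook)
```

Counting how many distinct values appear in `idx` over a training batch is also a quick way to spot the codebook-collapse failure mode described above: a healthy model uses many codes, a collapsed one only a few.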
Slide Overview
- Introduction to Autoencoders (Slides 1-8)
- The Probabilistic Turn: VAEs (Slides 9-18)
- Discretization & VQ-VAE (Slides 19-28)
- Mathematical Constraints & Loss Functions (Slides 29-38)
- Future Directions (Slides 39-44)
