Day88 Deep Learning Lecture Review - Lectures 10-12
Deep Learning & Numerical Precision (Floating Point), Hardware Considerations, and Distributed Model Training
LLMs - Speeding Up LLMs (Grouped Query Attention, KV Caches, MoE, and DPO)
LLMs - Generating Text, Positional Encoding, and Fine-Tuning LLMs (LoRA)
LLMs - Perplexity, Tokenizers, Data Cleaning, and Embedding Layer
Basic Machine Learning & Deep Learning, Word Embeddings, CNNs, RNNs, LSTMs, and Transformers
Large Language Models - BERT, GPT, and GPT-2, 3 & 4
Transformers and Foundation Models: GELU, Layer Norm, Key Concepts & Workflow
Brief Explanation of Basic Algebra and Machine Learning
Primary Goals, Common Tasks, and Deep Learning NLP
Transformer Architecture, How the Models Differ, and Q, K, V in Self-Attention