Day102 Deep Learning Lecture Review - Lecture 18 (1)
Data-Centric AI: Crowdsourcing, Methods to Estimate Annotator Quality, Neural Scaling Laws, Pareto Curves and Power Law
Variation of Conformal Prediction: Size of Calibration Set, Evaluation, and Group-Based & Adaptive Conformal Prediction
Understanding Conformal Prediction: Concepts, Applications, Marginal Coverage, and Recipes In Detail
Uncertainty in Deep Learning, Distribution Shifts, Model Calibration, and Out-of-Distribution (OOD) Detection
Language Models: Transfer Learning, Basic Concepts & Terminology, Components of NLP Models, and the Attention Mechanism
Bias Mitigation Strategies: Loss Reweighting, Sampling & Synthetic Samples, and Architectural Changes (OccamNets, Adversarial Training & DANN)
Model Comparison and Bias Mitigation: McNemar’s Test, Dataset Bias, and Bias Detection
AI Ethics and AI Safety: Key Issues, AGI (Artificial General Intelligence), and Challenges of Current AI Models
Llama 3: Framework, Workflow (RMSNorm, Grouped Query Attention, RoPE, SwiGLU Activation), Pre-training & Post-training
Comparing Pre-trained Model Embeddings (ResNet+SBERT vs. CLIP) and Prompt Engineering (Short & Direct Prompts, Few-Shot Learning, and Expert Prompting)