Lesson: Advanced Multimodal Applications (AI Bootcamp)
Apply domain-specific LoRA tuning, explore regression/classification heads, and understand diffusion models for text-to-image/video generation.
Lesson: Multimodal Finetuning with CLIP (AI Bootcamp)
Fine-tune CLIP for classification/regression (e.g., pizza types, solar prediction), add heads on embeddings, and compare zero-shot vs fine-tuned accuracy.
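A minimal sketch of the "add heads on embeddings" idea: a linear classification head applied to frozen image embeddings. The random `embeddings` array here is a hypothetical stand-in for real CLIP outputs (actual CLIP embeddings would come from a pretrained model such as Hugging Face's `openai/clip-vit-base-patch32`); the head itself is just a weight matrix, a bias, and a softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen CLIP image embeddings (batch of 8, dim 512).
embeddings = rng.normal(size=(8, 512))

# Linear classification head over 4 hypothetical pizza types: logits = E @ W + b.
num_classes = 4
W = rng.normal(scale=0.02, size=(512, num_classes))
b = np.zeros(num_classes)

logits = embeddings @ W + b

# Softmax turns logits into class probabilities (stabilized by subtracting the row max).
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(probs.shape)  # (8, 4)
```

During fine-tuning, only `W` and `b` would be trained (or the backbone unfrozen for full fine-tuning); for regression tasks like solar prediction, the softmax is dropped and `num_classes` becomes 1.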
Lesson: FFN Components & Training (AI Bootcamp)
Explore dropout, LayerNorm, positional encoding, and skip connections, and build intuition for transformer depth and context encoding.
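Of the components listed above, positional encoding is the easiest to show concretely. A sketch of the classic fixed sinusoidal encoding (the function name and dimensions here are illustrative, not from the lesson): even dimensions get sine, odd dimensions get cosine, at geometrically spaced frequencies, so each position receives a unique, smoothly varying vector.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Fixed positional encoding: sin on even dims, cos on odd dims,
    with frequencies 1 / 10000^(2i/d_model)."""
    positions = np.arange(max_len)[:, None]               # (max_len, 1)
    dims = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = positions / (10000 ** (2 * dims / d_model))  # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(16, 8)
print(pe[0])  # position 0: all sin terms are 0, all cos terms are 1
```

In a transformer, this matrix is simply added to the token embeddings before the first attention layer, giving the model access to word order that attention alone would ignore.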
Lesson: Feedforward Networks in Transformers (AI Bootcamp)
Understand linear/nonlinear layers, implement FFNs in PyTorch, and compare activation functions (ReLU, GELU, SwiGLU).
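A minimal numpy sketch of the position-wise FFN described above (the lesson uses PyTorch; this dependency-free version shows the same structure): expand to a wider hidden dimension, apply a nonlinearity, and project back. ReLU and the tanh approximation of GELU are shown; SwiGLU, mentioned in the lesson, replaces the single activation with a gated product of two projections and is omitted here for brevity.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn(x, W1, b1, W2, b2, act=gelu):
    # Position-wise FFN: expand (d_model -> d_ff), apply nonlinearity, project back.
    return act(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
d_model, d_ff = 8, 32  # illustrative sizes; real models use e.g. 768 -> 3072
x = rng.normal(size=(4, d_model))
W1 = rng.normal(scale=0.1, size=(d_model, d_ff)); b1 = np.zeros(d_ff)
W2 = rng.normal(scale=0.1, size=(d_ff, d_model)); b2 = np.zeros(d_model)

out = ffn(x, W1, b1, W2, b2)
print(out.shape)  # (4, 8): the FFN preserves the model dimension
```

Note that unlike ReLU, GELU is smooth and slightly negative for small negative inputs, which is one reason it is preferred in most modern transformers.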
Lesson: Finetuning Case Studies (AI Bootcamp)
Apply fine-tuning for HTML generation, resume scoring, and financial tasks, and compare base vs instruction-tuned model performance.
Lesson: Instructional Finetuning & LoRA (AI Bootcamp)
Differentiate fine-tuning vs instruction fine-tuning, apply LoRA/BitFit/prompt tuning, and use Hugging Face PEFT for JSON, tone, or domain tasks.
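The core LoRA idea can be sketched in a few lines (in practice the lesson's Hugging Face PEFT library handles this; the shapes and scaling here follow the original LoRA formulation): the frozen weight `W` is augmented by a low-rank product `B @ A`, and because `B` is zero-initialized, training starts from exactly the pretrained model's behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 4, 8  # hidden size, LoRA rank, scaling factor (illustrative values)

W = rng.normal(scale=0.02, size=(d, d))  # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d))  # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha, r):
    # Effective weight is W + (alpha / r) * B @ A, but the low-rank path
    # is computed separately so W never needs to be modified or stored twice.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, d))
y = lora_forward(x, W, A, B, alpha, r)

# With B zero-initialized, the adapted model matches the frozen base exactly.
print(np.allclose(y, x @ W.T))  # True
```

Only `A` and `B` (here 2 * r * d = 512 parameters) are trained, versus d * d = 4096 for the full matrix; at LLM scale this gap is what makes LoRA cheap enough to run on a single GPU.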
Lesson: Multi-Head Attention & Mixture of Experts (MoE) (AI Bootcamp)
Build single-head and multi-head transformer models, implement Mixture-of-Experts (MoE) attention, and evaluate fluency/generalization.
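A minimal sketch of MoE routing, under the simplifying assumptions that each "expert" is a single linear layer and routing is top-1 (real MoE layers use full FFN experts, top-k routing, and load-balancing losses): a gating layer scores the experts per token, and each token is processed only by its highest-scoring expert.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 6  # illustrative sizes

# Each expert is a simple linear layer; a gating layer scores experts per token.
experts = [rng.normal(scale=0.1, size=(d, d)) for _ in range(n_experts)]
W_gate = rng.normal(scale=0.1, size=(d, n_experts))

x = rng.normal(size=(n_tokens, d))

# Top-1 routing: each token is sent to its highest-scoring expert.
gate_logits = x @ W_gate             # (n_tokens, n_experts)
chosen = gate_logits.argmax(axis=1)  # expert index per token

out = np.empty_like(x)
for i, e in enumerate(chosen):
    out[i] = x[i] @ experts[e]

print(out.shape)  # (6, 8)
```

The payoff is that total parameter count grows with the number of experts while per-token compute stays roughly constant, since each token only touches one expert.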
Lesson: Implementing Self-Attention (AI Bootcamp)
Implement self-attention in PyTorch, visualize attention heatmaps with real LLMs, and compare loss curves vs trigram models.
Lesson: Mechanics of Self-Attention (AI Bootcamp)
Learn self-attention mechanics (Query, Key, Value, dot products, weighted sums), compute attention scores, and visualize softmax’s role.
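The mechanics listed above can be sketched end to end in a few lines of numpy (the lesson works in PyTorch; this dependency-free version follows the same scaled dot-product formulation): project the sequence into queries, keys, and values; take dot products of queries with keys; scale; softmax; and output a weighted sum of values.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Project the same sequence into queries, keys, and values.
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = Q.shape[-1]
    # Dot-product scores, scaled by sqrt(d_k) to keep softmax gradients healthy.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax makes each row a probability distribution over positions.
    weights = softmax(scores, axis=-1)
    # Output: each position is a weighted sum of all value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 5, 8
x = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

out, weights = self_attention(x, Wq, Wk, Wv)
print(weights.sum(axis=1))  # every row of attention weights sums to 1
```

The `weights` matrix is exactly what attention heatmaps visualize: row i shows how much position i attends to every other position.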
Lesson: Motivation for Attention Mechanisms (AI Bootcamp)
Understand limitations of fixed-window n-gram models and explore how word meaning changes with context (static vs contextual embeddings).
Lesson: Neural N-Gram Models (AI Bootcamp)
One-hot encode inputs, build PyTorch bigram/trigram neural networks, train with cross-entropy loss, and monitor training dynamics.
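The full pipeline above (one-hot encoding, a bigram network, cross-entropy training) fits in one short numpy sketch; the lesson uses PyTorch, and the toy corpus and vocabulary here are made up for illustration. The "network" is a single linear layer mapping a one-hot current character to logits over the next character.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus over a 4-character vocabulary; bigram pairs are (current char -> next char).
vocab = ['a', 'b', 'c', '.']
stoi = {ch: i for i, ch in enumerate(vocab)}
text = "ab.ab.ac."
xs = np.array([stoi[c] for c in text[:-1]])  # input characters
ys = np.array([stoi[c] for c in text[1:]])   # target characters

V = len(vocab)
X = np.eye(V)[xs]                        # one-hot encode inputs, shape (N, V)
W = rng.normal(scale=0.1, size=(V, V))   # the entire "network": one linear layer

for step in range(200):
    logits = X @ W
    # Softmax + cross-entropy loss against the target characters.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(ys)), ys]).mean()
    # Gradient of cross-entropy w.r.t. logits is (probs - one_hot(targets)) / N.
    grad_logits = probs.copy()
    grad_logits[np.arange(len(ys)), ys] -= 1.0
    grad_logits /= len(ys)
    W -= 5.0 * X.T @ grad_logits  # plain gradient descent

print(round(loss, 3))  # falls well below the uniform baseline ln(4) ≈ 1.386
```

Watching this loss curve is the "training dynamics" part: it drops quickly toward the entropy of the corpus's bigram statistics and then plateaus, exactly the behavior later compared against attention-based models.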