RAG Hallucination Control & Enterprise Search
- Explore the use of RAG in enterprise settings with citation engines (a citation-prompt sketch follows this list)
- Compare hallucination-reduction strategies: constrained decoding, retrieval grounding, and DPO
- Evaluate model trustworthiness for sensitive applications
- Learn from production examples in legal, compliance, and finance contexts
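To make the citation-engine idea concrete, here is a minimal sketch: number the retrieved passages and instruct the model to cite them, so answers can be traced back to sources. The passages and the prompt wording are illustrative assumptions, not any particular product's format; the retrieval step itself is left out.

```python
# Minimal sketch of a citation-grounded RAG prompt. The passages would
# normally come from a retriever; here they are hard-coded examples.

def build_cited_prompt(question: str, passages: list[str]) -> str:
    # Number each retrieved passage so the model can cite it as [1], [2], ...
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below. Cite each claim as [n]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    "Clause 4.2 caps liability at twelve months of fees.",
    "Clause 9.1 requires 30 days' written notice for termination.",
]
print(build_cited_prompt("What is the liability cap?", passages))
```

Forcing an "I don't know" path in the instructions is one of the simpler hallucination controls: it gives the model a sanctioned alternative to inventing an answer.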
LLM Production Chain (Inference, Deployment, CI/CD)
- Map the end-to-end LLM production chain: data, serving, latency, monitoring
- Explore multi-tenant LLM APIs, vector databases, caching, and rate limiting (a toy sketch follows this list)
- Understand the tradeoffs between self-hosting models and calling hosted APIs, and how to tune inference
- Plan a scalable serving stack (e.g., LLM + vector DB + API + orchestrator)
- Learn about LLMOps roles, workflows, and production-level tooling
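Two of the serving-layer concerns above, caching and rate limiting, fit in a short single-process sketch: a hash-keyed response cache in front of a token-bucket limiter. `call_llm` is a hypothetical stub for a real inference backend; a production deployment would use a shared cache (e.g., Redis) and a distributed limiter instead.

```python
import hashlib
import time

def call_llm(prompt: str) -> str:
    # Stub standing in for a real inference backend call.
    return f"echo: {prompt}"

class TokenBucket:
    """Allow `rate_per_s` requests per second with bursts up to `capacity`."""
    def __init__(self, rate_per_s: float, capacity: int):
        self.rate, self.capacity = rate_per_s, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_s=2, capacity=5)
cache: dict[str, str] = {}

def serve(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in cache:            # cache hit: skip the model entirely
        return cache[key]
    if not bucket.allow():      # over budget: shed load
        raise RuntimeError("rate limited")
    answer = call_llm(prompt)
    cache[key] = answer
    return answer

print(serve("hello"))
print(serve("hello"))  # second call is served from cache
```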
Positional Encoding + DeepSeek Internals
- Understand why self-attention requires positional encoding
- Compare encoding types: sinusoidal, RoPE, learned, binary, integer (a sinusoidal sketch follows this list)
- Study skip connections and layer norms: stability and convergence
- Learn from the DeepSeek-V3 architecture: MLA (KV compression), MoE (expert gating), MTP (parallel decoding), FP8 training
- Explore when and why to use advanced transformer optimizations
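For reference, a minimal PyTorch implementation of the classic sinusoidal encoding from "Attention Is All You Need". It assumes an even `d_model`; the sin/cos interleaving convention varies across codebases.

```python
import torch

def sinusoidal_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same angle)."""
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (L, 1)
    inv_freq = 10000 ** (-torch.arange(0, d_model, 2, dtype=torch.float32) / d_model)
    angles = pos * inv_freq                                         # (L, d/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)   # even dimensions
    pe[:, 1::2] = torch.cos(angles)   # odd dimensions
    return pe

pe = sinusoidal_encoding(seq_len=128, d_model=64)
print(pe.shape)  # torch.Size([128, 64])
```

The geometric frequency spread is the key property: nearby positions get similar encodings, while distant positions remain distinguishable, all without any learned parameters.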
Text-to-SQL and Text-to-Music Architectures
- Implement text-to-SQL using structured prompts and fine-tuned models
- Train and evaluate SQL generation accuracy using execution-based metrics (a sketch follows this list)
- Explore text-to-music pipelines: prompt → MIDI → audio generation
- Compare contrastive vs generative learning in multimodal alignment
- Study evaluation tradeoffs for logic-heavy vs creative outputs
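Execution-based metrics compare query results rather than query strings: two queries match if they return the same rows on the same database. A minimal sketch using Python's built-in sqlite3; the toy schema and the sorted row-set comparison are illustrative simplifications (real benchmarks handle ordering and multiple databases more carefully).

```python
import sqlite3

# Tiny in-memory database to execute predictions against.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10.0), (2, 25.0), (3, 40.0);
""")

def execution_match(pred_sql: str, gold_sql: str) -> bool:
    try:
        pred = sorted(conn.execute(pred_sql).fetchall())
    except sqlite3.Error:
        return False  # un-executable predictions score zero
    gold = sorted(conn.execute(gold_sql).fetchall())
    return pred == gold

print(execution_match(
    "SELECT count(*) FROM orders WHERE amount > 20",
    "SELECT COUNT(id) FROM orders WHERE amount > 20.0",
))  # True: different SQL text, same result
```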
Building AI Code Agents: Case Studies from Copilot, Cursor, Windsurf
- Reverse engineer modern code agents like Copilot, Cursor, Windsurf, and Augment Code
- Compare transformer context windows vs RAG + AST-powered systems
- Learn how indexing, retrieval, caching, and incremental compilation create agentic coding experiences
- Explore the architecture of knowledge graphs, graph-based embeddings, and execution-aware completions
- Design your own multi-agent AI IDE stack: chunking, AST parsing, RAG + LLM collaboration (an AST-chunking sketch follows this list)
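One building block named above, AST-aware chunking, is easy to sketch with Python's standard `ast` module: instead of fixed-size windows, split a source file along top-level function and class boundaries so each chunk embedded for retrieval is a syntactically complete unit. This is an illustrative approach, not a reconstruction of how any specific product chunks code.

```python
import ast
import textwrap

source = textwrap.dedent("""
    def add(a, b):
        return a + b

    class Greeter:
        def hello(self, name):
            return f"hi {name}"
""")

tree = ast.parse(source)
lines = source.splitlines()

# One chunk per top-level definition, using the AST's line-span metadata.
chunks = [
    "\n".join(lines[node.lineno - 1 : node.end_lineno])
    for node in tree.body
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
]
for chunk in chunks:
    print("--- chunk ---")
    print(chunk)
```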
Preference-Based Finetuning: DPO, PPO, RLHF & GRPO
- Learn why base LLMs are misaligned and how preference data corrects this
- Understand the differences between DPO, PPO, RLHF, and GRPO (a DPO-loss sketch follows this list)
- Generate math-focused DPO datasets using numeric correctness as the preference signal
- Apply ensemble voting to simulate "majority correctness" and reduce hallucinations
- Evaluate model learning using preference alignment instead of reward models
- Compare training pipelines (DPO vs RLHF vs PPO) on cost, control, and complexity
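The DPO objective itself fits in a few lines. This sketch implements the standard loss, -log σ(β[(log π(y_w) - log π_ref(y_w)) - (log π(y_l) - log π_ref(y_l))]), where y_w and y_l are the chosen and rejected completions; the input tensors are made-up sequence log-probabilities for illustration, not real model outputs.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Margin between how much the policy prefers the chosen completion
    # and how much the frozen reference model already does.
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

pi_c  = torch.tensor([-12.3, -8.1])   # policy log p(chosen)
pi_r  = torch.tensor([-14.0, -9.5])   # policy log p(rejected)
ref_c = torch.tensor([-12.5, -8.4])   # reference log p(chosen)
ref_r = torch.tensor([-13.1, -9.0])   # reference log p(rejected)
print(dpo_loss(pi_c, pi_r, ref_c, ref_r))
```

Note what is absent compared with PPO/RLHF: no reward model and no sampling loop during training, which is the cost/complexity tradeoff the last bullet refers to.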
Math Reasoning & Tool-Augmented Finetuning
- Use SymPy to introduce symbolic reasoning to LLMs for math-focused applications
- Fine-tune with Chain-of-Thought (CoT) data that blends natural language with executable Python
- Learn two-stage finetuning: CoT → CoT + tool integration
- Evaluate reasoning accuracy using symbolic checks, semantic validation, and regression metrics (a symbolic-check sketch follows this list)
- Train quantized models with LoRA and save for deployment with minimal resource overhead
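A symbolic check is stricter than string matching and looser than exact-form matching: two answers count as equal when their difference simplifies to zero. A minimal SymPy sketch:

```python
import sympy

def symbolically_equal(pred: str, gold: str) -> bool:
    try:
        # Equal iff the difference simplifies to the zero expression.
        diff = sympy.simplify(sympy.sympify(pred) - sympy.sympify(gold))
    except (sympy.SympifyError, TypeError):
        return False  # unparseable predictions score zero
    return diff == 0

print(symbolically_equal("2*x + x", "3*x"))          # True
print(symbolically_equal("(x+1)**2", "x**2+2*x+1"))  # True
print(symbolically_equal("x + 1", "x - 1"))          # False
```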
CLIP Fine-Tuning for Insurance
- Fine-tune CLIP to classify car damage using real-world image categories
- Use the Google Custom Search API to generate labeled datasets from scratch
- Apply PEFT techniques like LoRA to vision models and optimize hyperparameters with Optuna
- Evaluate accuracy using cosine similarity over natural-language prompts (e.g., "a car with large damage"); a zero-shot sketch follows this list
- Deploy the model in a real-world insurance-agent workflow using LLaMA for reasoning over predictions
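The prompt-similarity evaluation can be tried zero-shot before any fine-tuning. A minimal sketch using the Hugging Face transformers CLIP API; the checkpoint name, the image path, and the damage labels are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a car with no damage",
    "a car with minor damage",
    "a car with large damage",
]
image = Image.open("car.jpg")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    # logits_per_image holds scaled image-text cosine similarities.
    logits = model(**inputs).logits_per_image
probs = logits.softmax(dim=-1)
print(prompts[probs.argmax().item()], probs.tolist())
```

Fine-tuning (e.g., with LoRA on the vision encoder) then shifts these similarity scores toward the domain's notion of "damage" rather than CLIP's generic web-scale priors.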
Advanced RAG & Retrieval Methods
- Analyze case studies on production-grade RAG systems and tools like Relari and Evidently
- Understand common RAG bottlenecks and their solutions: chunking, reranking, retriever/generator coordination
- Compare embedding models (small vs large) and reranking strategies
- Evaluate real-world RAG outputs using recall, MRR, and qualitative techniques (a metrics sketch follows this list)
- Learn how RAG design changes with the use case (enterprise Q&A, citation engines, document summaries)
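The two quantitative metrics named above are short enough to define in plain Python: recall@k asks whether any relevant document made the top k, and MRR averages the reciprocal rank of the first relevant document across queries. The doc ids below are made-up examples.

```python
def recall_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    # 1.0 if any relevant doc appears in the top k, else 0.0.
    return float(any(doc in relevant for doc in ranked[:k]))

def reciprocal_rank(ranked: list[str], relevant: set[str]) -> float:
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            return 1.0 / i
    return 0.0

queries = [
    (["d3", "d1", "d7"], {"d1"}),  # relevant doc at rank 2
    (["d5", "d2", "d9"], {"d9"}),  # relevant doc at rank 3
]
print(sum(recall_at_k(r, rel, k=2) for r, rel in queries) / len(queries))  # 0.5
print(sum(reciprocal_rank(r, rel) for r, rel in queries) / len(queries))   # ~0.417
```

Reranking shows up directly in these numbers: a reranker that lifts the first relevant document by one rank raises MRR even when recall@k is unchanged.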
Full Transformer Architecture (From Scratch)
- Connect all core transformer components: embeddings, attention, feedforward, normalization (a block-level sketch follows this list)
- Implement skip connections and positional encodings manually
- Use sanity checks and test loss to debug your model assembly
- Observe transformer behavior on structured prompts and simple sequences
- Compare transformer predictions against earlier trigram or FFN models to appreciate the value of context depth
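As a compact reference for how the pieces connect, here is a minimal pre-norm transformer block in PyTorch. `nn.MultiheadAttention` stands in for the attention you would write by hand in this lesson, and the dimensions are arbitrary; the point is the wiring of norms, skips, attention, and the FFN.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # skip connection 1
        x = x + self.ffn(self.ln2(x))                      # skip connection 2
        return x

x = torch.randn(2, 10, 64)  # (batch, seq, d_model)
print(Block()(x).shape)     # torch.Size([2, 10, 64]): shape-preserving, stackable
```

A useful sanity check when assembling your own version: the block must preserve the input shape, otherwise it cannot be stacked.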
Multimodal Finetuning (Mini Project 6)
- Understand what CLIP is and how contrastive learning aligns the image and text modalities
- Fine-tune CLIP for classification (e.g., pizza types) or regression (e.g., solar prediction)
- Add heads on top of CLIP embeddings for specific downstream tasks (a frozen-encoder sketch follows this list)
- Compare zero-shot performance vs fine-tuned model accuracy
- Apply domain-specific LoRA tuning to vision/text encoders
- Explore regression/classification heads, cosine-similarity scoring, and decision layers
- Learn how diffusion models extend CLIP-like embeddings for text-to-image and video generation
- Understand how video generation differs via temporal modeling and spatiotemporal coherence
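The "head on frozen embeddings" pattern is the cheapest variant of this project: the encoder stays frozen and only a small head trains. In this sketch the embeddings are random stand-ins for CLIP image features (512-dim for ViT-B/32), so it runs without downloading a model; in the project you would substitute real `get_image_features` outputs.

```python
import torch
import torch.nn as nn

emb = torch.randn(32, 512)   # pretend batch of frozen CLIP image embeddings
target = torch.randn(32, 1)  # e.g., solar output for the regression variant

# Only the head has trainable parameters; the encoder never appears here.
head = nn.Sequential(nn.LayerNorm(512), nn.Linear(512, 1))
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

for step in range(100):
    loss = nn.functional.mse_loss(head(emb), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```

For the classification variant, swap the head's output layer for `nn.Linear(512, n_classes)` and the loss for cross-entropy; the rest of the loop is unchanged.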
Feedforward Networks & Loss-Centric Training
- Understand the role of linear + nonlinear layers in neural networks
- Explore how MLPs refine outputs after self-attention in transformers
- Learn the structure of FFNs (e.g., a two-layer projection + an activation like ReLU/SwiGLU); a sketch follows this list
- Implement your own FFN in PyTorch with real training/evaluation
- Compare activation functions: ReLU, GELU, SwiGLU
- Understand how dropout prevents co-adaptation and improves generalization
- Learn the role of LayerNorm, positional encoding, and skip connections
- Build intuition for how transformers encode depth, context, and structure into layers
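The two-layer FFN described above, in PyTorch: expand, apply a nonlinearity, project back, with dropout for regularization. GELU is used here; swapping in ReLU or a SwiGLU variant changes only the middle of the block. The 4x expansion factor follows the common transformer convention.

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, d_model: int = 64, d_hidden: int = 256, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),  # up-projection
            nn.GELU(),                     # nonlinearity
            nn.Linear(d_hidden, d_model),  # down-projection
            nn.Dropout(p),                 # regularization against co-adaptation
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

x = torch.randn(2, 10, 64)
print(FeedForward()(x).shape)  # torch.Size([2, 10, 64])
```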