lesson
Orientation — Technical Kickoff (AI Bootcamp)
- Jupyter & Python Setup
  - Understanding why Python is used in AI (simplicity, libraries, end-to-end stack)
  - Exploring Jupyter Notebooks: shortcuts, code + text blocks, and cloud tools like Google Colab
- Hands-On with Arrays, Vectors, and Tensors
  - Creating and manipulating 2D and 3D NumPy arrays (reshaping, indexing, slicing)
  - Performing matrix operations: element-wise math and dot products
  - Visualizing vectors and tensors in 2D and 3D space using Matplotlib
- Mathematical Foundations in Practice
  - Exponentiation and logarithms: visual intuition and matrix operations
  - Normalization techniques and why they matter in ML workflows
  - Activation functions: sigmoid and softmax, coded from scratch
- Statistics and Real Data Practice
  - Exploring core stats: mean, standard deviation, normal distributions
  - Working with real datasets (Titanic) using pandas: filtering, grouping, feature engineering, visualization
  - Preprocessing tabular data for ML: encoding, scaling, train/test split
- Bonus Topics
  - Intro to probability, distributions, and classification vs. regression
  - Tensor intuition and compute providers (GPU, Colab, cloud vs. local)
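The "coded from scratch" activation functions and normalization techniques above can be sketched in a few lines of NumPy. A minimal, illustrative version (function names are our own, not from the course materials):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    shifted = z - np.max(z)
    exp_z = np.exp(shifted)
    return exp_z / exp_z.sum()

def min_max_normalize(x):
    """Scale values into [0, 1] — a common preprocessing step before training."""
    return (x - x.min()) / (x.max() - x.min())

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # three probabilities that sum to 1
```

Subtracting the maximum before `np.exp` does not change the result mathematically, but it prevents overflow for large logits — a detail worth building muscle memory for early.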
lesson
Orientation — Course Introduction (AI Bootcamp)
- Meet the instructors and understand the support ecosystem (Circle, Notion, async help)
- Learn the four learning pillars: concept clarity, muscle memory, project building, and peer community
- Understand the course philosophy: minimize math, maximize intuition, focus on real-world relevance
- Set up accountability systems, learning tools, and productivity habits for long-term success
lesson
Staying Current with AI — Research, News, and Tools (AI Bootcamp)
- Track foundational trends: RAG, agents, fine-tuning, RLHF, and infrastructure
- Understand the tradeoffs of long context windows vs. retrieval pipelines
- Compare agent frameworks: CrewAI vs. LangGraph vs. Relevance AI
- Learn from real 2025 GenAI use cases: productivity and emotion-first design
- Stay current via curated newsletters, YouTube breakdowns, and community tools
lesson
Career Prep — Roles, Interviews, and AI Career Paths (AI Bootcamp)
- Break down roles: AI Engineer, Model Engineer, Researcher, PM, and Architect
- Prepare for FAANG/LLM interviews with DSA practice, behavioral prep, and a project portfolio
- Use ChatGPT and other tools for mock interviews and story crafting
- Learn how to build a standout AI resume, repo, and demo strategy
- Explore internal AI projects, indie-hacker startup paths, and transition guides
lesson
RAG Hallucination Control & Enterprise Search (AI Bootcamp)
- Explore the use of RAG in enterprise settings with citation engines
- Compare hallucination-reduction strategies: constrained decoding, retrieval, and DPO
- Evaluate model trustworthiness for sensitive applications
- Learn from production examples in legal, compliance, and finance contexts
lesson
LLM Production Chain — Inference, Deployment, and CI/CD (AI Bootcamp)
- Map the end-to-end LLM production chain: data, serving, latency, and monitoring
- Explore multi-tenant LLM APIs, vector databases, caching, and rate limiting
- Understand the tradeoffs between self-hosting models and using hosted APIs, plus inference tuning
- Plan a scalable serving stack (e.g., LLM + vector DB + API + orchestrator)
- Learn about LLMOps roles, workflows, and production-level tooling
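Two of the serving-layer ideas above — response caching and rate limiting — fit in a short sketch. This is a toy in-process version under our own assumptions (class names `PromptCache` and `RateLimiter` are hypothetical); production stacks would use Redis or a gateway, but the logic is the same:

```python
import time
from collections import OrderedDict

class PromptCache:
    """Tiny LRU cache mapping prompt -> response, so repeated prompts skip inference."""
    def __init__(self, max_size=128):
        self.max_size = max_size
        self._store = OrderedDict()

    def get(self, prompt):
        if prompt in self._store:
            self._store.move_to_end(prompt)    # mark as most recently used
            return self._store[prompt]
        return None

    def put(self, prompt, response):
        self._store[prompt] = response
        self._store.move_to_end(prompt)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)    # evict least recently used

class RateLimiter:
    """Token-bucket limiter: refill `rate` tokens/sec up to a burst `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a multi-tenant API, each tenant typically gets its own bucket, and the cache key would include model name and sampling parameters, not just the prompt text.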
lesson
Positional Encoding + DeepSeek Internals (AI Bootcamp)
- Understand why self-attention requires positional encoding
- Compare encoding types: sinusoidal, RoPE, learned, binary, and integer
- Study skip connections and layer norms: stability and convergence
- Learn from the DeepSeek-V3 architecture: MLA (KV compression), MoE (expert gating), MTP (parallel decoding), and FP8 training
- Explore when and why to use advanced transformer optimizations
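Of the encoding types listed above, the sinusoidal scheme from the original Transformer paper is the easiest to code up and inspect. A small NumPy sketch (function name is ours):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Classic sinusoidal encoding:
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    Each position gets a unique pattern of frequencies the attention layers can use."""
    positions = np.arange(seq_len)[:, None]                  # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                 # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)   # broadcast to (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16)
```

Plotting `pe` as a heatmap makes the varying frequencies visible; RoPE replaces this additive scheme with rotations applied inside attention, which is one reason the comparison in this lesson matters.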
lesson
Text-to-SQL and Text-to-Music Architectures (AI Bootcamp)
- Implement text-to-SQL using structured prompts and fine-tuned models
- Train and evaluate SQL generation accuracy using execution-based metrics
- Explore text-to-music pipelines: prompt → MIDI → audio generation
- Compare contrastive vs. generative learning in multimodal alignment
- Study evaluation tradeoffs for logic-heavy vs. creative outputs
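"Execution-based metrics" in the text-to-SQL bullet means running both the predicted and the reference query and comparing result sets, rather than comparing SQL strings. A minimal sketch using an in-memory SQLite database (the helper name `execution_match` is ours):

```python
import sqlite3

def execution_match(predicted_sql, gold_sql, setup_statements):
    """Execute predicted and gold SQL against the same throwaway database
    and report whether they return the same rows."""
    conn = sqlite3.connect(":memory:")
    for stmt in setup_statements:
        conn.execute(stmt)
    try:
        pred_rows = sorted(conn.execute(predicted_sql).fetchall())
        gold_rows = sorted(conn.execute(gold_sql).fetchall())
        return pred_rows == gold_rows
    except sqlite3.Error:
        return False   # invalid SQL counts as a miss
    finally:
        conn.close()

setup = [
    "CREATE TABLE users (id INTEGER, country TEXT)",
    "INSERT INTO users VALUES (1, 'US'), (2, 'DE'), (3, 'US')",
]
print(execution_match(
    "SELECT COUNT(*) FROM users WHERE country = 'US'",
    "SELECT COUNT(id) FROM users WHERE country = 'US'",
    setup,
))  # True: different surface form, same result
```

This is exactly why execution accuracy is preferred for logic-heavy outputs: syntactically different queries can be semantically identical, which string matching would score as wrong.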
lesson
Building AI Code Agents — Case Studies from Copilot, Cursor, and Windsurf (AI Bootcamp)
- Reverse-engineer modern code agents such as Copilot, Cursor, Windsurf, and Augment Code
- Compare large transformer context windows with RAG + AST-powered systems
- Learn how indexing, retrieval, caching, and incremental compilation create agentic coding experiences
- Explore the architecture of knowledge graphs, graph-based embeddings, and execution-aware completions
- Design your own multi-agent AI IDE stack: chunking, AST parsing, and RAG + LLM collaboration
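The "chunking + AST parsing" step above can be illustrated with Python's standard `ast` module: instead of splitting source files on line counts, split on syntactic units so each retrieved chunk is a complete function or class. A minimal sketch (the `chunk_by_ast` helper is our own illustration, not any product's actual indexer):

```python
import ast

def chunk_by_ast(source):
    """Split a Python file into one chunk per top-level function or class,
    so retrieval returns semantically complete units instead of raw line ranges."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append({
                "name": node.name,
                "kind": type(node).__name__,
                "code": ast.get_source_segment(source, node),
            })
    return chunks

code = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"
'''
for c in chunk_by_ast(code):
    print(c["kind"], c["name"])
```

A real agent would recurse into classes, attach docstrings and import context to each chunk, then embed the chunks for RAG — but the core idea is this syntax-aware split.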
lesson
Preference-Based Fine-Tuning — DPO, PPO, RLHF & GRPO (AI Bootcamp)
- Learn why base LLMs are misaligned and how preference data corrects this
- Understand the differences between DPO, PPO, RLHF, and GRPO
- Generate math-focused DPO datasets using numeric correctness as the preference signal
- Apply ensemble voting to approximate "majority correctness" and reduce hallucinations
- Evaluate model learning using preference alignment instead of reward models
- Compare training pipelines — DPO vs. RLHF vs. PPO — on cost, control, and complexity
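The "numeric correctness as preference signal" and "ensemble voting" bullets combine naturally: sample several answers, take the majority final answer as a pseudo-gold label, then pair correct completions (chosen) against incorrect ones (rejected). A toy sketch under those assumptions (helper names and the sample records are illustrative):

```python
from collections import Counter

def majority_answer(samples):
    """Ensemble voting: the most common final answer acts as a pseudo-gold label."""
    return Counter(s["answer"] for s in samples).most_common(1)[0][0]

def build_dpo_pairs(question, samples, gold_answer):
    """Pair every numerically correct completion (chosen) with every
    incorrect one (rejected) to form DPO preference records."""
    correct = [s for s in samples if s["answer"] == gold_answer]
    wrong = [s for s in samples if s["answer"] != gold_answer]
    return [
        {"prompt": question, "chosen": c["text"], "rejected": w["text"]}
        for c in correct for w in wrong
    ]

samples = [
    {"text": "Adding gives 4. Answer: 4", "answer": 4},
    {"text": "2 + 2 equals 4",            "answer": 4},
    {"text": "2 + 2 must be 5",           "answer": 5},
]
gold = majority_answer(samples)                    # 4, by majority vote
pairs = build_dpo_pairs("What is 2 + 2?", samples, gold)
print(len(pairs))  # 2 correct x 1 incorrect = 2 pairs
```

This is what makes the approach cheap relative to RLHF: no reward model is trained — correctness itself supplies the preference ordering.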
lesson
Math Reasoning & Tool-Augmented Fine-Tuning (AI Bootcamp)
- Use SymPy to introduce symbolic reasoning to LLMs for math-focused applications
- Fine-tune on Chain-of-Thought (CoT) data that blends natural language with executable Python
- Learn two-stage fine-tuning: CoT first, then CoT + tool integration
- Evaluate reasoning accuracy using symbolic checks, semantic validation, and regression metrics
- Train quantized models with LoRA and save them for deployment with minimal resource overhead
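A "symbolic check" in the evaluation bullet above typically means testing whether a model's answer is mathematically equivalent to the reference, not merely string-equal. One way to do that with SymPy (the helper name is ours; assumes SymPy is installed):

```python
import sympy as sp

def answers_equivalent(model_answer, reference_answer):
    """Return True if the two expressions simplify to the same thing,
    so '2*(x + 1)' and '2*x + 2' both count as correct."""
    try:
        diff = sp.simplify(sp.sympify(model_answer) - sp.sympify(reference_answer))
        return diff == 0
    except (sp.SympifyError, TypeError):
        return False   # unparseable answers count as incorrect

print(answers_equivalent("2*(x + 1)", "2*x + 2"))          # True
print(answers_equivalent("x**2 - 1", "(x - 1)*(x + 1)"))   # True
print(answers_equivalent("x + 1", "x + 2"))                # False
```

String matching would mark the first two cases wrong; symbolic equivalence is what makes math evaluation of free-form model output tractable.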
lesson
CLIP Fine-Tuning for Insurance (AI Bootcamp)
- Fine-tune CLIP to classify car damage using real-world image categories
- Use the Google Custom Search API to build labeled datasets from scratch
- Apply PEFT techniques such as LoRA to vision models and optimize hyperparameters with Optuna
- Evaluate accuracy using cosine similarity over natural-language prompts (e.g., "a car with large damage")
- Deploy the model in a real-world insurance-agent workflow, using LLaMA to reason over predictions
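The cosine-similarity evaluation above follows CLIP's zero-shot pattern: embed the image and each candidate text prompt, then pick the prompt with the highest similarity. The sketch below shows only that selection step, with random vectors standing in for real CLIP embeddings (a real pipeline would use CLIP's image and text encoders; all names here are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(image_embedding, prompt_embeddings, labels):
    """CLIP-style zero-shot classification: return the label whose text
    embedding is most similar to the image embedding."""
    sims = [cosine_similarity(image_embedding, p) for p in prompt_embeddings]
    return labels[int(np.argmax(sims))], sims

labels = ["a car with no damage", "a car with minor damage", "a car with large damage"]
rng = np.random.default_rng(0)
prompt_embs = [rng.normal(size=64) for _ in labels]

# Simulate an image whose embedding sits near the 'large damage' prompt:
image_emb = prompt_embs[2] + rng.normal(scale=0.1, size=64)
label, sims = classify(image_emb, prompt_embs, labels)
print(label)
```

Because the classes are just English phrases, adding a new damage category means writing a new prompt rather than retraining a classification head — the property that makes this setup attractive for the insurance workflow described above.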