Tokens, Embeddings & Modalities — Foundations of Understanding Text, Image, and Audio
AI Bootcamp
- Understand the journey from raw text → tokens → token IDs → embeddings
- Compare word-based, BPE, and advanced tokenizers (LLaMA, GPT-2, T5)
- Analyze how good and bad tokenization affects loss, inference time, and semantic meaning
- Learn how embedding vectors represent meaning and change with context
- Explore and manipulate Word2Vec-style word embeddings through vector math and dot-product similarity (see the sketch after this list)
- Apply tokenization and embedding logic to multimodal models (CLIP, ViLT, ViT-GPT2)
- Conduct retrieval and classification tasks using image and audio embeddings (CLIP, Wav2Vec2)
- Discuss emerging architectures like Byte Latent Transformers and their implications
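A minimal sketch of the text → tokens → token IDs path and of dot-product similarity over word vectors. It assumes the Hugging Face `transformers` package and the public `gpt2` tokenizer; the three word vectors are toy values invented for illustration, not real Word2Vec weights.

```python
# pip install transformers numpy
import numpy as np
from transformers import AutoTokenizer

# Raw text -> subword tokens -> token IDs (GPT-2's BPE tokenizer as an example).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = "Tokenization drives embedding quality."
tokens = tokenizer.tokenize(text)    # subword strings, e.g. 'Token', 'ization', ...
token_ids = tokenizer.encode(text)   # integer IDs the model actually consumes
print(tokens)
print(token_ids)

# Word2Vec-style similarity with toy vectors (illustrative values only).
vectors = {
    "king":  np.array([0.8, 0.6, 0.1]),
    "queen": np.array([0.7, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Dot product of the two vectors after unit normalization."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: related words
print(cosine_similarity(vectors["king"], vectors["apple"]))  # lower: unrelated words
```

The same dot-product logic carries over to CLIP-style retrieval, where image and text embeddings are compared in a shared vector space.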
Prompt Engineering — From Structure to Evaluation (Mini Project 1)
AI Bootcamp
- Learn foundational prompt styles: vague vs. specific, structured formatting, XML tagging
- Practice prompt design for controlled output: enforcing strict JSON formats with Pydantic (see the sketch after this list)
- Discover failure modes and label incorrect LLM behavior (e.g., hallucinations, format issues)
- Build early evaluators to measure LLM output quality and rule-following
- Write your first "LLM-as-a-judge" prompts to automate pass/fail decisions
- Iterate prompts based on analysis-feedback loops and evaluator results
- Explore advanced prompting techniques: multi-turn, rubric-based human alignment, and A/B testing
- Experiment with `dspy` for signature-based structured prompting and validation
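A minimal sketch of the rule-based side of this lesson: validating that an LLM's raw output conforms to a strict JSON schema with Pydantic. The `TicketSummary` schema and the sample output string are hypothetical stand-ins; an LLM-as-a-judge prompt would complement this check for open-ended quality criteria.

```python
# pip install pydantic  (Pydantic v2 API)
from pydantic import BaseModel, ValidationError

# Hypothetical schema the prompt asks the model to follow exactly.
class TicketSummary(BaseModel):
    title: str
    priority: str            # e.g. "low" | "medium" | "high"
    action_items: list[str]

def passes_format_check(raw_output: str) -> bool:
    """Rule-based evaluator: does the LLM output parse against the schema?"""
    try:
        TicketSummary.model_validate_json(raw_output)
        return True
    except ValidationError:
        return False

# Stand-in for a real model response; in practice this comes from your LLM call.
llm_output = '{"title": "Login bug", "priority": "high", "action_items": ["reproduce", "patch"]}'
print(passes_format_check(llm_output))                # True: valid JSON, all fields present
print(passes_format_check('{"title": "Login bug"}'))  # False: missing required fields
```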
Intro to AI-Centric Evaluation
AI Bootcamp
- Metrics and evaluation design
- Foundation for future metrics work
- Building synthetic data for AI applications (see the sketch after this list)
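A minimal sketch of how synthetic data and a simple metric fit together: a handful of hand-written (or LLM-generated) input/expected pairs and an exact-match pass rate over them. Everything here, including the `fake_model` stand-in and the `normalize` helper, is hypothetical scaffolding rather than a prescribed evaluation harness.

```python
# A toy evaluation harness: synthetic test cases plus an exact-match pass rate.

def normalize(text: str) -> str:
    """Hypothetical normalization: lowercase and strip whitespace before comparing."""
    return text.strip().lower()

def fake_model(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with your inference client."""
    return "paris" if "capital of france" in prompt.lower() else "i don't know"

# Synthetic test set: (input, expected output) pairs written by hand or generated by another LLM.
cases = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]

passed = sum(
    normalize(fake_model(prompt)) == normalize(expected)
    for prompt, expected in cases
)
print(f"pass rate: {passed}/{len(cases)} = {passed / len(cases):.0%}")  # 1/2 = 50%
```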
From Theory to Practice — Building Your First LLM Application
AI Bootcamp
- Understand how inference works in LLMs (prompt processing vs. autoregressive decoding)
- Explore real-world AI applications: RAG, vertical models, agents, multimodal tools
- Learn the five phases of the model lifecycle, from pretraining through RLHF to evaluation
- Compare architecture types: generic LLMs vs. ChatGPT vs. domain-specialized models
- Work with tools like Hugging Face, Modal, and vector databases
- Build a “Hello World” LLM inference API using OPT-125m on Modal (see the sketch after this list)
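A minimal local sketch of the “Hello World” inference step, using the public facebook/opt-125m checkpoint through the Hugging Face pipeline API. Deploying it as an API on Modal would wrap this same generation function inside a Modal app and web endpoint, which is omitted here; the prompt text and generation settings are illustrative.

```python
# pip install transformers torch
from transformers import pipeline

# Load the small OPT-125m checkpoint; on Modal this would run inside a deployed function.
generator = pipeline("text-generation", model="facebook/opt-125m")

def generate(prompt: str, max_new_tokens: int = 40) -> str:
    """Autoregressive decoding: the model extends the prompt one token at a time."""
    result = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(generate("Hello world, today we are building"))
```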
Navigating the Landscape of LLM Projects & Modalities
AI Bootcamp
- Compare transformer-based LLMs vs. diffusion models and their use cases
- Understand the "lego blocks" of LLM-based systems: prompts, embeddings, generation, inference
- Explore core LLM application types: RAG, vertical models, agents, and multimodal apps
- Learn how LLMs are being used in different roles and industries (e.g., healthcare, finance, legal)
- Discuss practical project scoping: what to build vs. what to outsource, and how to identify viable ideas
- Identify limitations of LLMs: hallucinations, lack of reasoning, sensitivity to prompts
- Highlight real-world startup examples (e.g., AutoShorts, HeadshotPro) and venture-backed tools