        Cover image: https://s3.amazonaws.com/assets.fullstack.io/n/20250722182237417_AI%20Bootcamp%20cover%20image%20%281%29.png

        lesson

        Advanced Multimodal Applications (AI Bootcamp)

        Apply domain-specific LoRA tuning, explore regression/classification heads, and understand diffusion models for text-to-image/video generation.

        lesson

        Multimodal Finetuning with CLIP (AI Bootcamp)

        Fine-tune CLIP for classification/regression (e.g., pizza types, solar prediction), add heads on embeddings, and compare zero-shot vs fine-tuned accuracy.

        lesson

        FFN Components & Training (AI Bootcamp)

        Explore dropout, LayerNorm, positional encoding, skip connections, and build intuition for transformer depth and context encoding.
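
As a minimal sketch of one topic this lesson covers, here is the classic fixed sinusoidal positional encoding (assuming PyTorch is available; the function name is illustrative, not from the course materials):

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed sinusoidal encoding: even dims use sin, odd dims use cos."""
    position = torch.arange(seq_len).unsqueeze(1).float()       # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float()
                         * (-math.log(10000.0) / d_model))       # (d_model/2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=16, d_model=32)
assert pe.shape == (16, 32)
```

Because the encoding is deterministic, each position gets a unique pattern the model can learn to exploit, with no trained parameters.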

        lesson

        Feedforward Networks in Transformers (AI Bootcamp)

        Understand linear/nonlinear layers, implement FFNs in PyTorch, and compare activation functions (ReLU, GELU, SwiGLU).
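
A minimal sketch of the kind of FFN comparison this lesson describes, assuming PyTorch (class and function names here are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SwiGLU feedforward: (SiLU(x W1) * x W3) W2, as used in LLaMA-style FFNs."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_hidden, bias=False)  # gate projection
        self.w3 = nn.Linear(d_model, d_hidden, bias=False)  # value projection
        self.w2 = nn.Linear(d_hidden, d_model, bias=False)  # down projection
    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

def ffn(d_model: int, d_hidden: int, activation: nn.Module) -> nn.Module:
    """Plain two-layer FFN with a pluggable activation (e.g. ReLU or GELU)."""
    return nn.Sequential(nn.Linear(d_model, d_hidden), activation,
                         nn.Linear(d_hidden, d_model))

x = torch.randn(2, 8, 64)  # (batch, seq, d_model)
for block in (ffn(64, 256, nn.ReLU()), ffn(64, 256, nn.GELU()), SwiGLU(64, 256)):
    assert block(x).shape == x.shape  # every FFN variant preserves d_model
```

All three variants expand to a hidden dimension and project back, so they can be swapped inside a transformer block without changing any surrounding shapes.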

        lesson

        Finetuning Case Studies (AI Bootcamp)

        Apply fine-tuning for HTML generation, resume scoring, financial tasks, and compare base vs instruction-tuned model performance.

        lesson

        Instructional Finetuning & LoRA (AI Bootcamp)

        Differentiate fine-tuning vs instruction fine-tuning, apply LoRA/BitFit/prompt tuning, and use Hugging Face PEFT for JSON, tone, or domain tasks.
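
The core LoRA idea can be sketched from scratch in a few lines of PyTorch (this is an illustrative toy layer, not the Hugging Face PEFT implementation the lesson uses):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base Linear plus a trainable low-rank update: y = Wx + (alpha/r) * B(Ax)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False             # base weights stay frozen
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)           # adapter starts as an exact no-op
        self.scale = alpha / r
    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

layer = LoRALinear(nn.Linear(128, 64), r=8)
x = torch.randn(4, 128)
# Before training, the output equals the frozen base layer (B is zero-initialized).
assert torch.allclose(layer(x), layer.base(x))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
# 8*128 + 64*8 = 1536 trainable parameters vs 128*64 + 64 frozen ones
```

Only the small A and B matrices are updated during fine-tuning, which is why LoRA fits on modest hardware.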

        lesson

        Multi-Head Attention & Mixture of Experts (MoE) (AI Bootcamp)

        Build single-head and multi-head transformer models, implement Mixture-of-Experts (MoE) attention, and evaluate fluency/generalization.
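
A rough sketch of top-1 (Switch-style) expert routing, under the assumption that each token is sent to a single expert FFN; this toy layer is illustrative, not the course's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoE(nn.Module):
    """A router picks one expert FFN per token; outputs are scaled by the gate prob."""
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )
    def forward(self, x):                         # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)  # routing probabilities
        prob, idx = gate.max(dim=-1)              # chosen expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # scaling by the gate prob lets the router receive gradient
                out[mask] = prob[mask].unsqueeze(-1) * expert(x[mask])
        return out

moe = Top1MoE(d_model=32, d_hidden=64, n_experts=4)
assert moe(torch.randn(10, 32)).shape == (10, 32)
```

Each token only pays the compute cost of one expert, so capacity grows with the number of experts while per-token FLOPs stay roughly constant.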

        lesson

        Implementing Self-Attention (AI Bootcamp)

        Implement self-attention in PyTorch, visualize attention heatmaps with real LLMs, and compare loss curves vs trigram models.
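
A minimal single-head self-attention module of the kind this lesson builds, assuming PyTorch (names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention with learned Q/K/V projections."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)
        self.scale = d_model ** -0.5
    def forward(self, x):                               # x: (batch, seq, d_model)
        Q, K, V = self.q(x), self.k(x), self.v(x)
        scores = Q @ K.transpose(-2, -1) * self.scale   # (batch, seq, seq)
        weights = F.softmax(scores, dim=-1)             # each row sums to 1
        return weights @ V, weights

attn = SelfAttention(d_model=16)
out, w = attn(torch.randn(2, 5, 16))
assert out.shape == (2, 5, 16) and w.shape == (2, 5, 5)
```

The returned `weights` matrix is exactly what attention-heatmap visualizations plot: row i shows how much token i attends to every other token.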

        lesson

        Mechanics of Self-Attention (AI Bootcamp)

        Learn self-attention mechanics (Query, Key, Value, dot products, weighted sums), compute attention scores, and visualize softmax’s role.
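
The mechanics can be worked through by hand on tiny tensors (a toy numeric example, assuming PyTorch; the Q/K/V values are made up for illustration):

```python
import torch
import torch.nn.functional as F

# Toy sequence of 3 tokens with 2-dimensional Query/Key/Value vectors.
Q = torch.tensor([[1., 0.], [0., 1.], [1., 1.]])
K = torch.tensor([[1., 0.], [0., 1.], [1., 1.]])
V = torch.tensor([[1., 2.], [3., 4.], [5., 6.]])

d_k = Q.shape[-1]
scores = Q @ K.T / d_k ** 0.5        # dot-product similarity of each query with each key
weights = F.softmax(scores, dim=-1)  # softmax turns raw scores into a distribution per row
output = weights @ V                 # each output is a weighted sum of the value vectors

assert torch.allclose(weights.sum(dim=-1), torch.ones(3))  # rows are probabilities
assert output.shape == (3, 2)
```

Softmax's role is visible in `weights`: larger dot products get exponentially more of each row's mass, while every row still sums to 1.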

        lesson

        Motivation for Attention Mechanisms (AI Bootcamp)

        Understand limitations of fixed-window n-gram models and explore how word meaning changes with context (static vs contextual embeddings).

        lesson

        Neural N-Gram Models (AI Bootcamp)

        One-hot encode inputs, build PyTorch bigram/trigram neural networks, train with cross-entropy loss, and monitor training dynamics.
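
A compressed sketch of this pipeline on a toy corpus, assuming PyTorch (the corpus and all names are illustrative, not from the course):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy corpus: predict the next character from the current one (a neural bigram model).
text = "hello world hello"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
xs = torch.tensor([stoi[c] for c in text[:-1]])  # current characters
ys = torch.tensor([stoi[c] for c in text[1:]])   # next characters (targets)

V = len(vocab)
x_onehot = F.one_hot(xs, num_classes=V).float()  # one-hot encode inputs
model = nn.Linear(V, V, bias=False)              # one linear layer = bigram logit table
opt = torch.optim.Adam(model.parameters(), lr=0.1)

losses = []
for _ in range(100):
    loss = F.cross_entropy(model(x_onehot), ys)  # negative log-likelihood of next char
    opt.zero_grad(); loss.backward(); opt.step()
    losses.append(loss.item())

assert losses[-1] < losses[0]                    # training loss decreases
```

Tracking `losses` over the steps is the "monitor training dynamics" part: with one-hot inputs, each row of the weight matrix ends up approximating the log-counts of a classical bigram table.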

