Fundamentals of transformers - Live Workshop
  • MODULE 1
    What are LLMs?

    Demystifying terminology behind LLMs

    • LESSON 1.1: Intro
    • LESSON 1.2: ChatGPT is to LLM, as Kleenex is to tissue
  • MODULE 2
    What LLMs predict

    Introduction to Autoregressive Decoding

    • LESSON 2.1: Tokens
    • LESSON 2.2: Demo - Manual LLM inference
    • LESSON 2.3: LLMs generate text
  • MODULE 3
    How LLMs predict

    The architecture for a Large Language Model

    • LESSON 3.1: Vectors, intuitively
    • LESSON 3.2: Word embeddings and nearest neighbors
    • LESSON 3.3: Demo - Semantic meaning of word embeddings
  • MODULE 4
    How Transformers predict

    The innards of a transformer layer

    • LESSON 4.1: Self-attention adds context
    • LESSON 4.2: Demo - Adding "context" to a vector
    • LESSON 4.3: MLP transforms
    • LESSON 4.4: Demo - The necessity of non-linearities
  • MODULE 5
    How LLMs use position

    How to Leverage Positional Bias

    • LESSON 5.1: Absolute positional encoding
    • LESSON 5.2: Demo - Cons of absolute positional bias
    • LESSON 5.3: Demo - Skip connections
    • LESSON 5.4: Batch norm
    • LESSON 5.5: RMS norm
  • MODULE 6
    How LLMs attend

    How to find the needle in the haystack

    • LESSON 6.1: Workshop feedback Q&A
  • MODULE 7
    Modern LLM connection to papers

    Connection to papers

    • LESSON 7.1: Modern day transformer architectures
    • LESSON 7.2: Q&A