Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

    The Hidden Bottleneck in LLM Streaming: Function Calls (And How to Fix It)

    Picture this: you’re building a real-time LLM-powered app. Your users expect fast, continuous updates from the AI, but instead they’re staring at a frozen screen. What gives? Perhaps surprisingly, it’s probably not your LLM that’s slowing things down. It’s your function calls. Every time your app makes a call to process data, hit an API, or load a large file, you risk blocking the stream. The result? Delays, lag, and an experience that feels anything but “real-time.”
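    The core fix the teaser hints at can be sketched in a few lines: offload the blocking call to a worker thread so the event loop keeps streaming. This is a minimal illustration, assuming `slow_lookup` stands in for whatever synchronous work (API hit, file read) your app performs per chunk.

    ```python
    import asyncio
    import time

    def slow_lookup(token: str) -> str:
        """Stand-in for a blocking call: an API hit, DB query, or file read."""
        time.sleep(0.05)
        return token.upper()

    async def stream_with_offload(tokens):
        for token in tokens:
            # asyncio.to_thread runs the blocking call in a worker thread,
            # so the event loop stays free to serve other streams.
            yield await asyncio.to_thread(slow_lookup, token)

    async def main():
        return [t async for t in stream_with_offload(["real", "time"])]

    print(asyncio.run(main()))  # ['REAL', 'TIME']
    ```

    Calling `slow_lookup` directly inside the generator would freeze every other coroutine for the duration of the sleep; `asyncio.to_thread` keeps the stream responsive.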

      Building the Ideal AI Agent: From Async Event Streams to Context-Aware State Management

      The dream of an autonomous AI agent isn’t just about generating smart responses — it’s about making those responses fast, interactive, and context-aware. To achieve this, you need to manage state across asynchronous tasks, handle real-time communication, and separate logic cleanly. By the end of this blog, you’ll have a step-by-step understanding of how to design an agent that’s efficient, elegant, and easy to scale.
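      The pattern described — state shared across async tasks communicating over an event stream — can be sketched with an `asyncio.Queue`. The event names and `AgentState` fields here are illustrative, not from any particular framework.

      ```python
      import asyncio
      from dataclasses import dataclass, field

      @dataclass
      class AgentState:
          """Context carried across async tasks (fields are illustrative)."""
          history: list = field(default_factory=list)

      async def producer(queue: asyncio.Queue):
          # In a real agent these events would come from users and tools.
          for event in ["user_msg", "tool_result", "done"]:
              await queue.put(event)

      async def consumer(queue: asyncio.Queue, state: AgentState):
          while True:
              event = await queue.get()
              if event == "done":
                  break
              state.history.append(event)  # state survives across events

      async def main() -> list:
          queue, state = asyncio.Queue(), AgentState()
          await asyncio.gather(producer(queue), consumer(queue, state))
          return state.history

      print(asyncio.run(main()))  # ['user_msg', 'tool_result']
      ```

      The queue decouples event production from handling, which is what lets the logic stay cleanly separated as the agent grows.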

        Self-Correcting AI Agents: How to Build AI That Learns From Its Mistakes

        What if your AI agent could recognize its own mistakes, learn from them, and try again — without human intervention? Welcome to the world of self-correcting AI agents. Most AI models generate outputs in a single attempt, but self-correcting agents go further: they can identify when an error occurs, analyze the cause, and apply a fix, all in real time. Think of it as an AI with a built-in "trial and error" mindset. In this blog, you’ll learn how to build one.
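        The "trial and error" loop boils down to generate → validate → feed the error back → retry. Here is a minimal sketch; `generate` and `validate` are hypothetical callables standing in for an LLM call and an output checker, and the toy model below is hard-coded to fail once so the correction path is visible.

        ```python
        import json

        def self_correcting_run(generate, validate, max_attempts=3):
            """Retry loop: generate, check, pass the error back, try again."""
            feedback = None
            for attempt in range(1, max_attempts + 1):
                output = generate(feedback)
                error = validate(output)
                if error is None:
                    return output, attempt
                feedback = error  # the agent "learns" from the failure message
            raise RuntimeError(f"still failing after {max_attempts} attempts: {feedback}")

        # Toy stand-ins: the "model" only emits valid JSON once it sees feedback.
        def fake_model(feedback):
            return '{"ok": true}' if feedback else '{"ok": tru'  # first try broken

        def json_validator(text):
            try:
                json.loads(text)
                return None
            except json.JSONDecodeError as e:
                return str(e)

        print(self_correcting_run(fake_model, json_validator))  # ('{"ok": true}', 2)
        ```

        In a real agent, `feedback` would be appended to the prompt so the model can see exactly what went wrong on the previous attempt.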

          How to Build Smarter AI Agents with Dynamic Tooling

          Imagine having an AI agent that can access real-time weather data, process complex calculations, and improve itself after making a mistake — all without human intervention. Sounds kinda neat, right? Well, it’s not as hard to build as you might think. Large Language Models (LLMs) like GPT-4 are impressive, but they have limits. Out of the box, they can't access live data or perform calculations that require real-time inputs. But with dynamic tooling, you can break past these limits, letting agents fetch live information, make decisions, and even self-correct when things go wrong. In this guide, we’ll walk you through how to build an AI agent that can do all of the above.
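          At its core, dynamic tooling is a registry the agent dispatches into at runtime. A minimal sketch, assuming the LLM emits a tool call as JSON; the tool names and the weather stub are illustrative, not a real API.

          ```python
          # Registry mapping tool names to callables.
          TOOLS = {}

          def tool(name):
              """Decorator that registers a function as a callable tool."""
              def register(fn):
                  TOOLS[name] = fn
                  return fn
              return register

          @tool("get_weather")
          def get_weather(city: str) -> str:
              return f"Sunny in {city}"  # stub; a real tool would hit a weather API

          @tool("calculate")
          def calculate(expression: str) -> str:
              # eval on a trusted demo expression only; never eval untrusted input
              return str(eval(expression, {"__builtins__": {}}, {}))

          def dispatch(call: dict) -> str:
              """`call` is what the LLM would emit, e.g. {"tool": ..., "args": {...}}."""
              return TOOLS[call["tool"]](**call["args"])

          print(dispatch({"tool": "calculate", "args": {"expression": "6 * 7"}}))  # 42
          ```

          Because tools are looked up by name at runtime, adding a new capability is just another `@tool(...)` function — no change to the dispatch logic.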

            Mastering Real-Time AI: A Developer’s Guide to Building Streaming LLMs with FastAPI and Transformers

            Real-time AI is transforming how users experience applications. Gone are the days when users had to wait for entire responses to load; instead, modern apps stream data in chunks. For developers, this shift isn't just a "nice-to-have" — it's essential. Chatbots, search engines, and AI-powered customer support apps are now expected to stream LLM (Large Language Model) responses. But how do you actually build one? This guide walks you through the process, step by step, using FastAPI, Transformers, and a healthy dose of asynchronous programming. By the end, you'll have a working streaming endpoint capable of serving LLM-generated text in real time.
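            The heart of such an endpoint is an async generator of text chunks. A minimal sketch: the canned reply below stands in for real model generation (e.g. a Transformers `TextIteratorStreamer`), and in the FastAPI version you would return `StreamingResponse(token_stream(prompt), media_type="text/plain")` from a route instead of joining the chunks locally.

            ```python
            import asyncio

            async def token_stream(prompt: str):
                """Yield text chunks as they are produced, not all at once."""
                for chunk in ["Stream", "ing ", "works"]:
                    await asyncio.sleep(0)  # yield control, as real generation would
                    yield chunk

            async def consume(prompt: str) -> str:
                # Local consumer for demonstration; a browser would receive
                # the same chunks incrementally over HTTP.
                return "".join([c async for c in token_stream(prompt)])

            print(asyncio.run(consume("hi")))  # Streaming works
            ```

            The key design point is that nothing in the generator waits for the full response: each chunk is handed off as soon as it exists, which is what makes the UI feel live.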