Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

    Top AI Applications: Examples of AI Applications

    AI applications represent a transformative force across industries, leveraging algorithms to perform tasks ranging from simple automation to complex decision-making. These systems, rooted in machine learning and data analysis, have become integral to daily life, powering tools like voice assistants, recommendation engines, and fraud detection mechanisms. For developers and tech professionals, understanding AI applications is critical, as they underpin innovations in web development, software engineering, and emerging technologies like agentic workflows. This section explores the definition, categories, and significance of AI applications, highlighting their role in shaping modern digital ecosystems.

    AI applications are software systems that use artificial intelligence to execute tasks autonomously or with minimal human intervention. Common examples include digital assistants (e.g., Siri, Alexa), personalized streaming recommendations, and real-time traffic navigation tools. These applications rely on techniques like natural language processing (NLP), computer vision, and predictive analytics to interpret user behavior, generate insights, and automate responses. For instance, credit card fraud detection systems analyze transaction patterns to flag anomalies, reducing financial risk for users. In healthcare, generative AI applications assist in drug discovery and diagnostic imaging, accelerating processes that traditionally required extensive human expertise. Such use cases underscore AI's versatility in solving domain-specific challenges while improving efficiency. See the Natural Language Processing (NLP) Applications section for more details on how NLP powers voice assistants and similar tools.

    AI applications can be broadly categorized into two types: analytical and generative systems. Analytical AI focuses on interpreting data to inform decisions, as seen in search engines optimizing query results or social media platforms curating content feeds. Generative AI, by contrast, creates new content, such as text, images, or code, and is increasingly adopted in advertising, manufacturing, and software development. Another classification distinguishes reactive agents (task-specific systems like chess-playing algorithms) from learning agents (adaptive models that improve over time, such as recommendation engines on streaming platforms). The rise of AI agents, autonomous systems capable of multi-step workflows, further expands the possibilities, with 2025 marking a pivotal year for their integration into real-world workflows. These distinctions highlight the evolving landscape of AI capabilities, tailored to diverse technical and business needs. Building on concepts from the Building AI Applications with LLMs section, generative AI's ability to produce text and code exemplifies its role in modern development pipelines.
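
    To make the fraud-detection example above concrete, here is a minimal, self-contained sketch of the statistical core of anomaly flagging. The function name, data, and z-score threshold are all illustrative, not any vendor's actual implementation; production systems use far richer features and learned models.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts that deviate sharply from the account's history.

    Scores each amount by how many standard deviations it sits from the mean
    and returns the outliers beyond the threshold -- a toy stand-in for the
    pattern analysis a real fraud-detection system performs.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A run of ordinary purchases plus one outlier.
history = [12.50, 8.99, 23.10, 15.00, 9.75, 18.40, 11.20, 950.00]
print(flag_anomalies(history, threshold=2.0))
```

    Real systems replace the z-score with models trained on labeled fraud data, but the shape of the task, scoring each event against historical patterns, is the same.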

      Review of Claude Opus 4.5

      Watch: "Claude Opus 4.5 is the BEST coding model ever..." by Better Stack

      Claude Opus 4.5 represents a significant step forward in AI-powered development tools, introduced by Anthropic on November 24, 2025. The model combines advanced coding capabilities with enhanced reliability for agentic workflows, positioning itself as a competitive alternative to existing AI development platforms. The release emphasizes "coherence, stability, iterative refinement, and reliable execution," with early testing highlighting its ability to handle "messy, real-world programming tasks". Developers using GitHub Copilot in conjunction with Claude Code report "high-quality code" generation and improved task automation. The model's design balances speed with precision, as noted by John Hughes, who describes it as an "excellent, fast model" suitable for both casual coding and complex system development. Key innovations include a "plan mode" that streamlines decision-making, reducing time spent on debugging and iterative adjustments; this mode leverages structured reasoning to optimize complex workflows. This section establishes the foundational features of Opus 4.5 and outlines its relevance to technical professionals.

      To begin working with Claude Opus 4.5, start by verifying access to Anthropic's API platform, as the model is deployed exclusively through this interface. While specific hardware requirements are not explicitly documented, users should ensure sufficient computational resources to handle API requests, particularly for complex tasks like long-horizon planning or large-scale code generation. Anthropic's documentation provides detailed guidance on account setup, which includes generating an API key, a critical step for authentication during integration. For developers working in local environments, the Anthropic CLI tool enables streamlined interactions, though installation instructions focus on compatibility with standard development workflows rather than explicit system specifications.
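
      The setup steps above can be sketched in a few lines. This is a hedged illustration, not official Anthropic sample code: the model identifier string is an assumption (check Anthropic's model listing for the exact Opus 4.5 ID), and the helper simply assembles the keyword arguments that the `anthropic` Python SDK's Messages API expects.

```python
import os

def build_opus_request(prompt: str, model: str = "claude-opus-4-5") -> dict:
    """Assemble keyword arguments for an Anthropic Messages API call.

    The model ID here is illustrative; verify the current identifier in
    Anthropic's documentation. With the official SDK, the returned dict
    would be passed as client.messages.create(**params).
    """
    api_key = os.environ.get("ANTHROPIC_API_KEY")  # generated during account setup
    if not api_key:
        raise RuntimeError("ANTHROPIC_API_KEY is not set")
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# Usage sketch (requires the anthropic package and a valid key):
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_opus_request("Refactor this function..."))
```

      Keeping the API key in an environment variable rather than in source code is the standard practice Anthropic's setup guidance describes.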


        What Is AI Logging, LLM Logging, or AI Application Logging?

        Watch: "Gen AI Project | Log Classification System Using Deepseek R1 LLM, NLP, Regex, BERT" by codebasics

        AI logging refers to the systematic recording of data and metadata generated by artificial intelligence systems, including the inputs, outputs, and contextual information of machine learning models and large language models (LLMs) during execution. This process captures critical details such as prompts, model responses, system parameters, and error states, enabling visibility into AI workflows. For LLMs, logging is particularly vital due to their complexity and the dynamic nature of their interactions, which require granular tracking of inference calls, token usage, and performance metrics.

        The importance of logging in AI applications stems from its role in debugging, compliance, auditing, and iterative model improvement. By maintaining detailed logs, developers can trace decision-making pathways, identify biases, and ensure alignment with ethical and operational standards. AI logging systems typically include structured records of prompts, model responses, token usage, latency, and error states.
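
        A minimal sketch of such a structured record, using only the standard library: each inference call is logged as one JSON object. The field names are illustrative; real deployments add whatever metadata (user ID, temperature, error state) their auditing requires.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("llm")
logging.basicConfig(level=logging.INFO)

def log_llm_call(prompt: str, response: str, model: str,
                 latency_ms: float, token_usage: dict) -> dict:
    """Emit one structured log record for a single LLM inference call."""
    record = {
        "call_id": str(uuid.uuid4()),   # unique ID to trace this call later
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "latency_ms": latency_ms,
        "token_usage": token_usage,     # e.g. {"input": 42, "output": 318}
    }
    logger.info(json.dumps(record))     # one JSON object per line
    return record

rec = log_llm_call("Summarize this article.", "The article argues...",
                   model="example-llm", latency_ms=812.4,
                   token_usage={"input": 42, "output": 318})
```

        Emitting one JSON object per line keeps the logs machine-parseable, which is what makes downstream debugging, auditing, and bias analysis practical.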

          Practical Guide: Implementing AI for Predictive Maintenance

          Predictive maintenance is a data-driven strategy that leverages analytics and real-time monitoring to anticipate equipment failures before they occur. Unlike reactive or scheduled maintenance, this approach uses sensor data and historical performance metrics to optimize maintenance schedules, reducing unplanned downtime and extending asset lifespans. Its benefits include cost savings from avoiding catastrophic failures, improved operational efficiency, and enhanced safety for personnel and systems. For example, manufacturers adopting predictive maintenance report downtime reductions of up to 50% and maintenance cost savings of 25–30%. These advantages stem from the ability to prioritize maintenance tasks based on actual asset conditions rather than fixed intervals.

          Artificial intelligence (AI) enhances predictive maintenance by enabling faster, more accurate analysis of complex datasets. Traditional methods often struggle with the volume and variability of sensor data, but AI algorithms can detect subtle patterns indicative of potential failures. Machine learning models, such as supervised learning techniques, are trained on historical equipment data to predict future performance degradation. For instance, Siemens employs AI to generate maintenance recommendations for industrial machines, ensuring the recommendations adapt to evolving operational conditions. AI also integrates with IoT sensors to provide real-time insights, allowing organizations to respond to anomalies before they escalate into critical issues. This capability is particularly valuable in industries like energy and manufacturing, where equipment reliability directly impacts productivity.

          AI-driven predictive maintenance relies on a combination of machine learning, statistical analysis, and edge computing. Supervised learning algorithms predict equipment failures by correlating sensor data (e.g., temperature, vibration) with past failure events. Unsupervised learning techniques, such as clustering, identify abnormal patterns in unlabeled datasets, flagging potential issues for further investigation. Reinforcement learning is also emerging as a tool for dynamic maintenance optimization, where models learn optimal intervention strategies through iterative feedback. Additionally, AI systems leverage digital twins, virtual replicas of physical assets, to simulate scenarios and test maintenance protocols without disrupting operations. These techniques are often deployed on cloud-based platforms, which aggregate data from distributed assets and apply scalable AI models to generate actionable insights.
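
          The pattern-detection step can be illustrated with a deliberately simple drift detector: compare the mean of a sensor's most recent readings against a healthy baseline and raise a maintenance flag once the drift exceeds a threshold. All names, thresholds, and data here are hypothetical; real systems use learned models over many correlated sensors rather than a single percentage cutoff.

```python
from statistics import mean

def detect_degradation(readings, baseline_n=20, window_n=5, drift_pct=15.0):
    """Flag a sensor whose recent readings drift above its healthy baseline.

    Compares the mean of the latest window against the mean of an initial
    baseline period and flags the asset once drift exceeds drift_pct percent.
    Returns (flagged, drift_percent).
    """
    baseline = mean(readings[:baseline_n])   # assumed-healthy early readings
    recent = mean(readings[-window_n:])      # most recent behavior
    drift = (recent - baseline) / baseline * 100
    return drift > drift_pct, round(drift, 1)

# Simulated vibration amplitudes: stable at first, rising as a bearing wears.
healthy = [1.0 + 0.02 * (i % 3) for i in range(20)]
worn = [1.25, 1.31, 1.38, 1.42, 1.50]
flag, drift = detect_degradation(healthy + worn)
```

          A supervised model would replace the fixed threshold with one learned from past failure events, but the decision being made, "does current behavior deviate from the asset's healthy pattern?", is the same.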

            How to Implement MCP Server in Drug Discovery AI

            MCP Server, or Model Context Protocol Server, is a specialized framework designed to bridge AI systems with domain-specific databases and tools, particularly in drug discovery research. It enables seamless integration of biomedical data, chemical informatics, and clinical knowledge into AI workflows by acting as an intermediary between large language models (LLMs) and scientific databases. For instance, the ChEMBL-MCP-Server, developed by Augmented Nature, provides 22 specialized tools for querying chemical structures, pharmacological profiles, and experimental data, directly supporting AI-driven drug discovery tasks. Similarly, FDB's MCP Server enhances AI clinical decision support by extending access to drug databases beyond traditional APIs, offering tools tailored for biomedical research. These implementations highlight the server's role in contextualizing AI outputs with domain-specific knowledge, a critical requirement for drug discovery, where accuracy and relevance are paramount. See the Real-World Applications section for more details on implementations like these.

            Drug discovery AI relies on the synthesis of vast, heterogeneous datasets, including molecular structures, biological assays, and clinical trial results, to identify potential drug candidates. However, AI models operating in isolation from these data sources often produce outputs that lack scientific validity or actionable insight. This is where MCP Server becomes essential. By giving AI systems direct access to curated databases like ChEMBL or Open Targets, MCP Server ensures that models can dynamically retrieve, process, and apply domain-specific information during inference. For example, the BioMCP toolkit explicitly connects AI models to drug discovery pipelines, enabling real-time integration of biopharmaceutical data. This approach not only accelerates hypothesis generation but also reduces the risk of errors stemming from outdated or incomplete data.

            The implementation of MCP Server in drug discovery AI offers three primary benefits: data contextualization, multi-agent collaboration, and tool interoperability. First, by linking AI models to specialized databases, MCP Server ensures that outputs are grounded in scientifically validated information; as mentioned in the Optimizing section, advanced RAG techniques enhance this data contextualization. The Azure AI Foundry Labs' MCP Server, for instance, equips GitHub Copilot with custom biomedical data to refine drug discovery workflows. Second, MCP Server supports multi-agent systems in which multiple AI agents collaborate on tasks like molecular design or toxicity prediction; the Tippy AI Agent Pod, which uses MCP Server for external client access, demonstrates how distributed agents can share context while maintaining task-specific focus. Third, the server's tool interoperability allows integration with existing scientific software, such as chemical informatics platforms, without extensive re-engineering.
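
            The tool-dispatch idea at the heart of an MCP server can be sketched in miniature. This is not the actual MCP wire protocol (which is JSON-RPC based) nor the ChEMBL-MCP-Server's API; the tool name, registry, and request shape are all hypothetical, chosen only to show how named tools get registered and invoked on a model's behalf.

```python
import json

# Hypothetical tool registry: maps tool names to handler functions.
TOOLS = {}

def tool(name):
    """Decorator that registers a handler under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_compound")
def lookup_compound(chembl_id: str) -> dict:
    # A real server would query the ChEMBL database here.
    return {"id": chembl_id, "smiles": "<from database>"}

def handle_call(request_json: str) -> str:
    """Dispatch a tool-call request of the form
    {"tool": "...", "arguments": {...}} to the registered handler --
    a simplified stand-in for MCP's tool-invocation flow."""
    req = json.loads(request_json)
    handler = TOOLS.get(req["tool"])
    if handler is None:
        return json.dumps({"error": f"unknown tool: {req['tool']}"})
    return json.dumps({"result": handler(**req["arguments"])})

reply = handle_call(
    '{"tool": "lookup_compound", "arguments": {"chembl_id": "CHEMBL25"}}'
)
```

            The registry pattern is what gives MCP servers their interoperability: adding a new capability means registering one more named tool, with no re-engineering of the model-facing interface.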