Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Should I Directly Access Data From the Apollo Client or From React Component State?

Consider the following code snippet of a React component, <App />, that... You may have noticed that the data sent back by the mutation provides the user's information in a logIn field, and any data returned from a successful mutation automatically gets added to the local Apollo Client cache. So why do we have a user state variable when we could just access the user's information via the data field in the mutation result object? For example, like this:

Accessing data directly isn't necessarily an anti-pattern, but data can be either undefined or { logIn: { id: ..., token: ..., ... } }. Therefore, you would need to check whether data is undefined directly in the body (and rendering section) of the <App /> component. Even after you determine that data is not undefined, you would still need to perform the same number of checks as before for the logIn property, etc. By using the setUser approach, you start with a baseline user object whose properties are initialized to null, so you don't have to check whether the user is undefined in the body (and rendering section) of the <App /> component (one less check). Additionally, with this approach, you only perform the checks for the data inside the onCompleted function.

You could instead access the cache directly via the update function, which is called after the mutation completes and receives the cache as an argument, like so:

However, at this point the cache doesn't actually contain any user data (to confirm this, print JSON.stringify(cache.data.data) in the update function). The user data is provided separately as the update function's second argument, so you would need to manually modify the cache to add this user. Once the cached data is updated, the change gets broadcast across the application and re-renders the components with active queries that correspond to the updated data. Each component that relies on user would therefore need its own useQuery call to fetch the user.
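Since the original snippet is elided above, here is a hedged sketch of the setUser approach being discussed — the mutation name LOG_IN, the User shape, and the field names are assumptions, not the article's actual code:

```typescript
// Illustrative sketch only — LOG_IN and the User shape are assumed names.
import { gql, useMutation } from "@apollo/client";
import { useState } from "react";

const LOG_IN = gql`
  mutation LogIn {
    logIn {
      id
      token
    }
  }
`;

interface User {
  id: string | null;
  token: string | null;
}

function App() {
  // Baseline user object: properties start as null, so the rendering
  // section never needs to check whether `user` is undefined.
  const [user, setUser] = useState<User>({ id: null, token: null });

  const [logIn] = useMutation(LOG_IN, {
    // All undefined checks happen once, inside onCompleted...
    onCompleted: (data) => {
      if (data?.logIn) setUser(data.logIn);
    },
    // ...whereas the `update` alternative receives the cache (which does
    // not yet contain user data) plus the mutation result, and would have
    // to write the user into the cache manually:
    // update: (cache, result) => { /* cache.writeQuery(...) */ },
  });

  // ...
}
```

The trade-off described in the article follows directly from this shape: the checks live in one callback, instead of being repeated wherever data is read.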
On initial page load, this is an extra, unnecessary network request, since the LOG_IN mutation already gets us the user data. After the initial page load, though, if the user decides to log in or log out, the user data will come from the update to the cache being broadcast to the useQuery calls. In this case, it's preferable to use setUser if it means one less network request on initial page load. As always, it's completely up to you how you want to manage state in your applications, but be sure to evaluate the trade-offs of each possible solution and pick the one that best suits your situation. Check out this CodeSandbox example to see what I mean: https://codesandbox.io/embed/mutations-example-app-final-tjoje?fontsize=14&hidenavigation=1&theme=dark If you want to learn more advanced techniques with TypeScript, GraphQL and React, or learn how to build a production-ready Airbnb-like application from scratch, then check out our TinyHouse: A Fullstack React Masterclass with TypeScript and GraphQL:


Building Your First ASP.NET Core RESTful API for Node.js Developers - Introduction (Part 1)

Over the past decade, many developers started their backend development journey with Node.js. What makes Node.js compelling to developers is the benefit of creating client-side and server-side applications with a single programming language: JavaScript. This convenience, along with the growing interest in frameworks written in modern programming languages like Golang and Rust, means developers are less likely to branch out to older, more established technologies like ASP.NET.

Developed and maintained by Microsoft, ASP.NET (Active Server Pages Network Enabled Technologies) is a framework for creating dynamic web applications and services on the .NET platform. You can write ASP.NET applications in any .NET programming language: C#, F# or Visual Basic. ASP.NET is widely used across many industries, most notably by large corporations and government agencies. Despite Microsoft's efforts to adapt ASP.NET to the rapidly evolving web development landscape, such as the 2009 release of ASP.NET MVC in response to the popularity of MVC frameworks like Django and Ruby on Rails, ASP.NET continued to suffer from several limitations:

To migrate away from ASP.NET's monolithic design, Microsoft re-implemented ASP.NET as a modular, cross-platform, open-source framework named ASP.NET Core. Released in 2016, ASP.NET Core comes with built-in support for dependency injection and provides... With these features, you can build and run lightweight web applications on Windows, Linux and macOS via the .NET Core runtime. In fact, developers can host their web applications not just on IIS, but also on Nginx, Docker, Apache and much more. This all makes ASP.NET Core applications suitable for containerization and optimized for cloud-based environments. Any missing functionality can be fetched as packages from NuGet. As the package manager for .NET, NuGet is the equivalent of npm for Node.js.
At its core, an ASP.NET Core application is a self-contained console application that self-hosts a web server (by default, the cross-platform web server Kestrel), which processes incoming requests and passes them directly to the application. Once it finishes handling a request, the application passes the response to the web server, which sends it directly to the client (or reverse proxy). Keeping the web server independent of the application this way makes testing and debugging much simpler, especially compared to previous versions of ASP.NET, where IIS directly executes the application's methods.

So why might Node.js developers consider learning ASP.NET Core? In the latest round of TechEmpower benchmarks, ASP.NET Core significantly outperforms Node.js, sending back almost nine times more plaintext responses per second. Below, I'm going to show you how to build your first ASP.NET Core RESTful API with C#, a strongly-typed, object-oriented language. Throughout this tutorial, I will relate concepts and patterns to those you may have already encountered in an Express.js RESTful API.

To get started, verify that you have the latest LTS version of the .NET Core SDK, v6, installed on your machine. The .NET Core SDK (Software Development Kit) consists of everything you need to create and run .NET applications: If your machine does not have the .NET Core SDK installed, then download the latest LTS version of the .NET Core SDK for your specific platform and follow the installation directions.

Once installed, create a new directory named weather-api. Then, within this directory, create a new solution file: A solution file lists and tracks all of the projects that belong to a .NET Core application. For example, an application may include an ASP.NET Core Web API project, several class libraries (for directly interfacing with databases via the Entity Framework) and an ASP.NET Core with React.js project.
With a solution file, the dotnet CLI knows which projects to restore NuGet packages for (dotnet restore), build (dotnet build) and test (dotnet test) in your application. In this case, you will find a weather-api.sln file in the root of the project directory.

Let's create a new ASP.NET Core Web API project: The dotnet new command scaffolds a new project or file based on a specified template, such as sln for a solution file and webapi for an ASP.NET Core Web API. The -n option tells the dotnet new command the name of the output project/file. In this case, you will find the ASP.NET Core Web API project located within an API directory.

Let's add this project to the solution file: You can verify that the project has been added to the solution file by running the following command, which lists all of the projects added to the solution file: If you open the solution file, you will find the "API" project listed with a project type GUID (FAE04EC0-301F-11D3-BF4B-00C04F79EFBC for C#), a reference to the project's .csproj file and a unique project GUID.

Let's restore the project's NuGet packages. For Node.js developers, this is similar to running npm install / yarn install on a freshly cloned Git repository to reinstall dependencies. If you are building this project on macOS, you can find the NuGet packages in the ~/.nuget/packages directory. These packages relate to the package referenced in API/API.csproj: Swashbuckle.AspNetCore, which sets up Swagger for ASP.NET Core APIs. You can check the project's obj/project.nuget.cache file for absolute paths to the project's NuGet packages.

Let's take a look at the three C# files in the API directory: Much like the index.js file of a simple Express.js RESTful API, the Program.cs file bootstraps and starts up a RESTful API, but for ASP.NET Core.
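Since the individual commands are elided above, the CLI workflow described in the last few paragraphs can be sketched roughly as follows (directory and project names are taken from the text; exact output varies by SDK version):

```shell
mkdir weather-api && cd weather-api

# Create the solution file (produces weather-api.sln in the project root).
dotnet new sln -n weather-api

# Scaffold the ASP.NET Core Web API project into an API directory.
dotnet new webapi -n API

# Track the project in the solution, then list the tracked projects.
dotnet sln add API/API.csproj
dotnet sln list

# Restore NuGet packages (similar to npm install / yarn install).
dotnet restore
```

This is a sketch of the standard dotnet CLI workflow, not a transcript from the article itself.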
It follows a minimal hosting model that consolidates the Startup.cs and Program.cs files from previous ASP.NET versions into a single Program.cs file. Plus, the Program.cs file now makes use of top-level statements and implicit using directives to eliminate extra boilerplate code like the class with a Main method and using directives, respectively. As you can see in the Program.cs file, setting up and running a RESTful API with ASP.NET Core requires significantly less code than previous ASP.NET versions. (Program.cs)

Program.cs begins by instantiating a new WebApplicationBuilder, a builder for web applications and services. WebApplicationBuilder follows the builder pattern, which breaks down the construction of a complex object into multiple, distinct steps. This means that we delay the creation of the built object (var app = builder.Build()) until we finish configuring the builder. Upon instantiation, the builder object comes with preconfigured defaults for several properties: Alongside these preconfigured defaults, we explicitly register additional services in the built-in DI (dependency injection) container with WebApplicationBuilder.Services. This DI container simplifies dependency injection in ASP.NET Core (i.e., it automatically resolves dependencies and manages their lifetimes) and is responsible for making all registered services available to the entire application. Here, the following methods get called on builder.Services: After registering these services, we call the builder object's Build() method to build the WebApplication (host) with these configurations. By default, the WebApplication uses Kestrel as the web server. Then, we check if the application is running in a development environment and, if so, add the Swagger middleware (UseSwagger() and UseSwaggerUI()) to the application's middleware pipeline.
Notice how these Use{Feature} extension methods that add middleware are prefixed with Use, similar to how Express.js calls app.use() to mount middleware functions. Calling the UseSwaggerUI() method automatically enables the static file middleware. Express.js also provides a built-in middleware function for serving static assets (app.use(express.static("<path_to_static_files>"))). The remaining middleware gets applied to all requests regardless of environment: After all of this middleware is added to the middleware pipeline, we call the MapControllers() method to automatically create an endpoint for each of the application's controller actions and add them to the IEndpointRouteBuilder. This method saves us the trouble of explicitly defining the routes ourselves. Lastly, we call the Run() method to run the application.

So, to start up the ASP.NET Core RESTful API, run the dotnet run command, which runs the project in the current directory, from within the API directory. Note: If your application consisted of multiple projects, you could specify which project to run by passing a --project option to dotnet run (e.g., dotnet run --project API to run just the API project) without having to change the current directory.

When you run this command, you may come across the following error message: If you do, then follow the directions in the error message. Run the dotnet dev-certs https --clean command to remove all existing ASP.NET Core development certificates, then run dotnet dev-certs https to create a new untrusted development certificate. To trust this certificate, run the command dotnet dev-certs https --trust. Then, re-run the dotnet run command and the error message should no longer pop up. Alternatively, you can remove https://localhost:7101; from applicationUrl in the API/Properties/launchSettings.json file, which stores profiles that tell ASP.NET Core how to run a specific project.
Within a browser, you can visit the Swagger documentation at http://localhost:5077/swagger. Here, you will find that the RESTful API comes with only a single endpoint: GET /WeatherForecast. If you expand the endpoint's accordion item, a summary of the endpoint will appear: This summary provides an example response (status code, value, etc.) for the endpoint. If you test this endpoint by visiting http://localhost:5077/WeatherForecast in the browser, or by sending a GET request to http://localhost:5077/WeatherForecast via a REST client like Postman or a CLI utility like cURL, then you will get a response that contains four weather forecasts.

To see how the RESTful API handles requests to the GET /WeatherForecast endpoint, open the Controllers/WeatherForecastController.cs file. (WeatherForecastController.cs)

If you have developed an Express.js RESTful API, then you should be familiar with the concept of controllers. After all, route callback functions act as controllers. To understand how this file works, let's first take a look at the [ApiController] attribute. This attribute tells ASP.NET Core that the controller class opts in to opinionated, commonly-used API functionality like multipart/form-data request inference and automatic HTTP 400 responses. A route attribute ([Route("[controller]")]) is placed on the controller and coerces all controller actions to use attribute routing. The [controller] token in the route attribute expands to the controller's name, so the controller's base URL path is /{controller_name}, or in this case, /WeatherForecast. This means the URL path /WeatherForecast can match the WeatherForecastController.Get() action method. Since this action method is marked with the HttpGet attribute, only GET requests to /WeatherForecast will run it.
A controller class in an ASP.NET Core RESTful API should derive from the ControllerBase class, which provides the properties and methods needed for processing any HTTP request. This controller contains only one action method, WeatherForecastController.Get(), which returns four weather forecasts. Each weather forecast is created with the WeatherForecast model defined in the API/WeatherForecast.cs file: (WeatherForecast.cs)

This model represents the shape of a weather forecast's data. In the Get() action method, we pass several values to the model: And the model automatically populates each property accordingly. Proceed to the second part of this tutorial series to see how to add your own endpoints to this RESTful API.


I got a job offer, thanks in a big part to your teaching. They sent a test as part of the interview process, and this was a huge help to implement my own Node server.

This has been a really good investment!


Find and RegExp Match - How to Fix Object is possibly 'undefined' and Object is possibly 'null' Errors in TypeScript

Consider the following TypeScript code snippet: Let's assume that the list of cars is fetched from a remote API service. In this TypeScript code snippet, we... However, there are two problems with this code: As a result, TypeScript gives you the warnings Object is possibly 'undefined' for the find() method and Object is possibly 'null' for the subsequently chained match() method. To keep method chaining possible, we need to provide default values that ensure chained methods never get called on undefined and null values. Here's what this solution looks like: This solution may look compact and will save you a few bytes, but at the same time, you sacrifice readability, which can be taxing for less experienced developers who are unfamiliar with some of this syntax. By breaking up the method chain into multiple statements, developers can better understand what exactly is happening in the code, step by step. Additionally, you can add more concrete checks to make the code even more type-safe. Here's what this solution looks like: If you want to learn more advanced techniques with TypeScript and React, then check out our Fullstack React with TypeScript Masterclass:
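Since the snippets themselves are elided above, here is a hedged reconstruction of both solutions — the Car shape, the sample data and the regular expression are assumptions, not the article's actual code:

```typescript
// Illustrative sketch — the data and regex are assumed for demonstration.
interface Car {
  name: string;
}

const cars: Car[] = [
  { name: "Ford Mustang GT" },
  { name: "Tesla Model 3" },
];

// Compact version: optional chaining plus a default value keep the chain
// from ever calling a method on undefined (find) or null (match).
const model =
  cars.find((car) => car.name.startsWith("Tesla"))?.name.match(/Model \d/)?.[0] ?? "";

// More readable version: break the chain into steps with concrete checks.
function findModel(cars: Car[], brand: string): string {
  const car = cars.find((c) => c.name.startsWith(brand));
  if (car === undefined) return "";          // find() found no match
  const match = car.name.match(/Model \d/);  // match() may return null
  if (match === null) return "";
  return match[0];
}
```

Both versions silence the two compiler warnings; the second makes each check visible as its own statement.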


Form-associated custom elements FTW!

Before Shadow DOM, you needed a framework to encapsulate component templates or styling. Shadow DOM was a game-changer because it allows you to code UI components whose logic doesn't clash with other components, using just the web platform. Shadow DOM poses challenges, however, when the elements it encapsulates need to participate in a form. In this post, I'll provide an overview of form-associated custom elements: a web specification that allows engineers to code custom form controls that report value and validity to HTMLFormElement, while also promoting a fully accessible user experience.

With the encapsulation provided by Shadow DOM, engineers can code UI components whose CSS styling doesn't collide with other components. Shadow DOM provides a DOM tree for an element separated from the rest of the Document Object Model (DOM). The separation of concerns promoted by Shadow DOM is a boon for coding reusable UI components. While Shadow DOM has several benefits, there are some complications when elements embedded in Shadow DOM have to interact with HTMLFormElement.

Suppose you wanted to code a custom checkbox component using Shadow DOM. Checkboxes usually require a significant amount of CSS styling that overrides the browser defaults to match a given mockup. You code an autonomous custom element and style the HTMLInputElement with type="checkbox" in the context of Shadow DOM so the styling doesn't conflict with other elements. You give the component a tag name of my-checkbox. Just when you think you're following best practices, you place an instance of the custom element as a child of HTMLFormElement. Upon inspection in Dev Tools, you may notice the HTMLInputElement cannot participate in the form. You can inspect this phenomenon in this CodeSandbox. HTMLInputElement by design can report value and validity back to HTMLFormElement, but only when it is a direct descendant of HTMLFormElement.
When coding reusable components, it's a good idea to provide web engineers with a familiar interface. It's typical for an HTMLInputElement that is a direct descendant of HTMLFormElement to have access to the parent form directly on the element. You can inspect this behavior in the following CodeSandbox. Since the HTMLInputElement is found in an entirely different DOM tree (Shadow DOM), the HTMLFormElement doesn't recognize it. In short, an HTMLInputElement embedded in Shadow DOM can't participate in the form.

In 2019, a new specification was proposed that solves this issue. Form-associated custom elements allow web engineers to use the benefits of Shadow DOM while providing an API that enables custom elements to participate in HTMLFormElement. Form-associated custom elements have all the benefits of autonomous custom elements: they can implement Shadow DOM and use the typical custom element lifecycle hooks because they inherit from HTMLElement. If you've coded autonomous custom elements, learning how to code form-associated custom elements is fairly similar. In the following examples, I'll demonstrate how a checkbox embedded in Shadow DOM can participate in an HTML form by implementing formAssociated and ElementInternals.

Earlier, I mentioned that form controls native to the browser, like HTMLInputElement, automatically participate in HTMLFormElement. The form control is added to an Array-like interface on HTMLFormElement, allowing web engineers to loop through the form controls to handle common tasks like validation. For the Checkbox component to participate in HTMLFormElement the same way, you simply need to set the value of a static property named formAssociated to true. The below example, written in TypeScript, does just that. If you wish to follow along, fork this CodeSandbox and start coding.
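As a minimal sketch of that opt-in (the tag name and shadow template are assumptions here, since the CodeSandbox code isn't reproduced in this post):

```typescript
// Sketch of a form-associated checkbox custom element.
class Checkbox extends HTMLElement {
  // Opting in: the browser now treats this element as a form control,
  // adding it to HTMLFormElement.elements like a native input.
  static formAssociated = true;

  constructor() {
    super();
    // The styled input lives in Shadow DOM, so its CSS can't collide
    // with other components.
    const shadowRoot = this.attachShadow({ mode: "open" });
    shadowRoot.innerHTML = `<input type="checkbox" />`;
  }
}

customElements.define("my-checkbox", Checkbox);
```

With static formAssociated = true in place, an instance placed inside a form is recognized by HTMLFormElement even though its input is hidden behind a shadow root.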
Inversely, if you wish to reference the HTMLFormElement on instances of Checkbox, similar to how HTMLInputElement behaves when it's a direct descendant of HTMLFormElement, you can call a method inherited from HTMLElement named attachInternals, which provides the same interface, along with the Accessibility Object Model (AOM). By setting a property on Checkbox named _internals to what's returned by attachInternals, you effectively add the ElementInternals interface to Checkbox. Later in this post, I'll provide an example of how you can reference a method on the ElementInternals interface that aids with validation.

Before that, we should resolve some discrepancies between Checkbox and a typical HTMLInputElement. If we expect engineers to reuse this component, it should behave similarly to HTMLInputElement, which has a well-known interface. To provide parity between HTMLInputElement and Checkbox, let's define some getters and setters on Checkbox. First, make a getter that returns a reference to the HTMLInputElement so you can easily reference the element with this.checkbox throughout the logic of the component. Next, define a getter and setter for the state of the checkbox. It's probably a good idea to make the HTMLInputElement the single source of truth here: any getter and setter defined on Checkbox either returns or sets the value of checked on this.checkbox. We could introduce several more properties on Checkbox to provide parity between it and a typical HTMLInputElement, but we'll stop there for now.

While coding a UI library filled with form-associated custom elements, I found a couple of challenges in making the components reusable. Suppose you wanted to add validation logic to Checkbox and give the class another method called onValidate, including all the validation logic there. In the below example, I call setValidity on the ElementInternals interface, which reports the validity of the input to HTMLFormElement.
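Putting those pieces together, a hedged sketch might look like the following — the onValidate logic and the validation message are assumptions for illustration, not the book's actual implementation:

```typescript
// Sketch: attachInternals plus the parity getters/setters described above.
class Checkbox extends HTMLElement {
  static formAssociated = true;
  private _internals: ElementInternals;

  constructor() {
    super();
    // ElementInternals exposes the parent form, validity reporting,
    // and the Accessibility Object Model (AOM).
    this._internals = this.attachInternals();
    this.attachShadow({ mode: "open" }).innerHTML = `<input type="checkbox" />`;
  }

  // Reference the inner input with this.checkbox throughout the component.
  get checkbox(): HTMLInputElement {
    return this.shadowRoot!.querySelector("input")!;
  }

  // The inner HTMLInputElement stays the single source of truth.
  get checked(): boolean {
    return this.checkbox.checked;
  }

  set checked(value: boolean) {
    this.checkbox.checked = value;
  }

  // Hypothetical validation hook: report validity to HTMLFormElement.
  onValidate(): void {
    if (!this.checked) {
      this._internals.setValidity(
        { valueMissing: true },
        "This field is required",
        this.checkbox
      );
    } else {
      this._internals.setValidity({}); // empty flags mark the control valid
    }
  }
}
```

The setValidity call is what lets the surrounding form treat this custom element exactly like a native invalid or valid control.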
This is convenient; however, placing the logic here doesn't give a web engineer the ability to configure validations per business logic in different scenarios. A higher-level validation pattern is required that would allow engineers to loop through form controls and validate an entire form. Another challenge had to do with making inline validation messages accessible. Getting screen readers to interpret validation messages as errors that should be read aloud seems tricky at first because of Shadow DOM, although it is possible using WAI-ARIA attributes. Suppose this were the template instead of just the input. If the form control is invalid, custom logic could populate the <div class="message"> with relevant content. The WAI-ARIA attributes provide an immediate response for screen readers.

Did you like what you read here? In the book Fullstack Web Components, you'll code a form-associated custom element, bringing in all the features necessary to reuse that component in an enterprise-grade web application. You'll not just learn how to provide parity between the form control and well-known elements like HTMLInputElement, but also discover a pattern for implementing reusable validations that validate an entire form. You'll also tackle challenges with making form-associated custom elements accessible. This is just in Chapter 3! Fullstack Web Components provides everything you need to know to code an entire UI library of custom elements.

Are you looking to code Web Components now, but don't know where to get started? I wrote a book titled Fullstack Web Components: Complete Guide to Building UI Libraries with Web Components, a hands-on guide to coding UI libraries and web applications with custom elements. In Fullstack Web Components, you'll...


Build Your Own JavaScript Micro-Library Using Web Components: Part 4 of 4

In this capstone tutorial, we're going to use the micro-library in app code so you can see how it makes things easier for developers in real-world development. In the previous steps of this 4-part tutorial, this is what we accomplished: In this final tutorial, we will refactor an example component to use the @Component decorator and the attachShadow function from our micro-library.

We're refactoring a file, packages/component/src/card/Card.ts, which contains the CardComponent class, a regular Web Components custom element. To get it to use our micro-library, we first import Component and attachShadow from the micro-library. Next, we add the Component decorator to CardComponent. We remove the line at the bottom of the file that registers the component, noting the tag name in-card: remove customElements.define('in-card', CardComponent);. This registration is now automated by our micro-library. We set the selector property on the ElementMeta passed into Component to in-card, the same string originally used to register the component. Next, we move the content of the style tag in the constructor to the new style property on ElementMeta, and do the same for the template of CardComponent, migrating the HTML to the new template property until the ElementMeta is filled in. Finally, we remove everything in the constructor and replace it with a call to our micro-library's attachShadow function, passing this as the first argument. This automates Shadow DOM setup.

To make sure everything is working properly, this is where we start up the development server and observe the changes in the browser. Nothing should have changed about the user interface; everything should appear the same. Our CardComponent has now been successfully refactored to use the micro-library's utilities, eliminating boilerplate and making the actual component code easier to reason about.
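The refactoring steps above can be sketched roughly as follows — the micro-library's import path, the exact ElementMeta fields beyond selector/style/template, and the card markup are assumptions, since the real Card.ts isn't reproduced here:

```typescript
// Hypothetical sketch of the refactored Card.ts.
// "component-library" is a placeholder for the micro-library's actual package.
import { attachShadow, Component } from "component-library";

@Component({
  // Same string originally passed to customElements.define — the
  // decorator now handles registration for us.
  selector: "in-card",
  // Styles moved out of the constructor's <style> tag.
  style: `:host { display: block; }`,
  // Template HTML moved out of the constructor.
  template: `<slot name="header"></slot><slot name="content"></slot>`,
})
export class CardComponent extends HTMLElement {
  constructor() {
    super();
    // One call replaces all the manual Shadow DOM setup.
    attachShadow(this);
  }
}

// No longer needed — automated by the @Component decorator:
// customElements.define('in-card', CardComponent);
```

The component class shrinks to its unique behavior, with registration, styling, and Shadow DOM wiring delegated to the micro-library.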
That completes this 4-part tutorial series on building a micro-library for developing with Web Components. Our micro-library supports autonomous and form-associated custom elements. It enables developers to automate custom element setup as well as Shadow DOM setup, so they can focus on the unique functionality of their components. In the long run, these efficiencies add up to a lot of saved time and cognitive effort. If you want to dive deeper into building long-lived web apps that use Web Components and avoid lock-in to specific JavaScript frameworks, check out Fullstack Web Components: Complete Guide to Building UI Libraries with Web Components.
