Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Figma And Figmagic For React: Your First Workflow

In this post, we'll keep it short and sweet and make a sort of "hello world" implementation. This should give you a good sense of how things work when starting from scratch. We already know the basics of design systems, design tokens, and Figmagic, so let's put them to work.

Start Figma. Create a new design file by clicking New in the upper right corner and selecting Design file. In the left panel, click Page 1 and rename it to Design tokens. Figmagic picks up on a certain set of given page names, so it's vital that your naming is in line with what Figmagic expects. In that regard, Figmagic does not do any kind of intelligent inferring of data from a document other than matching its processing with named pages.

On the blank new page, we need to create a frame. Press F and drag one out, then rename the frame Colors. On the design tokens page, the same principle goes for naming: Figmagic assumes that frames use a set of correct names that specify what type of entity they contain. I recommend a high degree of hygiene when creating pages for design system usage: while Figmagic is a tool that does not care about your internal layout, your teammates very surely will if it's all in shambles.

Into the frame, add a few rectangles; I'll go with three. Change the fill of each to a unique color. I'm thinking red, green, and blue to make very distinct colors. Then, rename their layers to their colors (Red, Green, and Blue in my case). Note that the names you give the layers will be what these colors are called in the token files that will be produced shortly.

In this lesson, we will use our component library codebase, which includes Figmagic. Clone or download it, install dependencies with npm install, and then navigate into it. To run successfully, Figmagic needs a Figma URL (or document/file ID) and a Figma API token.

Open up Figma. Click the Figma icon in the upper left corner, then Help and account > Account settings. Scroll almost all the way down until you see Personal access tokens. Add a token description and press enter/return. You'll be presented with a token in a format like 83715-e8346292-bf3v-88s1-n932-j364ge9h687e. Copy it. Keep this token a secret, and don't check it into your code! If you do happen to leak it, just revoke the old one and get a new token. To get the file ID, right-click the heading of the file and click Copy Link, paste the link in a text editor, and copy the bit immediately after the /file/ section.

The best way to handle these "secrets" is by placing them into an environment file, which, by convention, is called .env. Create it at the root of the project and place the ID and token values into it, as shown in the sketch after this section. Figmagic will pick up that these values are present; no further configuration is technically needed. Then run Figmagic with the command shown below. Please refer to Figmagic's documentation to get a deeper understanding of what the default values are.

There's going to be a bit of feedback on-screen. When Figmagic is done pulling the data from your document and processing it, you should see a tokens folder at the root of your project. Inspect tokens/colors.ts: TypeScript output is the default for tokens, but we can change this with either configuration or in the actual command. Try the JavaScript output and, voilà, it's a plain old JavaScript file, perfect for our current tooling!
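For reference, the .env contents and commands mentioned above would look roughly like this minimal sketch. The FIGMA_URL and FIGMA_TOKEN variable names follow Figmagic's documentation, and the output-format flag is an assumption based on Figmagic's outputFormatTokens configuration key; verify both against the current docs:

```
# .env - keep this file secret and out of version control
FIGMA_URL=your-figma-file-id
FIGMA_TOKEN=your-figma-api-token
```

```sh
# Pull design tokens with the defaults (TypeScript output):
npx figmagic

# Re-run with plain JavaScript token output
# (flag assumed from the outputFormatTokens config key; check figmagic --help):
npx figmagic --outputFormatTokens js
```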
You'll perhaps agree that using Figmagic is—if not magic—then at least not very hard. It will, however, require a structured approach to how we work inside Figma, something we will look at more in my next post. We're working on a new course, The newline Guide to React Component Design Systems with Figmagic, where we go deep into a design-driven workflow giving you all the pieces—from code to know-how—to implement a design system and make it operational for web-based development. If you're looking to improve your whole team's way of working with releasing and designing continuously with a shared basis in a design system, this course puts together the entire picture, from theory, to process, to practical setup.


Optimistic UIs with React, Apollo Client and TypeScript (Part I) - Project Overview

Liking a tweet on Twitter. Marking an e-mail as read in your Gmail inbox. These types of simple, low-stake actions seem to happen so quickly that you can perform one action after another without having to wait for the previous one to finish resolving. As the defining trait of optimistic UIs, these actions give the feeling of a highly responsive and instant UI. Psychologically speaking, they trick the user into thinking that an action has completed even though the network request it sends to the server has not been fully processed.

Take, for example, the like button of a tweet. You can scroll through an entire feed and like every single tweet with zero delay between successive tweets. To observe this, open up a Twitter feed and your browser's developer console. Within the developer console, switch to the network tab and select the "Slow 3G" option under the throttling dropdown to simulate slow 3G network speeds. Slowing down network speeds lets us see the UI updates happen before the server returns a response for the action. Then, filter for network requests sent to a GraphQL API endpoint containing the text "FavoriteTweet" in the request URL, which tells the server to mark the tweet as liked by the current user.

When you click on a tweet's like button, the heart icon disappears, the like count increments by one, and the text color changes to pink, despite the network request still being pending. While the server handles this request, the updates to the UI give the illusion that the server already finished processing the request and returned a successful response. Liking multiple tweets, one after the other, immediately increments the like count of each tweet on the UI even if the server is busy working on previous requests. The user gets to like as many tweets as they want without waiting on any responses from the server. Upon receiving a response back from the server, the heart icon of the like button fades back in with an animation. A normal implementation of the like button waits for the server; Twitter's implementation does not (see the sketch after this section). Note: Twitter's UI never disables the like button. In fact, you can click on the like button as many times as you like; the UI will be updated accordingly, and the network requests for every click get sent to the server.

By building UIs in this manner, the application's performance depends less on factors like the server's status/availability and the user's network connectivity. Since humans, on average, have a reaction time of 200 to 300 milliseconds, being delayed for this amount of time (or more) between actions due to server response times can cause not only a frustrating user experience but also hurt the brand's image. Being known for having a slow, unreliable, unresponsive UI makes users less likely to enjoy and engage with it. As long as the user perceives actions as instant and working seamlessly, they won't ever question the application's performance.

The key to adopting optimistic UI patterns is understanding the meaning of the word "optimistic": being hopeful and confident that something good will occur in the future. In the context of optimistic UIs, we should be confident that for a given user action, the server returns a successful response in at least 99% of all cases and an error in less than 1% of all cases.
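To make the contrast concrete, here is a hedged sketch of the two variants in React. This is not Twitter's actual code; the /like endpoint and component names are hypothetical:

```tsx
import { useState } from "react";

// "Normal" (pessimistic) version: the button locks up until the server responds.
function LikeButtonPessimistic({ tweetId, initialCount }: { tweetId: string; initialCount: number }) {
  const [count, setCount] = useState(initialCount);
  const [pending, setPending] = useState(false);

  const handleClick = async () => {
    setPending(true);
    await fetch(`/api/tweets/${tweetId}/like`, { method: "POST" }); // hypothetical endpoint
    setCount((c) => c + 1); // the UI updates only after the response arrives
    setPending(false);
  };

  return <button disabled={pending} onClick={handleClick}>♥ {count}</button>;
}

// Optimistic version: the UI updates immediately; the request settles in the background.
function LikeButtonOptimistic({ tweetId, initialCount }: { tweetId: string; initialCount: number }) {
  const [count, setCount] = useState(initialCount);

  const handleClick = () => {
    setCount((c) => c + 1); // assume success right away
    fetch(`/api/tweets/${tweetId}/like`, { method: "POST" }).catch(() => {
      setCount((c) => c - 1); // roll back on the rare failure
    });
  };

  return <button onClick={handleClick}>♥ {count}</button>;
}
```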
In most situations, low-stake actions tend to be ideal candidates when deciding where to apply optimistic UI patterns. To determine whether an action is low-stake, ask yourself whether the action succeeds in the overwhelming majority of cases and whether a rare failure would be trivial for the user to notice and recover from. If the answer to these questions is yes, then the action is a low-stake action, and thus can update the UI optimistically with more benefits to the user experience than drawbacks. Twitter's like button passes this test easily.

On the other hand, you should not consider optimistic UI patterns for high-stake actions, especially those involving very important transactions. For example, could you imagine a bank site's UI showing you that your check was successfully deposited, and then discovering days later, when you have to pay a bill due the next day, that it was not deposited because the server happened to be experiencing a brief outage at the time? Think about how angry you would be at the bank and how this might sour your perception of it.

Integrating optimistic UI updates into an application comes with challenges, like managing local state such that the results of an action can be simulated and reverted. However, applications built with React and Apollo Client have the necessary tools, features, and APIs for easily creating and maintaining optimistic UIs. Below, I'm going to show you how to recreate a well-known optimistic UI found in a popular iOS app, Messages, with React and Apollo Client. When a user sends a message, the message appears to have been sent successfully even if the server has not yet finished processing the request. Once the server returns a successful response, there are no changes made to the UI except for a "Delivered" status text shown beneath the most recently sent message.

To get started, scaffold a new React application with the Create React App and TypeScript boilerplate template. For this project, we will be building a "public chatroom" that lets you choose which user to send messages as. Upon picking a user, the application displays the messages from the perspective of the selected user, and you can send messages as this user.

Next, clone (or fork) the following GraphQL API server running Apollo Server: https://codesandbox.io/embed/apollo-server-public-chat-room-for-optimistic-ui-example-srb5q?fontsize=14&hidenavigation=1&theme=dark

This server defines a GraphQL schema for a basic chat application with two object types: User and Message. It comes with a query type (for fetching users and messages) and a mutation type (for adding a new message to the existing list of messages). Initially, this server is seeded with two users and twenty messages. Each resolver populates a single field with this seeded data, which is stored in memory. Note: This server does not support real-time communication since that's outside the scope of this tutorial. You can add that functionality with GraphQL subscriptions.

Within the newly created React application, let's install the dependencies we need, including @apollo/client and graphql. Since the application will be styled with Tailwind CSS, let's set it up next. Within the tailwind.config.js file, add the paths glob pattern ./src/**/*.{js,jsx,ts,tsx} to tell Tailwind which files contain React components. Since the UI features an input field, we should also add the @tailwindcss/forms plugin with the strategy option set to class to leverage Tailwind CSS form component styles via CSS classes. (tailwind.config.js; see the sketch below.)
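Based on the options just described, the resulting tailwind.config.js might look like this sketch:

```js
// tailwind.config.js
module.exports = {
  // Tell Tailwind which files contain React components to scan for class names
  content: ['./src/**/*.{js,jsx,ts,tsx}'],
  theme: {
    extend: {},
  },
  plugins: [
    // Form styles become opt-in CSS classes (e.g., form-input) with the class strategy
    require('@tailwindcss/forms')({ strategy: 'class' }),
  ],
};
```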
Delete the src/App.css file and remove all of the default CSS rules in the src/index.css file. Within this file, add the standard @tailwind directives. (src/index.css) Then add several empty directories to the src directory: components, context, and types.

To initialize an ApolloClient instance, import Apollo Client and pass it a configuration object with two options: the GraphQL server's uri and a cache (an instance of InMemoryCache). To make the Apollo Client instance available throughout the entire React application, wrap the <App /> component within the provider component <ApolloProvider>, which uses React's Context API. A sketch of what the src/index.tsx file should look like follows this section. (index.tsx)

The application contains two child components: <UsersList /> and <MessagesClient />. Since both components must know who the current user is, and the <UsersList /> component sets the current user, let's define a React context AppContext to make the current user globally available to the application's component tree. Within the src/context directory, add an index.ts file. Then, define the React context AppContext. Its value should contain a reference to the current user (currentUser) and a method for setting the current user (changeCurrentUser). (src/context/index.ts)

Although we initialize the value of AppContext to an empty object, we will later set this context's value in the <App /> component, where we will pass its actual value via its provider component's value prop. The AppContextInterface interface enforces the types allowed for each method and value specified in the context's value. You may notice a User type that is imported from a src/types/index.ts file. Within the src/types directory, add an index.ts file and, based on the GraphQL schema, define a User interface. (src/types/index.ts)

Within the src/App.tsx file, import AppContext and wrap the child components and elements of the <App /> component with the AppContext.Provider provider component. Inside the <App /> component's body, we define a state variable currentUser, which references the currently selected user, and a method changeCurrentUser, which calls the setCurrentUser update function to set the current user. Both currentUser and changeCurrentUser get passed to the AppContext.Provider provider component's value prop. These values satisfy the AppContextInterface interface. (src/App.tsx)

The <UsersList /> component fetches a list of users from the GraphQL API server, whereas the <MessagesClient /> component fetches a list of messages. To fetch data from a GraphQL API server with Apollo Client, use the useQuery Hook. This Hook executes a GraphQL query operation. It accepts two arguments: a GraphQL query string (wrapped in the gql template literal tag) and an options object. It returns a result object with many properties, the most commonly used being loading, error, and data. These properties represent the state of the query and change during its execution, and they can be destructured from the result object and referenced within the function body of the component. For more properties, visit the official Apollo documentation here.

Once it successfully fetches data from the GraphQL API server, Apollo Client automatically caches this data locally within the cache specified during its initialization (i.e., an instance of InMemoryCache). Using a cache expedites future executions of the same queries: if Apollo Client later executes the same query, it can get the data directly from the cache rather than having to send (and wait on) a network request.
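Pulling the initialization steps above together, src/index.tsx might look like this sketch. The endpoint URI is a placeholder (point it at your clone of the sandbox server), and the pre-React-18 ReactDOM.render call matches the Create React App template of this tutorial's era:

```tsx
// src/index.tsx
import React from 'react';
import ReactDOM from 'react-dom';
import { ApolloClient, ApolloProvider, InMemoryCache } from '@apollo/client';
import App from './App';
import './index.css';

// The two options: the GraphQL server's URI and a cache instance.
const client = new ApolloClient({
  uri: 'https://your-chat-server.example/graphql', // placeholder: your sandbox server's URL
  cache: new InMemoryCache(),
});

ReactDOM.render(
  <React.StrictMode>
    {/* Makes the client available to every component in the tree */}
    <ApolloProvider client={client}>
      <App />
    </ApolloProvider>
  </React.StrictMode>,
  document.getElementById('root')
);
```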
Within the src/components/UsersList.tsx file, define the <UsersList /> component, which fetches and renders the list of users (see the sketch at the end of this section). (src/components/UsersList.tsx) Once the data has been successfully fetched, the component renders a list of users who are members of the "public chatroom." When you click on one of the users, you select them as the current user, and a check mark icon appears next to the user's name to indicate that they have been selected.

Since the query returns a list of users, the UsersQueryData interface contains a users property that should be set to a list of User items. (src/types/index.ts) Note: It should match what's specified by the GraphQL query string that's passed to the useQuery Hook.

To refresh the cached data with the latest, up-to-date data from the GraphQL API server, you can call the query's refetch function or configure polling with the pollInterval option. To know when Apollo Client is refetching (or polling) the data, destructure the networkStatus value from the result object and check if it equals NetworkStatus.refetch, which indicates an in-flight refetch, or NetworkStatus.poll, which indicates an in-flight poll. Note: The notifyOnNetworkStatusChange networking option tells Apollo Client to re-render the component whenever the network status changes (e.g., when a query is in progress or encounters an error). For a full list of network statuses you can check for, click here.

Like the <UsersList /> component, the <MessagesClient /> component also fetches data (in this case, a list of messages) by calling the useQuery Hook. When rendering the messages, the current user's messages are aligned to the right side of the messages client. These messages have a blue background with white text. All other messages are aligned to the left side of the messages client. By adding the sender's initials and name to each of these messages, we can tell who sent which message. (src/components/MessagesClient.tsx)

All that's missing from the messages client, UI-wise, is an input field for sending messages. Below the messages, add a form with an input field and a send button. (src/components/MessagesClient.tsx)

If you find yourself stuck at any point while working through this tutorial, feel free to visit the part-1 branch of this GitHub repository for the code. Thus far, we learned how companies like Twitter adopt optimistic UI patterns to deliver faster, snappier user experiences. We set up the project with Apollo Client, Tailwind CSS, and TypeScript, and we built a UI that queries data from a GraphQL API server. Continue on to the second part of this tutorial, in which we implement the remaining functionality. Specifically, we will dive into the useMutation Hook and learn how to manipulate data within the Apollo Client cache to update the UI optimistically.
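An abbreviated sketch of the <UsersList /> data-fetching pattern described above. The queried fields (id, name) are assumptions about the sandbox schema, and the markup is reduced to a plain list:

```tsx
// src/components/UsersList.tsx (abbreviated)
import { gql, useQuery, NetworkStatus } from '@apollo/client';
import { User } from '../types';

const GET_USERS = gql`
  query GetUsers {
    users {
      id
      name
    }
  }
`;

// Should mirror the shape requested in the query string above
interface UsersQueryData {
  users: User[];
}

export function UsersList() {
  const { loading, error, data, networkStatus } = useQuery<UsersQueryData>(GET_USERS, {
    notifyOnNetworkStatusChange: true, // re-render on network status changes
  });

  if (networkStatus === NetworkStatus.refetch) return <p>Refetching...</p>;
  if (loading) return <p>Loading...</p>;
  if (error) return <p>Error: {error.message}</p>;

  return (
    <ul>
      {data?.users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}
```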



Work Effectively With Figmagic - File Organization

Working effectively with Figmagic means understanding how and what it actually parses from a document. In this post, we will demystify exactly how a Figmagic-compliant Figma file needs to look for it to work as intended.

On a high level, there are three page names that Figmagic looks for: Design tokens, Graphics, and Elements. These correspond to what type of thing Figmagic should process them as. You can certainly have any number of pages with other names within the same document; however, Figmagic itself will only process pages with the names I just listed. You do not need all of those pages—use the ones you need!

Let's recap how we correctly outline our design tokens. Rename the current page to Design tokens. Next, create a frame, and into it, a rectangle. Rename the frame Colors because we will use this frame to contain our colors. Give the rectangle a solid color fill, and then rename the rectangle to the name of your color. What we have done is create the basic structure required for Figmagic to create design tokens from our color swatches.

Colors are a typical first use case and are easily demonstrable, but there are many more types of tokens we can use. At the time of making this course, they include, among others, colors, typography values (font families, font sizes, font weights, letter spacings), radii, border widths, spacing, media queries, Z indices, and animation values (durations and easing functions). The basic idea is the same for all of these, and they follow the pattern we have used a couple of times now for the colors: a correctly named frame containing named layers whose values become tokens. Go ahead and open up the Figmagic Design System template so that we can look at some other examples.

The approach used in Figmagic is to express values as "uni-dimensional," which in common parlance just means that every item (or design token) expresses only a single detail. The effect of this is that we get a very granular design system, but collecting tokens or details together—for example, assembling a more complex design—is something that we have to do in code from the individual tokens. The opposite approach could be something like a text string that has advanced formatting, with a custom font, some particular size, some font variant, some color, some special kerning setting... and then we would need to either use all of those details together, or we somehow still need to disassemble each detail from this set of details.

Some of the token types follow a very straightforward format. For example, Radii will pick up on the assigned Corner radius value. Here, we use 0, 4, 8, and 100 (for a full circle). Border widths is the same story: the stroke width is what gets picked up, here 1, 2, 4, or 8 pixels. For the animation-specific ones, it's a bit different. Instead, these directly specify a value, as you can see with 0.15s or the cubic bezier functions. Since these values cannot be represented in another way, they are somewhat unique in the overall scope of how Figmagic works. Z indices also follow that pattern. Spacing and Media queries are specified by using rectangles that express the width of the token value. The exact height of these objects does not affect anything.

Then there are the typography tokens. This is where we need to remember and understand the uni-dimensional tokens. To chisel out our typography, we need to do it methodically. In the template, we can see Font Weights that correspond to all the allowed weights. In this case, you'll perhaps notice that the Font Families are copies, but that's just how it happens to be in this project. Overall, all of the font token frames specify their own respective aspect: weight, family, size, letter spacings...
In practice, then, to get them to work together in a usable way, my recommendation is to add a dedicated Fonts frame and create Figma styles from those, so that you have a reusable font style as you design within Figma. That frame will not be caught by Figmagic.

Jump over to the Graphics page. Notice how graphics do not need frames; instead, they need to be packed into Figma components. Other than that, there's not much to say about graphics. You make them with the provided vector tools, bundle them into a component (with CMD + OPT + K), and then tell Figmagic—through CLI or configuration—that you also want graphics to be handled. More on that later.

Last but not least, open the Elements page. Properly generating elements requires close adherence to how Figmagic parses groups and layers and their names. This complex topic deserves deeper attention, but I'll give you a brief lightning tour right here! The red lines (courtesy of the Redlines plugin) are not required; I am using them here as a visual aid and developer guidance.

An "element" is the Figmagic term for what could be called components. The rationale for calling them elements is that Figmagic elements should correspond closely to HTML elements, thus being relatively basic and not deeply nested. They should thereby also correspond to HTML primitives that already exist, like button and input. You'll see that the elements are divided into two categories: flat and nested. Flat elements, like the Select, have a shallow model: in this case, it only has a Normal state with its text and layout. Nested elements, like the Button, have one additional level of depth, here using the CSS pseudo-selector syntax to enrich the Normal state with a :disabled level. It also uses more variants (Warning and Error).

Like graphics, these elements are not generated by default, so they need to be activated in configuration or in the CLI. Generated elements do not strictly, technically, need to correspond to anything that's set up in the tokens. You would, however, get a warning stating that value so-and-so was hard-coded since it could not be inferred from the tokens. In reality, you'd of course want to design these elements from the atomic design tokens as far as possible. Figmagic won't be your blocker, though, if you need to go out of bounds for a bit!

By now, you should be able to create a Figma document that can be used together with Figmagic. You've also seen all three types of objects you can generate with it and how you need to approach each one of them. Figmagic makes Figma, an already amazing tool, even better for design-oriented development teams. Use it for simple tokens, or operationalize fully across tokens, graphics output, and component generation: it's in your hands.

We're working on a new course, The newline Guide to React Component Design Systems with Figmagic, where we go deep into a design-driven workflow giving you all the pieces—from code to know-how—to implement a design system and make it operational for web-based development. If you're looking to improve your whole team's way of working with releasing and designing continuously with a shared basis in a design system, this course puts together the entire picture, from theory, to process, to practical setup.
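As a footnote to the activation step mentioned earlier: a guessed sketch of a .figmagicrc configuration that turns on graphics and element syncing. The key names follow Figmagic's configuration documentation as I recall it; verify them before relying on this:

```json
{
  "syncGraphics": true,
  "syncElements": true
}
```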


Design Tokens and Why Design Systems Need Them

One magic, simple concept (design tokens) and a one-stop shop (Figmagic) to contain our design make one hell of a powerhouse. Let's learn what design tokens are and how you can work with a "structured design" approach using Figmagic, a command-line interface tool that extends what we can do with Figma. Figmagic lets us do three things: pull design tokens, extract graphics, and generate React components. In this post, we will discuss the core concept: design tokens.

Tokens offer a form of "contract" between a designer's intent and its fulfillment by developers. This means that both sides agree to treat the individual parts of a complete design through the tokens that represent those values. As a format, they are super easy to read, understand, and adapt for consumption by many types of systems or applications. That's very important as you start doing cross-platform apps and anything similarly complex. Tokens ensure that values are not magic numbers or "just picked at random." This makes communication precise and effortless. Creating actual code for components, even complex ones, also becomes a lot less of a bore, since what you are doing is just pointing things like padding, Z indices, and anything else to their token representations.

For instance, suppose we need to add margins to a box, and the only spacing token available is a single small spacing (see the sketch after this section). Then small is the one spacing you can use for your margins, paddings, or other sizes. For sure, you sometimes need to hardcode new unique values, but that should happen in a very small number of instances. I call this approach "structured design"—no need to make it a big concept—but it's an approach in which we leave as little as possible to chance. As far as possible, we use the tokens instead of hard-coding any values. With only a single spacing token, however, it seems reasonable to add some more to cater to more dynamic, realistic needs, so let's evolve the model a tiny bit by adding a second size. We then have two possibilities, and both are perfectly valid choices. This evolution process should be done in collaboration between the designers and developers.

So, where do these tokens come from? A token has to be communicated and stored somewhere and then exported to a useful format—like our example JSON file. The exported token files themselves do not, therefore, usually constitute "the truth" as such but act only as a kind of transport artifact for the truth: Figma. The implication is that they only fill the role of moving a value from one system (Figma) to a variety of others (anything that accepts JSON, for example). The designer, in turn, needs to ensure correspondence between Figma styles and design tokens on their end. The rest of the tooling just happens to be things I feel are very good and work well together.

It's really smart to work in a structured way by assembling design systems from design tokens. Using tokens, we can shape simple elements, which can then be ordered into components, and then into compositions and views. Design tokens fairly directly tend to create a form of taxonomy of what things exist and in which relations. So, a color named "green" is a very flat and direct way of saying what it is, but it won't say anything about where it is used. That information can be handled in other ways—though you can also use aliases to set a value like "orange" to an auxiliary name like "color-accent," thus naming by purpose (see the sketch below). What makes sense for you to use is often use-case-dependent.
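As an illustration of the discussion above, here is a hedged sketch of what such token JSON might look like; names and values are made up for the example. First the single-spacing model:

```json
{
  "spacing": {
    "small": "1rem"
  }
}
```

And then the evolved model with a second spacing, plus a purpose-named alias for a color:

```json
{
  "spacing": {
    "small": "1rem",
    "medium": "2rem"
  },
  "color": {
    "orange": "#f97316",
    "colorAccent": "#f97316"
  }
}
```

The alias points at the same value as orange, so code can reference the purpose (colorAccent) while designers keep thinking in named colors.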
Given the right tooling, this can also provide the side effect of making updates to token values magically update the actual code. Bigger updates, like changing from a token X to a token Y, will still require a code change, but the effort is practically negligible. By their nature, design tokens enforce the standards you have set up.

Some advice for the road as you start using design tokens in actual projects: using design tokens as part of your workflow will most likely make the design easier to communicate, faster to code, and less susceptible to drift between UI expectations and how the design is actually implemented.

We're working on a new course, The newline Guide to React Component Design Systems with Figmagic, where we go deep into a design-driven workflow giving you all the pieces—from code to know-how—to implement a design system and make it operational for web-based development. If you're looking to improve your whole team's way of working with releasing and designing continuously with a shared basis in a design system, this course puts together the entire picture, from theory, to process, to practical setup.


Static Site Generation with Next.js and TypeScript - Project Overview

Many of today's most popular web applications, such as Gmail and Netflix, are single-page applications (SPAs). Single-page applications deliver highly engaging and exceptional user experiences by dynamically rendering content without fully reloading whole pages. However, because single-page applications generate content via client-side rendering, the content might not be completely rendered by the time a search engine (or bot) finishes crawling and indexing the page. When it reaches your application, a search engine will read the empty HTML shell (e.g., HTML containing just a <div id="root" /> in React) that most single-page applications start off with.

For a smaller client-side rendered application with fewer and smaller assets and data requirements, the content might be rendered just in time for a search engine to crawl and index it. On the other hand, a larger client-side rendered application with many large assets and data requirements needs a lot more time to download (and parse) all of these assets and fetch data from multiple API endpoints before rendering the content into the HTML shell. By then, the search engine might have already processed the page, regardless of the content's rendering status, and moved on to the next page.

For sites that depend on being ranked at the top of a search engine's search results, such as news/media/blogging sites, the performance penalties and slower first contentful paint of client-side rendering may lower a site's ranking, which results in less traffic and business. Such sites should not client-side render entire pages worth of content, especially when the content changes infrequently (e.g., only for corrections or redactions) or never changes. Instead, these sites should serve content that has already been pre-generated as plain HTML.

A common strategy for pre-generating content is static site generation. This strategy involves generating the content in advance (at build time) so that it is part of the initial HTML document sent back to the user's browser when the user first lands on the site. By exporting the application to static HTML, the content is created just once and reused on every request to the page. With the content made readily available in static HTML files, the client has much less work to perform. Like other static assets, these files can be cached and served by a CDN for quicker loading times. Once the browser loads the page, the content gets hydrated and maintains the same level of interactivity as if it were client-side rendered.

Unlike Create React App, popular React frameworks like Gatsby and Next.js have first-class, built-in static site generation support for React applications. With the recent release of Next.js v12, Next.js applications build much faster thanks to the new Rust compiler (17x faster than Babel). Not only that, Next.js now lets you run code on incoming requests via middleware, and its APIs are compatible with React v18.

In this multi-part tutorial, I'm going to show you how to statically generate a site with Next.js and TypeScript. We will be building a simple, statically generated application that uses the Petfinder API to display pets available for adoption and recently adopted. All of the site's content will be pre-rendered in advance, with the exception of pets available for adoption, which the user can update on the client side.
The application consists of two kinds of pages: the home page ( / ) and listings for each pet animal type ( /types/<type> ). Visit the live demo here: https://petfinder-nextjs.vercel.app/

To get started, initialize the project by creating its directory and package.json file. Note: If you want to skip these steps, then run the command npx create-next-app@latest --ts to automatically scaffold a Next.js project with TypeScript, and proceed to the next section of this tutorial. Install the dependencies (next, react, and react-dom) and dev dependencies (TypeScript tooling such as typescript, @types/react, and @types/node, plus Prettier). Add a .prettierrc file with an empty configuration object to accept Prettier's default settings, and add npm scripts to the package.json file for running the development server, building the application, serving the production build, and linting.

At the root of the project directory, create an empty TypeScript configuration file (tsconfig.json). By running next, Next.js automatically updates the empty tsconfig.json file with Next.js's default TypeScript configuration. (tsconfig.json) Additionally, this command auto-generates a next-env.d.ts file at the root of the project directory, which guarantees Next.js types are loaded by the TypeScript compiler. (next-env.d.ts)

To further configure Next.js, create a next.config.js file at the root of the project directory. This file allows you to override some of Next.js's default configurations, such as the project's base Webpack configuration and the mapping between incoming request paths and destination paths. For now, let's just opt in to React's Strict Mode to spot potential problems, such as legacy API usage and unsafe lifecycles, in the application during development. (next.config.js; see the sketch below.)

Similar to the tsconfig.json file, running next lint automatically installs the eslint and eslint-config-next dev dependencies and creates a new .eslintrc.json file with Next.js's default ESLint configuration. Note: When asked "How would you like to configure ESLint?" by the CLI, select the "Strict" option. (.eslintrc.json)

This application will be styled with utility CSS rules from the Tailwind CSS framework. If you are not concerned with how the application is styled, then you don't have to set up Tailwind CSS for the Next.js application and can proceed to the next section. Otherwise, follow the directions here to properly integrate Tailwind CSS.

To register for a Petfinder account, visit Petfinder for Developers and click on "Sign Up" in the navigation bar, then follow the registration directions. Upon creating an account, you can find the API key (passed as the client ID in the request payload to the POST https://api.petfinder.com/v2/oauth2/token endpoint) and secret (passed as the client secret in the same request payload) under your account's developer settings. Here, you can also track your API usage; each account comes with a limit of 1,000 daily requests and 50 requests per second.

At the root of the project directory, create a .env file with environment variables for the client ID and secret, replacing the Xs in the sketch below with your account's unique values. (.env)
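Two of the files from this setup, sketched out. The next.config.js uses Next.js's documented reactStrictMode option; the .env variable names are hypothetical, so use whatever names your code reads:

```js
// next.config.js
module.exports = {
  reactStrictMode: true, // surfaces legacy API usage and unsafe lifecycles in development
};
```

```
# .env - hypothetical variable names; replace the Xs with your Petfinder credentials
PETFINDER_CLIENT_ID=XXXXXXXXXX
PETFINDER_CLIENT_SECRET=XXXXXXXXXX
```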
The home page features a grid of cards. Each card represents a pet animal type catalogued by the Petfinder API. These cards lead to pages that contain listings of pets recently adopted and available for adoption, along with a list of breeds associated with the pet animal type (e.g., Shiba Inu and Golden Retriever for dogs). Suppose you have to build this page with client-side rendering only. To fetch the types of pet animals from the Petfinder API, you must first request an OAuth access token and then request the list of types with that token. Initially, upon visiting the page, the user would be presented with a loader while the client performs these two requests. Having to wait on the API to process them before any content is shown on the page only adds to a user's frustrations. Wait times may be even worse if the API happens to be experiencing downtime or dealing with lots of traffic. You could store the access token in a cookie to avoid requesting a new access token each time the user loads the page; still, you are left with requesting the list of types on every page load. Note: For stronger security (i.e., to mitigate cross-site scripting by protecting the cookie from malicious JavaScript code), you would need a proxy backend system that interacts with the Petfinder API and sets an HttpOnly cookie with the access token on the client's browser after obtaining the token from the API. More on this later.

This page serves as a perfect example for using static site generation over client-side rendering. The types returned from the API will very rarely change, so fetching the same data for each user is repetitive and unnecessary. Rather, just fetch this data once from the API, build the page using this data, and serve up the content immediately. This way, the user does not have to wait on any outstanding requests to the API (since no requests will be sent) and can instantly engage with the content.

With Next.js, we will leverage the getStaticProps function, which runs at build time on the server side. Inside this function, we fetch data from the API and pass the data to the page component as props so that Next.js pre-renders the page at build time using the data returned by getStaticProps. Note: In development mode (npm run dev), getStaticProps gets invoked on every request.
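A hedged sketch of the getStaticProps flow just described. The Petfinder endpoints are the documented ones; the env variable names carry over from the hypothetical .env sketch above, and the AnimalType shape is reduced for brevity:

```tsx
// pages/index.tsx (data-fetching portion)
import type { GetStaticProps } from 'next';

// Minimal shape for this sketch; the tutorial's petfinder.interface.ts
// definitions would replace this.
interface AnimalType {
  name: string;
}

interface HomePageProps {
  types: AnimalType[];
}

export const getStaticProps: GetStaticProps<HomePageProps> = async () => {
  // 1. Exchange the client ID and secret for an OAuth access token.
  const tokenRes = await fetch('https://api.petfinder.com/v2/oauth2/token', {
    method: 'POST',
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: process.env.PETFINDER_CLIENT_ID ?? '', // hypothetical env var names
      client_secret: process.env.PETFINDER_CLIENT_SECRET ?? '',
    }),
  });
  const { access_token } = await tokenRes.json();

  // 2. Fetch the list of pet animal types with the token.
  const typesRes = await fetch('https://api.petfinder.com/v2/types', {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  const { types } = await typesRes.json();

  // 3. Hand the data to the page component as props;
  //    Next.js pre-renders the page with it at build time.
  return { props: { types } };
};
```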
Now, within the root of the project directory, create a pages directory, which will contain all of the page components. Next.js's file-system based router maps page components to routes. For example, pages/index.tsx maps to /, pages/types/index.tsx maps to /types, and pages/types/[type].tsx maps to /types/:type (:type is a URL parameter). Create three more directories: components, shared, and enums.

The Petfinder API documentation provides example responses for each of its endpoints. With these responses, we can define interfaces for the responses of endpoints related to pet animals, pet animal types, and pet animal breeds. Create an interfaces directory within the shared directory, and inside of it, create a petfinder.interface.ts file. (shared/interfaces/petfinder.interface.ts) Note: This tutorial skips over endpoints related to organizations.

Inside of the pages directory, create an index.tsx file, which corresponds to the home page at /. Let's build out the home page by first defining the <HomePage /> page component's structure. (pages/index.tsx)

Next, let's create the <TypeCardsGrid /> component, which renders a grid of cards, each representing a pet animal type. The component places the cards in a 4x2 grid layout for large screen sizes (width >= 1024px), a 3x3 grid layout for medium screen sizes (width >= 768px), a 2x4 grid layout for small screen sizes (width >= 640px), and a single column for mobile screen sizes (width < 640px). (components/TypeCardsGrid.tsx)

Then, let's create the <TypeCard /> component, which renders a card that represents a pet animal type. The card shows a generic picture of the pet animal type and a link to browse listings (recently adopted and available for adoption) of pet animals of that specific type. Note: The types returned from the Petfinder API do not have an id property to serve as both a unique identifier and a URL slug (e.g., the ID of type "Small & Furry" would be "small-furry"). In the next section, we will create a helper method that takes a type name and turns it into an ID. (components/TypeCard.tsx)

Since the Petfinder API does not include an image for each pet animal type, we can define an enumeration ANIMAL_TYPES that supplements the data returned from the API with an Unsplash stock image for each pet animal type. To account for the images' different dimensions in the <TypeCard /> component, we display the image as a background cover image of a <div /> and position the image such that the animal appears in the center of a 10rem x 10rem circular mask. (enums/index.ts)

Like single-page applications that don't fully reload the page when navigating between different pages, Next.js lets you perform client-side transitions between routes via the Link component of next/link. This component wraps around an <a /> element, and the href gets passed to the component instead of the <a /> element. When built, the generated markup ends up being just the <a /> element, carrying the href attribute and the client-side navigation behavior of the <Link /> component.
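A sketch of that pattern as it worked in the Next.js v12 era (the href goes on <Link />, which wraps an <a />); the card markup and props are illustrative:

```tsx
// components/TypeCard.tsx (abbreviated)
import Link from 'next/link';

export function TypeCard({ id, name }: { id: string; name: string }) {
  return (
    // The href lives on <Link />, not on the anchor
    <Link href={`/types/${id}`}>
      {/* The rendered markup ends up as this <a />, navigated client-side */}
      <a className="block rounded-lg p-4 shadow">{name}</a>
    </Link>
  );
}
```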
