Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Getting Started with Angular CLI

In this article, we will explore the different commands and features of the Angular CLI.

Angular is one of the most popular frameworks for web app development at the moment. Over the years, it has grown into a mature framework, and it now offers a host of features to supercharge your dev process. 🚀 A lot of new Angular developers talk about a steep learning curve, and this could be why: if you're new to Angular, you'll have your hands full trying to grasp all these new concepts and the terminology around them. To make things easier when setting up a project and scaffolding new pieces, the Angular team has built Angular CLI.

Angular CLI is the command line interface for creating, building and scaffolding new functionality for Angular applications. It provides all the commands you need to rapidly spin up new Angular applications, starting from the build process, all the way to unit tests, end-to-end tests, and even deployment. You can run npm i -g @angular/cli to install it.

Angular CLI allows you to scaffold a full project structure from scratch, with dependencies and build tooling preinstalled and configured. You can use the ng new command to set it up. When you run this command, the Angular CLI asks you a couple of questions so it can set things up correctly, then creates the project folder structure for you. This is a pretty standard NPM-based project which uses TypeScript and webpack. Since it is a single page application, it only contains one index.html file. The webpack configuration is abstracted away for simplicity, and all the configurable parameters are set in the angular.json file. The project uses karma and jasmine for unit testing and protractor for end-to-end testing. It comes with a single module pre-created, with an AppComponent which is the entry point for anyone using the app. We will learn more about modules, components and other types of files in the upcoming sections.

An Angular module, or NgModule, is a container for components, services, pipes and any other functionality. Think of it as a way to group your application features or pages. Modules can export functionality that's declared within them, and can also import other modules. In theory, you could just use the one AppModule that's pre-created when you generate the project, but as your application grows, you will find it useful to logically group functionality to facilitate advanced features such as lazy loading. You can generate Angular modules using ng generate module <name>, or the shorthand ng g m <name>. The @NgModule declaration contains properties such as declarations, imports, exports, providers and bootstrap.

An Angular component is a unit of functionality that comprises a TypeScript class, an HTML template and a stylesheet. There is also a .spec.ts file included, which will contain tests for your functionality. You can generate Angular components using ng generate component <name>, or the shorthand ng g c <name>. When you run the ng generate component command, these files are generated, and the nearest module is updated to include the new component in its declarations array.

Angular allows for dependency injection using the concept of providers. Any class that includes the @Injectable decorator is considered a provider. It can be injected into a component, service or pipe through its constructor. You can generate an injectable service using ng generate service <name>, or the shorthand ng g s <name>. Consider the PizzaService example in the sketch below: its @Injectable decorator specifies that the class is provided in the root module. This is a more recent way of configuring a provider.
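The commands and generated files described above are easiest to see side by side. Here is a minimal sketch; the module and service names (pizza, PizzaService) are illustrative, and the file shapes follow the CLI's defaults:

```
npm i -g @angular/cli          # install the CLI globally
ng new my-app                  # scaffold a new workspace
ng generate module pizza       # or: ng g m pizza
ng generate component pizza    # or: ng g c pizza
ng generate service pizza      # or: ng g s pizza
```

```typescript
// pizza.module.ts: roughly what a generated NgModule looks like
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { PizzaComponent } from './pizza.component';

@NgModule({
  declarations: [PizzaComponent], // components, directives and pipes owned by this module
  imports: [CommonModule],        // other modules whose exports are used here
  exports: [PizzaComponent],      // functionality exposed to importing modules
})
export class PizzaModule {}

// pizza.service.ts: an injectable service provided in the root injector
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class PizzaService {
  getToppings(): string[] {
    return ['mozzarella', 'basil', 'tomato'];
  }
}
```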
Another way is to include it in the providers array of the required NgModule. If PizzaService were to require functionality from another service, that service could be injected through PizzaService's constructor. Services are generally used to provide shared functionality such as making HTTP requests, transforming data, and so on. Angular includes its own HTTP client which is based on RxJS observables. This is quite different from traditional Promise-based implementations and can be a bit hard to wrap your head around. This article is a great resource that explains these concepts. Services are instantiated once for each module they are provided in, so if a service was provided in the root module, it would only be instantiated once for your entire application.

An Angular pipe is a special function that takes a value, applies some transformation and returns the transformed value. You can generate a pipe using ng generate pipe <name>, or the shorthand ng g p <name>. When a pipe is generated, the nearest module's declarations array is updated to include the new pipe. As an example, consider a ShoutPipe that transforms any input into uppercase (see the sketch after this section). A pipe can be instantiated in two ways, for example applied in a template with the pipe syntax or instantiated directly in a class.

Angular CLI abstracts away all the build-related setup and configuration and provides a few commands that perform all these functions. The ng build command compiles your application as per the configuration specified in angular.json and tsconfig.json. It generates a dist folder with the build output. This command takes several command-line arguments. One of the most used variants is ng build --prod, which creates a production-ready build of your app. This applies optimisations such as minification, tree shaking and dead code elimination to reduce the overall bundle size.

When developing your application locally, the Angular CLI provides a means of spinning up a local web server using the ng serve command. This command makes use of Webpack Dev Server under the hood. It watches for any changes in your source files and automatically reloads your browser tab(s).

Angular CLI projects come with Karma and Jasmine pre-installed and configured. Karma is the test runner, and Jasmine is the testing framework. There are also Angular APIs available for creating test modules and mocking out dependencies. You can run any tests you've written using the ng test command. This will run the tests in watch mode, which means they will be re-run whenever the source files change. You can run ng test with the argument --watch=false in case you'd like to do a single run, e.g. in your CI environment.

Angular CLI includes TSLint tooling to enforce code style and detect violations. It provides an opinionated set of rules, which can be overridden if required by modifying the tslint.json file. You can run ng lint to detect code style violations in your source code.

The angular.json file contains all the configuration for how your Angular workspace is organized, built and served. A workspace can contain multiple projects, and they are all included in the projects object. Each project contains an architect section, where the builders and configuration for each command are defined. By default, a project uses the builders specified by the CLI. Custom builders can be installed as NPM dependencies and can be specified in place of the default ones. You can also write custom builders as per the guide available here. This article from Angular In Depth is also a great resource for understanding how builders work.
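Returning to the ShoutPipe described earlier, here is a sketch based on the article's description:

```typescript
// shout.pipe.ts: transforms any input into uppercase
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({ name: 'shout' })
export class ShoutPipe implements PipeTransform {
  transform(value: string): string {
    return value.toUpperCase();
  }
}
```

In a template it is applied with the pipe syntax, e.g. {{ title | shout }}; in a class, it can be instantiated with new ShoutPipe() and its transform method called directly.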
Back in angular.json, each project also has a schematics property, which can be used to specify extensions to Angular CLI functionality. There are different types of schematics; they are a relatively recent addition to the Angular ecosystem, and more information on how they work and how to use them can be found here and here.

In this article, we have learnt what Angular is, and the reasons why new Angular developers perceive it to have a steep learning curve. We have looked at the toolchain that Angular CLI provides and its different commands in depth. Along the way, we have also explored most of the important concepts of the Angular framework. We have seen how Angular CLI's tooling and ecosystem can abstract away the boring and fiddly parts of web development and allow you, as an application developer, to focus on developing your application.

Nx is a great set of tools that extends the functionality of the Angular CLI; this article explains how to set it up for your workspace. There are a number of great articles on Newline about specific Angular-related topics. The documentation available on the Angular website is top-notch and should answer most questions you may have. https://indepth.dev/ (previously Angular in Depth) is a great resource for articles, blogs and tutorials about Angular. https://blog.angular.io/ has articles written by Angular team members and is generally the first place to feature new announcements about the framework.


Writing Retrowave in Angular

The Web Audio API has been around for a while now and there are lots of great articles about it, so I will not go into details regarding the API. What I will tell you is that Web Audio can be Angular's best friend if you introduce them properly. So let's do this.

In the Web Audio API you create a graph of audio nodes that process the sound passing through them. They can change volume, introduce delay or distort the signal. Browsers have special AudioNodes with various parameters to handle this. Initially, one would create them with factory functions of AudioContext, but they have since become proper constructors, which means you can extend them. This allows us to use the Web Audio API elegantly and declaratively in Angular: directives are classes, and they can extend existing native classes.

A typical feedback loop that creates an echo effect with Web Audio is assembled imperatively: we create objects, set parameters, and manually wire up the graph using the connect method. In that example, we use the HTML audio tag, and when the user presses play, they hear an echo on their audio file. We will replicate this case using directives.

The AudioContext will be delivered through Dependency Injection. Both GainNode and DelayNode have only one parameter each: gain and delayTime. That is not just a number, it is an AudioParam; we will see what that means a bit later. To declaratively link our nodes into a graph, we will add an AUDIO_NODE token that all our directives provide. Each directive takes the closest node from DI and connects to it. We've also added exportAs, which allows us to grab a node with template reference variables. Now we can build the graph in a template, and we end a branch and direct sound to the speakers with waAudioDestinationNode.

To create loops like in the echo example above, Dependency Injection is not enough, so we will make a special directive that allows us to pass a node as an input and connect to it. Both of those directives extend GainNode, which creates an extra node in the graph. This allows us to disconnect it easily in ngOnDestroy: we do not need to remember everything that is connected to our directive, we can just disconnect it from everything at once.

The last directive we need to complete our example is a bit different. It's a source node, and it's always at the top of our graph. We will put a directive on the audio tag, and it will turn the tag into a MediaElementAudioSourceNode for us. With these pieces in place, we can recreate the echo example with our directives. There are lots of different nodes in the Web Audio API, but all of them can be implemented using a similar approach.

Two other important source nodes are OscillatorNode and AudioBufferSourceNode. Often we do not want to add anything to the DOM, and there is no need to provide audio file controls to the user. In that case, AudioBufferSourceNode is a better option than the audio tag. The only inconvenience is that it works with an AudioBuffer, unlike the audio tag, which takes a link to an audio asset. We can create a service to mitigate that, and then a directive that works both with an AudioBuffer and an audio asset URL.

Audio nodes have a special kind of property: AudioParam, for example gain in GainNode. That's why we used a setter for it. Such a property's value can be automated: you can set it to change linearly, exponentially or even over an array of values in a given time. We need some sort of handler which would allow us to take care of this for all such inputs of our directives.
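In raw Web Audio terms, that kind of automation is expressed through AudioParam's scheduling methods. A minimal sketch:

```typescript
const context = new AudioContext();
const gain = new GainNode(context);

// Ramp the gain linearly from 0 to 1 over one second
gain.gain.setValueAtTime(0, context.currentTime);
gain.gain.linearRampToValueAtTime(1, context.currentTime + 1);

// Exponential and value-curve automation follow the same pattern:
// gain.gain.exponentialRampToValueAtTime(value, endTime)
// gain.gain.setValueCurveAtTime(arrayOfValues, startTime, duration)
```

The handler described next hides exactly this kind of scheduling behind a directive input.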
A decorator is a good option for this case; it passes processing to a dedicated function, and strong types will not allow us to accidentally use it for a non-existent parameter. So what would the AudioParamInput type look like? Besides number, it includes an automation object. The processAudioParam function translates those objects into native API commands. It's pretty boring, so I will just describe the principle: if the current value is 0 and we want it to change linearly to 1 in a second, we would pass {value: 1, duration: 1, mode: 'linear'}. For complex automation we also need to support an array of such objects.

We would typically pass an automation object with a short duration instead of a plain number, because it prevents audible clicking artifacts when a parameter changes abruptly. But it's not convenient to do this manually all the time, so let's create a pipe that takes the target value, duration and an optional mode as arguments.

Besides that, an AudioParam can be automated by connecting an oscillator to it. Usually a frequency lower than 1 Hz is used, and it is called an LFO (Low Frequency Oscillator). It can create movement in sound. In the example below, it adds texture to otherwise static chords by modulating the frequency of a filter they pass through. To connect an oscillator to a parameter, we can use our waOutput directive; we can access the node thanks to exportAs.

The Web Audio API can be used for different things, from real-time processing of a voice for a podcast to math computations, Fourier transforms and more. Let's compose a short music piece using our directives. We will start with a simple task: a straight drum beat. To count beats, we will create a stream and add it to DI. We have 4 beats per measure, so we map our stream so that it gives us true at the beginning and false in the middle of each bar, and we use it to play audio samples.

Now let's add a melody. We will use numbers to indicate notes, where 69 means the middle A note. The function that translates this number to frequency can easily be found on Wikipedia (a sketch of it follows at the end of this article). Our component will play the right frequency for each note on each beat, and inside its template we will have a real synthesizer!

But first we need another pipe. It automates volume with an ADSR envelope, which stands for "Attack, Decay, Sustain, Release". In our case, we need the sound to start quickly and then fade away, so the pipe is rather simple. Now we can use it for our synth tune.

Let's figure out what's going on here. We have two oscillators. The first one is just a sine wave passed through the ADSR pipe. The second one is the same echo loop we've seen, except this time it passes through a ConvolverNode, which creates room acoustics using an impulse response. That is a big and interesting subject of its own, but it is outside this article's scope. All the other tracks in our song are made similarly: nodes are connected to each other, and parameters are automated with LFOs or changed smoothly via our pipes.

I only went over a small portion of this subject, simplifying corner cases. We've made a complete conversion of the Web Audio API into a declarative Angular open-source library, @ng-web-apis/audio. It covers all the nodes and features. This library is a part of a bigger project called Web APIs for Angular, an initiative with the goal of creating lightweight, high-quality wrappers of native APIs for idiomatic use with Angular. So if you want to try, say, the Payment Request API, or play with your MIDI keyboard in the browser, you are very welcome to browse all our releases so far.
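For reference, the note-to-frequency translation mentioned above is standard equal-temperament math, where MIDI note 69 is A4 at 440 Hz. A sketch:

```typescript
// Convert a MIDI note number to a frequency in hertz
function noteToFrequency(note: number): number {
  // Each semitone is a factor of 2^(1/12); note 69 is A4 (440 Hz)
  return 440 * Math.pow(2, (note - 69) / 12);
}

noteToFrequency(69); // 440 Hz, the middle A from the tune above
noteToFrequency(81); // 880 Hz, one octave higher
```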


How to Show Google Maps in React Applications with google-map-react

In this article, we will look at how to display interactive Google maps in React applications using google-map-react. We will see how to add and configure this library, display Google maps with it, and also display markers on a map.

Google Maps Platform is a set of APIs and SDKs that helps to work with maps, routes, and places. It allows you to display static and dynamic maps, get routes and directions, and add geolocation and place search to web applications. Using Google Maps Platform directly may become a tedious task. If we want to display a Google map, we need to download the map SDK from Google and inject it into our application. After that, we can use the map API to display an interactive map. There are libraries such as google-map-react and react-google-maps that wrap the Google Maps API and make it easy to display maps in React applications. In this article, we'll be using the google-map-react library, but react-google-maps would be a valid choice as well.

For this article, we assume that you're familiar with React component basics, such as working with a component's properties. Before adding a Google map, we need to obtain a Google Maps API key, which is required by the map SDK. This Quickstart will help you to start working with the Google Maps API and get your API key.

google-map-react is a library that helps to display interactive Google maps in React applications. It allows rendering a map with React components on top of it. We start by adding the google-map-react package to the project with npm install google-map-react. Now we can display a map with markers on it.

We start by importing the GoogleMapReact component that we'll use to display our map. Next, we import the MyMarker component, a simple component we'll be using to display markers on the map. It renders a circle with text and a tooltip, using the text and tooltip properties. The data for our markers is defined in the points array: each point of interest has coordinates, an id, and a title.

Finally, we use the GoogleMapReact component to render a map. We pass the Google Maps API key to the bootstrapURLKeys property, along with preferred localization settings. The defaultCenter and defaultZoom properties define the starting position and zoom of the map. Markers to display on the map are defined as children of the GoogleMapReact component; in our case, we map over the points array to create a list of MyMarker components. Each marker component should have the lat and lng properties specifying its latitude and longitude. This information is used by GoogleMapReact to properly position the marker.

It is important to set a width and height on the map container. If the container has no size, the map collapses to zero width and height and won't be visible. In our example, this is done by applying the App CSS class to the container.

When showing markers on Google maps, we often need to react when users hover over a marker or click it. The GoogleMapReact component manages markers and detects when we hover over them. When we hover over a marker, the component instance for that marker receives the $hover property. In the component for a marker, we can check this property and apply additional hovering logic. For hovering, we can also use the CSS :hover pseudo-selector. To process clicks, we can use standard React click handling; it is up to us to handle clicks and provide any additional logic.
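Before handling hover and click, here is a sketch putting the rendering steps together. The API key, coordinates and points data are placeholders:

```tsx
import React from 'react';
import GoogleMapReact from 'google-map-react';
import { MyMarker } from './MyMarker';

// Hypothetical points of interest
const points = [
  { id: 1, title: 'Round Pond', lat: 51.506, lng: -0.184 },
  { id: 2, title: 'The Serpentine', lat: 51.505, lng: -0.164 },
];

export function App() {
  return (
    // The container must have an explicit size or the map collapses
    <div className="App" style={{ width: '100vw', height: '100vh' }}>
      <GoogleMapReact
        bootstrapURLKeys={{ key: 'YOUR_API_KEY' }}
        defaultCenter={{ lat: 51.506, lng: -0.184 }}
        defaultZoom={13}
      >
        {points.map(({ id, title, lat, lng }) => (
          <MyMarker key={id} lat={lat} lng={lng} text={id} tooltip={title} />
        ))}
      </GoogleMapReact>
    </div>
  );
}
```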
For example, let's apply additional styling to markers on hover and log marker clicks to the console. For this demo, we've modified the MyMarker component: we check the $hover property to apply an additional hover CSS class when it's set, and we've added a handler for the onClick event that writes a message to the console.

The google-map-react library makes it easy to display Google maps with markers. It wraps the Google Maps SDK and provides an easy-to-use GoogleMapReact component. We can configure the map with this component and display markers on top of it. Any React component can be used as a marker, which makes it easy to customize the map to your requirements.
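A sketch of the modified MyMarker described above; the class names and log message are illustrative:

```tsx
import React from 'react';

interface MyMarkerProps {
  lat: number;
  lng: number;
  text: string | number;
  tooltip: string;
  // Injected by GoogleMapReact when the pointer is over the marker
  $hover?: boolean;
}

export function MyMarker({ text, tooltip, $hover }: MyMarkerProps) {
  const handleClick = () => {
    console.log(`You clicked on ${tooltip}`);
  };

  return (
    <div className={$hover ? 'circle hover' : 'circle'} onClick={handleClick}>
      <span className="circleText" title={tooltip}>
        {text}
      </span>
    </div>
  );
}
```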

React Redux: An Intro to The Leading State Management Solution

📰 This blog post highlights why Redux is one of the leading state management solutions for React, and how the React Redux library helps us utilize the powerful logic that Redux provides.

All React developers have at one time or another worked with state variables that have gotten out of hand. This happens most often when working with deeply nested components. Whenever we need to share state between components at different levels, we need to 'lift the state', which means keeping the state in the closest common parent component. That's fine when you're only one or two components deep, but it still isn't ideal because of how many extra lines of code we have to write. Not to mention, it can be a pain to debug! 😩 This is why we have 'state management solutions', such as Redux.

Apart from knowing the basics of React and state management, you should also understand functional programming with JavaScript. It would be quite useful to know ES6 syntax, as we use the spread operator and default parameters in the examples below. Since this is not a ReactJS tutorial, we recommend you check out the official React docs to get a basic understanding.

Redux is undoubtedly one of the first solutions many developers consider when trying to solve this problem, and rightly so. Redux enables us to keep our state in a global store and gives us the power to access that state in any connected component. This means we don't need to manually pass props (avoiding so-called prop-drilling), and we get a lot more debugging features that can take the developer experience to the next level. As it says on the official Redux website, it is "A Predictable State Container for JS Apps".

A few years ago, when I picked up React, I immediately tried to jump into Redux. Unlike React, which was easy to grasp and implement quickly, Redux just didn't make sense to me, and I'm sure most new developers feel the same. There's a lot of boilerplate, and you end up wondering why you have to create so many files just to solve this state management issue you're facing. However, it only took my first couple of React projects for the need for a state management solution such as Redux to become clear. In essence, apart from managing some sort of global or shared state, most medium-to-large scale applications require predictability as well as ease of debugging. Redux sets up a foundation for the data flow in your application, so when you need to work with complex state changes or implement new features in your existing React app, you can do it with confidence. Redux is not the only state management solution available to developers, and we'll discuss other options further down below.

Redux works on a modified implementation of the Flux architecture. In short, we have a central store that manages the state of the complete application. To modify that store, we need to trigger certain functions (or actions) that can safely modify the state. Before understanding the complete Redux flow, let's take a look at some Redux terminology.

Actions are plain JavaScript objects that have a 'type' key, as well as an optional 'payload'. We use the dispatch function from react-redux within our React components to broadcast the action object, which the appropriate reducer function then receives. Action Creators are JavaScript functions that define and return action objects (see the sketch below). It is a convention to describe and save action types as constant strings. This is not necessary, but it provides us with a lot of benefits such as easy debugging and consistency.
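A minimal sketch of such constants and action creators, using hypothetical names for the shopping cart built later in this article:

```js
// actions.js: action types as constants, plus action creators
export const ADD_TO_CART = 'ADD_TO_CART';
export const CLEAR_CART = 'CLEAR_CART';

export const addToCart = (item) => ({ type: ADD_TO_CART, payload: item });
export const clearCart = () => ({ type: CLEAR_CART });
```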
Reducers are 'pure functions', which means they always return the same output for the input you pass to them. This is what makes our state mutations with Redux predictable and easy to reason about. They are responsible for making changes to the state in our store according to the action that was dispatched.

The store is, as the name suggests, where our state lives. Redux works with a single store object; however, it can be separated into different files and objects to make it easier to work with. The only way to interact with the store is through reducers.

We do not call any asynchronous tasks within reducers, to keep true to their 'pure function' nature. This is why we need to intervene in the flow to make any necessary asynchronous calls, such as HTTP requests, in middleware functions. The most common middleware libraries are Redux Thunk, Saga and Observable, all of which have varying levels of difficulty and functionality. As always, it is a good idea to research all of your tools before opting for them in your projects.

The Redux flow follows these steps:

1. A component dispatches an action.
2. The reducer receives the action and computes the new state.
3. The store saves the new state returned by the reducer.
4. Connected components read the new state and re-render.

If we're using a middleware library, our self-defined middleware function is executed between steps 1 and 2, when the action is dispatched. If we're calling an API, we typically dispatch further actions based on the status of the request. For example, if our request to fetch a piece of data was successful, we may dispatch a 'success' action; otherwise, we may dispatch a 'failed' action.

Let's try implementing a Redux store from scratch. Before we begin, we need to initialize a React application, and for that we'll use create-react-app to get up and running in a minute. Simply open up a terminal and type npx create-react-app redux-tutorial. We're using npx, which executes the library without having to install it globally. This command creates a new folder called redux-tutorial where our newly created React app lives. Inside the project folder, we need to install redux, which is the core Redux library, as well as react-redux, which provides us with React-specific bindings for Redux: npm i redux react-redux. Finally, type npm start to spin up a local development server.

Now, referencing the data flow defined in the previous section, let's create the Redux components step by step. At the root level, create a store.js file where we will create our Redux store. Although we can configure the store to use various Redux tools, including middlewares, here we will simply use our single reducer to initialize the store and export it. We can create and combine multiple reducers; however, for the sake of simplicity, let's define and export a single reducer. As we discussed, reducers are pure JavaScript functions. They accept the initial state as the first argument and the dispatched action as the second. We will use a switch-case block to determine what to do depending on the type of the action.

Actions are plain JavaScript objects. We simply use action creators to get a function we can call in our application, instead of manually creating an object every time. This helps us with consistency and debugging, similar to how constants help us. Since we need an item that we can add to our shopping cart, we pass it as an argument to our action creator, which sets the payload.

In our App.js, we can use local state to synchronize our input field. Our "Add to Cart" and "Clear Cart" buttons simply invoke local functions. To dispatch actions and select our Redux state, we need to import two hooks from react-redux.
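Before wiring up the component, here is a minimal sketch of what store.js could look like, reusing the hypothetical action types from earlier:

```js
// store.js: a single reducer initializes the store
import { createStore } from 'redux';
import { ADD_TO_CART, CLEAR_CART } from './actions';

const initialState = [];

// A pure function: (state, action) => new state
const cartReducer = (state = initialState, action) => {
  switch (action.type) {
    case ADD_TO_CART:
      // Return a new array instead of mutating the old one
      return [...state, action.payload];
    case CLEAR_CART:
      return [];
    default:
      return state;
  }
};

export const store = createStore(cartReducer);
```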
Then we can initialize the dispatch function by calling useDispatch(), and read our state with useSelector(). The useSelector hook accepts a selector function that receives our Redux state. In this case, since we're only dealing with a single array, we can return the whole state, which is our shopping cart. Now we can import our action creators, which create the action objects we dispatch.

That's it! We have a simple shopping cart. It may be impractical, yes, but it is highly useful for clarifying the basic concepts of Redux. Clicking the buttons in the UI triggers the respective state changes in the store, and the selector returns the state array, which we render as a list.

As you can see, even for the simplest use cases we need to write a lot of code when using Redux. We end up creating multiple files for reducers, action creators, middlewares, and so on. Redux Toolkit aims to simplify our Redux logic by automatically creating constants and action creators from our reducer definition. All we have to do is create a 'slice' of the global state and define our reducer; we can simply destructure and grab our actions from that slice (see the sketch at the end of this article). Moreover, it comes with redux-thunk as the default middleware. It also uses the immer library to let us modify the state directly. Remember how I said we can't use state.push() because of immutability? Well, with immer we can do exactly that and more. To read more about Redux Toolkit, check out the documentation.

Although Redux is the leading state management solution, there are still other options we can consider. Let's discuss two of them briefly.

The Context API is NOT a state management solution. Rather, it's simply a way to bypass the need for prop-drilling. We still have to manually create a store and manage how the state changes with respect to our actions. In a few lines, this is how Context works: we create a Context using the API, which gives us a Provider and a Consumer. At the common parent of the components where the state should be saved, we create a Provider and wrap the children. This allows us to pass a 'value' prop, which is our state, and this value prop can be accessed anywhere in the children using the Consumer. One important thing to note is that if we're not using primitive types, our connected components may re-render every time the state changes, even if the change is unrelated. To fix this, we can create multiple Providers and Consumers for each separate piece of state, although that can be a bit time-consuming.

MobX is quite similar to Redux at a quick glance; however, it's different where it counts. Both libraries are widely used, but Redux is much more popular, which means it has a bigger developer community and more support. MobX does not use pure reducers, which makes it difficult to test and hard to scale in large environments. This is often the reason developers choose to opt for Redux.

State management is hard. Redux makes it not-so-hard. Once you get over the initial learning curve, you can be confident that Redux will solve most of your state management problems. Redux also has a wonderful ecosystem with various middleware and tools, such as Redux Toolkit, which makes it very easy to get up and running with Redux in minutes.
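As a closing sketch, here is roughly how the same cart could be written with Redux Toolkit's createSlice; the slice name and reducer names are illustrative:

```js
import { createSlice, configureStore } from '@reduxjs/toolkit';

const cartSlice = createSlice({
  name: 'cart',
  initialState: [],
  reducers: {
    // immer lets us "mutate" the draft state safely here
    addToCart(state, action) {
      state.push(action.payload);
    },
    clearCart() {
      return [];
    },
  },
});

// Action types and creators are generated for us
export const { addToCart, clearCart } = cartSlice.actions;
export const store = configureStore({ reducer: cartSlice.reducer });
```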

A journey to Asynchronous Programming: NodeJS FS.Promises API

This post is about how Node.js performs tasks, how and why one would use async methods, and why and when one should use sync methods. We will also see how to make use of the new FS.Promises API.

Throughout this post, we will look at the many ways to write asynchronous code in JavaScript, and also at:

✅ How asynchronous code fits in the event loop
✅ When you should resort to synchronous methods
✅ How to promisify FS methods and the FS.Promises API

To make the best use of this post, one must already:

✅ Be sufficiently experienced in JavaScript and NodeJS fundamentals (variables, functions, etc.)
✅ Have had some level of exposure to the FileSystem module (optional)

Once you're done reading this post, you will feel confident about asynchronous programming and will have learned something new. If you hang tight, there's also some bonus content regarding best practices when using certain FileSystem methods!

JavaScript achieves concurrency through what is known as an event loop. The event loop is responsible for executing the code you write, processing any events that fire, and so on. It is what makes it possible for JavaScript to run on a single thread and handle asynchronous tasks; this just means that JavaScript does one thing at a time. This might sound like a limitation, but it is definitely something that helps. It allows you to work without worrying about concurrency issues, and surprisingly, the event loop is non-blocking! Unless, of course, you as a developer purposely do something to block it. The loop runs as long as your program runs, hence the name event loop.

To better understand asynchronous programming, though, one must understand the Call Stack, the Job Queue and the Callback Queue. Let's see what a typical execution flow looks like (a sketch follows this section). Initially, the synchronous console.log() tasks run in the order they were pushed onto the call stack. Then the Promise thenables are pushed into the Job Queue, while setTimeout's callback function is pushed into the Callback Queue. However, as the Job Queue is given a higher priority than the Callback Queue, the thenables are executed before the callback functions. What's a promise or a thenable, you ask? That's what we will look at in the next topic!

As you saw with setTimeout, a callback function is one of the ways JavaScript allows you to write asynchronous code. In JavaScript, even functions are objects, and because of this a function can take another function as an argument and can also be returned by functions. A function that takes another function as an argument is called a higher-order function, and a function that is passed as an argument to another function is what is known as a callback. But quite often, deeply nested callbacks become what callbackhell describes, showing how extremely complex and difficult it can get to maintain callbacks in a large codebase. Don't panic! That's why we have promises.

A promise is an object that will produce some value in the future. When? We can't say; it depends. However, the value that is produced is one of two things: either a resolved value, or a reason why it couldn't be resolved, which usually indicates something is wrong. A promise goes through a lifecycle of pending, fulfilled and rejected states, nicely visualized in MDN's great resource on promises.
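As a quick illustration of the queue ordering discussed earlier, here is a small sketch; the log labels are illustrative:

```js
console.log('first');

setTimeout(() => console.log('fourth: callback queue'), 0);

Promise.resolve().then(() => console.log('third: job queue'));

console.log('second');

// Output order: first, second, third: job queue, fourth: callback queue.
// The job (microtask) queue is drained before the callback (macrotask) queue.
```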
Still, promises didn't provide all the cleanliness we wanted, because it was quite easy to end up with a whole lot of thenables one after the other. This is why the async/await syntax was introduced, and it looks a whole lot better than what you saw in all the previous code examples!

Before we jump into the exciting FS.promises API, we must talk about the often unnoticed and unnecessarily avoided synchronous FileSystem methods. Remember how I mentioned that you can purposely block the event loop? A synchronous FS method does just that. You might have heard many times that you should avoid synchronous FS methods like the plague because they block the event loop, but trust me, there are times when you can use them: when the code runs only once (for example, during initialization), and when everything that follows depends on its result.

A typical use case that satisfies both conditions is a DataStore, a means of storing products, whose constructor uses synchronous methods. This is completely acceptable, as the constructor function runs only once per creation of a new DataStore instance. Also, it is essential to check that the file is available, and create it if needed, before it is used by any other function.

The asynchronous FileSystem methods in NodeJS commonly use callbacks because, at the time they were made, Promises and async/await hadn't come out, nor were they even at experimental stages. The key advantage these methods provide over their synchronous siblings is that you do not end up blocking the event loop when you use them. This allows us to write better, more performant code. When code is run asynchronously, the CPU does not wait idly until a task is completed, but moves on to the next set of tasks. For example, take a task that takes 200ms to complete. If a synchronous method is used, the CPU will be occupied for the entire 200ms, but with an asynchronous method, around 190ms of that time is freed up and can be used by the CPU to perform any other tasks that are available.

A typical asynchronous FileSystem call is distinguished by the lack of a Sync suffix and the use of a callback function (see the sketch at the end of this section). When secret.txt has been completely read, the callback function is executed and the secret data stored in it is printed to the console.

As humans, we're prone to making silly mistakes, and when frustrated or under a lot of stress, we tend to make unwise decisions. One such decision is mixing synchronous code with asynchronous code! Consider a situation where we read secret.txt asynchronously and then delete it with a synchronous call: due to the nature of how NodeJS tackles operations, it is very likely that the file is deleted before we actually read it. Thankfully, if we are catching the error, we will at least know that the file doesn't exist anymore. It is best not to mix asynchronous code with synchronous code; being consistent is mandatory in a modern codebase.

Before FS.promises was introduced, developers had to resort to a few troublesome techniques. You might not need them anymore, but in the unlikely event that you end up using an old version of NodeJS, knowing how to achieve promisification will help greatly.
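The callback-style read described above would look roughly like this:

```js
const fs = require('fs');

// Asynchronous read: Node moves on to other work while the file is read
fs.readFile('secret.txt', 'utf8', (err, data) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data); // runs once secret.txt has been completely read
});
```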
One method is to use the promisify method from the NodeJS util module, for example util.promisify(fs.readFile). But this only turns one method into its promisified version at a time, so some developers used an external module known as bluebird, which can promisify a whole module at once. Some developers still use bluebird as opposed to the natively implemented Promises API, due to performance reasons.

As of NodeJS version 10.0, you can use FS.promises, a solution to all the problems you'd face with thenables when you use Promises. You can neatly and directly use the FS.promises API with the clean async/await syntax, without any other external dependencies (see the sketch at the end of this article). It's much cleaner than the code you saw in the callback hell example, and the promises example as well! One must note, however, that async/await is simply syntactic sugar, meaning it uses the Promise API under the hood.

File streams are unfortunately one of the most unused or barely known concepts in the FileSystem module. To understand how a FileStream works, you should look at the Streams API in the NodeJS docs. One very common use case for FileStreams is copying a large file: whether you use an asynchronous or a synchronous method, this often leads to a large amount of memory usage and a long wait. It can be avoided by using the FileSystem methods fs.createReadStream and fs.createWriteStream.

Phew! That was long, wasn't it? But now you should feel pretty confident about asynchronous programming, and you can use the FS.promises API instead of the often-used callback methods in the FileSystem module. Over time, we will see more changes in NodeJS; it is, after all, written in a language that is widely popular. What you should do now is check out the resources section and read some more about this, or try out Fullstack Node.js to further improve your confidence and get a lot of other different tools under your belt!
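Two closing sketches of the APIs discussed above, with placeholder file names: reading a file via FS.promises with async/await, and copying a large file with streams:

```js
const fs = require('fs');

// FS.promises with async/await
async function readSecret() {
  try {
    const data = await fs.promises.readFile('secret.txt', 'utf8');
    console.log(data);
  } catch (err) {
    console.error(err);
  }
}

readSecret();

// Stream-based copy: the file moves chunk by chunk, never fully in memory
fs.createReadStream('large-file.bin')
  .pipe(fs.createWriteStream('large-file-copy.bin'))
  .on('finish', () => console.log('Copy complete'));
```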
