Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Node.js Tutorial: How JavaScript on the backend can make your life easier.

Node.js is JavaScript on the backend, built around Google's highly optimized V8 JavaScript engine. Welcome to the world of asynchronous, non-blocking programming. Node.js excels at I/O-heavy, highly concurrent workloads; what it isn't as good at is CPU-intensive work, since long-running computations block the event loop. For this article, we assume that you're familiar with JavaScript basics, such as working with promises.

Node.js is available for most platforms. If you are using brew on macOS, you can install it with brew install node. For Windows, Linux, macOS and other operating systems, you can download the Node binaries from here, or you could use NodeSource to get your platform-specific binaries.

In an empty directory, create a file named app.js with a single line of code, e.g. console.log('hello'). To run it, open your shell of preference, cd to the directory, and type node app.js. Just like magic, you will get an output in the console saying hello. The main points to notice here are that this is JavaScript - we write to the console using mostly the same interface that we would use in the browser - and that immediately after that line of code executes, our application ends.

Let's use fs, the Node.js standard File System module, to write our hello world into a file instead. This is a good moment to explore in depth the life cycle of a Node application and track the execution flow of the program we just created. If we removed the await in front of the fs.writeFile() call, the entire flow would change: await works by suspending the current async function, effectively turning the rest of it into a promise continuation. Without the await, the following line would not wait for the fs.writeFile() operation to finish and would proceed immediately; instead of the rest of the function being queued through the event loop, only the fs.writeFile() operation itself would be.

With your install of Node.js comes the command-line tool npm, a package manager for Node. You can easily use it to download and install new modules to import into your application. For each Node application, you'll want a package.json: this file holds information about your project - author details, version, how to execute it, and any custom npm commands you define. package.json also contains a list of all your dependencies; that way, if someone wants to use your application, you only have to distribute the source, and all the dependencies can easily be installed with npm. To initialize the package.json file, run npm init, follow the steps in the command line, set the author details and the main entry point as index.js; the rest you can leave as defaults.

To install a package using npm we run npm install <name of the package>; we also add the --save flag to record newly added dependencies in our package.json file. Following that pattern, let's install our first npm module, Express, with npm install express --save. Create a new file index.js with a small web server in it and run the app with node index.js. If you go to http://localhost:3000/ you will get the response we coded. A fun thing to consider: this time, after we ran our application, it didn't automatically close! That's because app.listen() attaches a listener to the poll phase of the event loop that listens for HTTP calls on the declared port. Unless we manually detach it, it is going to stay up, and on each iteration of the event loop it stays alive listening for new HTTP calls, invoking our callback function for each one.
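The original listing isn't preserved here, so what follows is a minimal sketch of what that index.js might look like - the port and the response text are assumptions:

```js
// index.js - a minimal Express server (sketch; the original listing was not preserved)
const express = require('express');

const app = express();

// Respond to GET / with a plain-text greeting
app.get('/', (req, res) => {
  res.send('Hello World!');
});

// app.listen() attaches a listener to the event loop's poll phase,
// which keeps the process alive waiting for HTTP calls on port 3000
app.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});
```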
Let's add a new route to our API that, on each call, uses fs.readFile() to read the contents of the helloworld.txt file from the file system and send it back as the response to the request. Let's also change the original / endpoint to return JSON instead of plain text.

Our API is looking pretty nice, but what would happen if you removed helloworld.txt and then attempted to call the /readFile API endpoint? Oh no, we get an error, and since the error is thrown before the response is sent, we never get to send one back, so the browser waits until the request times out. The error in question is caused by attempting to read a nonexistent file with fs.readFile. In JavaScript, when an error is thrown, the execution of the current code block is aborted, so to keep control of the flow we need to catch the error. Error handling can be done elegantly in JavaScript using try-catch blocks, so let's apply that to our API endpoint (a sketch of the guarded endpoint follows at the end of this post). You should always catch your errors!

We've been through a lot within this short post, but by now you should have a well-rounded understanding of how a Node.js application works. Have fun implementing your own APIs - and keep coding! Check out the documentation of the modules we used in this post. If you have any questions - or want feedback on your post - come join our community Discord. See you there!
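As promised above, here is a hedged sketch of the guarded /readFile endpoint. The file name comes from the text, but the exact shape of the original listing is an assumption:

```js
// Sketch: reading helloworld.txt safely inside a try-catch block
// (assumes the `app` instance from the earlier server sketch)
const fs = require('fs').promises;

app.get('/readFile', async (req, res) => {
  try {
    const contents = await fs.readFile('helloworld.txt', 'utf-8');
    res.send(contents);
  } catch (err) {
    // Without this catch, a missing file would abort the handler
    // and the request would hang until it timed out
    res.status(500).json({ error: 'Could not read helloworld.txt' });
  }
});
```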


Building World-class Apps with Angular Material

Angular Material has been one of the most popular component libraries for Angular developers in recent times. In this article, we will discover the reasons for its popularity, learn how to set up a new Angular Material app, and customise it with your own theme and typography.

Oftentimes, the difference between an app that merely serves its purpose and one that delivers a great experience is determined by the amount of thought and effort that goes into the smallest details of the individual components. These aspects are all addressed by Material Design, a design system created by Google exclusively for building digital experiences. They use it across their vast suite of services, on web and mobile. Material Design is inspired by the real world - concepts such as textures, light and shadow are used to specify guidelines for layout, navigation, typography and many other aspects of a user interface.

Having to focus on all these aspects when developing individual apps would slow down a team or developer significantly. It would be great if there were a pre-built library of components that had all these considerations baked in, so we could use it to build apps that automatically look and feel great! ✨ Angular Material aims to do exactly that - it is the official implementation of Material Design for the Angular framework and is built by the Angular team. It provides a set of versatile, reliable and internationalized components that work across platforms, as well as a Component Development Kit (CDK) to help developers implement their own components. The components are highly customizable, but within the bounds of the Material Design specifications, which means that your app will always conform to standards of accessibility and ease of use.

The Angular Material team has implemented an ng add schematic to automate the installation of the library and all the associated tasks, so you can just run ng add @angular/material.

Each Angular Material component is exported as part of its own NgModule. To use a component, you must add the required NgModule to the imports array of the module you want to use it in. For example, if you wanted to use the Card component, you would import the MatCardModule. Once you've imported the module, you can use the card component in your template. The MatCardModule also exports a number of directives that can be used to structure content within a card, such as MatCardTitle, MatCardContent and MatCardFooter, among others.
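As a sketch - the original listing isn't preserved, so treat the module layout below as an assumption - importing MatCardModule into an app module looks along these lines:

```ts
// app.module.ts - registering MatCardModule (sketch)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { MatCardModule } from '@angular/material/card';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  // MatCardModule makes <mat-card> and its helper directives available
  imports: [BrowserModule, BrowserAnimationsModule, MatCardModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

In the template, the card is then used as a <mat-card> element, with the helper directives appearing as <mat-card-title>, <mat-card-content> and so on.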
The general rules for instantiating Angular Material components and directives in your templates follow this same pattern. In addition to the ng add schematic for adding Angular Material to your project, there are also ng generate schematics available, which can be used to generate composite Angular Material components to suit your use case; you run them with the ng generate command and the name of the schematic.

Now that we have seen how to get set up and use Angular Material components in our apps, let's dive into one of the more complex, relatively less explored parts of the framework - theming. As we've seen earlier, you can choose either a pre-built or a custom theme when adding Angular Material to your app. If you choose one of the pre-built themes, the path to the CSS file for the selected theme is added to the styles array in the build section for your project in angular.json. If you choose a custom theme, this path is not added. Instead, the global styles file is updated with the necessary ingredients to create your own Angular Material theme.

The contents of this file may appear a bit cryptic to start with, so let's look at it line by line. The first line imports the _theming.scss file from the Angular Material module. If we look at the build process for Angular Material, we can see that this file is generated by bundling together all the style files from various places in the codebase. Taking a closer look at theming-bundle.scss confirms this - it imports all the styles relating to the core components, theming, colours and so on.

Here is a simplified picture of how the Angular Material theming system works. The _theming.scss file provides two mixins - mat-core, which includes all the non-theme-related styles, and angular-material-theme, which includes all theme-related styles. The next line in our styles.scss file includes mat-core, adding all the non-theme-related styles to our app. The next few lines define three color palettes for use in the app - a primary palette for regular use, an accent palette for emphasising elements, and a warning palette for errors and warnings. In this case, the palettes used are pre-built ones provided by Angular Material.

We can also define our own color palettes according to the Material Design guidelines. A color palette uses a base color and specifies different variants of it to use at various levels of brightness. The color used for brightness level 500 is the default - a palette whose level 500 value is #3f51b5 is based on that color. Lower numbers indicate higher brightness and vice versa. The contrast section specifies which color should be used as a contrasting color - in text, for example - when a color is being used at a certain brightness level. We can then use this palette in our theme.

The next line in styles.scss defines our actual custom theme. There are two options available here - mat-light-theme and mat-dark-theme - which influence the colors used for elements and backgrounds: mat-light-theme uses dark elements and light backgrounds, and mat-dark-theme uses light elements and dark backgrounds.

The Material Design spec defines guidelines for fonts and typography. In Angular Material, these are implemented via CSS classes such as mat-title, mat-display-1, mat-display-2 and so on, which can be used to style text in terms of font type, font size and line height. Since it might be a bit cumbersome to specify these classes each time you add an element of text, Angular Material provides an easier way to apply them automatically: add the class mat-typography to the body element in your index.html, and the correct typographical styles will be added to descendant elements.

Angular Material uses the Roboto font by default. If you'd like to customize this, and other aspects of the typography, you can create your own custom configuration, overriding the font family and the config for individual levels such as headline and body-1 via the mat-typography-level mixin, which takes three arguments - a font size, a line height and a font weight.
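Putting these pieces together, a styles.scss along the lines described above might look like the sketch below. It uses the pre-v12 Sass API that the article's mixin names (mat-core, angular-material-theme, mat-typography-level) belong to; the specific palettes, font and level values are assumptions:

```scss
// styles.scss - custom theme and typography (sketch, pre-v12 Angular Material API)
@import '~@angular/material/theming';

// Non-theme-related core styles
@include mat-core();

// Three palettes: primary, accent and warn
$app-primary: mat-palette($mat-indigo);
$app-accent: mat-palette($mat-pink, A200, A100, A400);
$app-warn: mat-palette($mat-red);

// A light theme: dark elements on light backgrounds
$app-theme: mat-light-theme($app-primary, $app-accent, $app-warn);
@include angular-material-theme($app-theme);

// Custom typography: override the font family and individual levels
$app-typography: mat-typography-config(
  $font-family: 'Lato, sans-serif',
  // mat-typography-level(font-size, line-height, font-weight)
  $headline: mat-typography-level(32px, 48px, 700),
  $body-1: mat-typography-level(16px, 24px, 400)
);
@include angular-material-typography($app-typography);
```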
In this article, we have learnt how to set up Angular Material in an Angular app and discovered how to leverage its built-in schematics to generate components for specific use cases. We have explored in depth the workings of the Angular Material theming system, and then used this understanding to define custom themes and color palettes for our apps. Finally, we have seen how to use Angular Material's typographical styling and override it with a custom configuration.

Angular Material is very easy to get started with, but it can get fairly complicated quite quickly when you try to customize its behavior to your needs. Here's hoping that this article has shed some light on the complicated parts of Angular Material and makes it easier for you to adopt it in your projects! 🚀

Angular Material's guides are a great place to start learning various aspects of the library, and it helps to look directly at the source code to understand how Angular Material works under the hood. This great tool built by Mikel Bitson generates Angular Material color palettes for any base color - super useful when you are creating a custom theme from scratch.


I got a job offer, thanks in a big part to your teaching. They sent a test as part of the interview process, and this was a huge help to implement my own Node server.

This has been a really good investment!

Advance your career with newline Pro.

Only $30 per month for unlimited access to 60+ books, guides and courses!

Learn More

Deploying Next.js Application on Vercel, Heroku, and a Custom Static Server

In this post, we will be looking at deploying a Next.js app to different types of servers using different technologies: Vercel, Heroku, and a custom static server reached over SSH. We will go through the deployment setup on each, step by step, and show the code.

Some time ago, web developers shipped their applications to production by hand: they might use FTP or some other protocol to copy built application assets to the production server. This approach has a lot of downsides. Automatic deployment solves these problems by extracting the shipping process from development, making it possible to ship applications consistently, continually and automatically.

Deployment, in general, is all the processes required to make an application available for use. For a web app, a typical deployment usually consists of building the app and uploading its assets to a server (production or staging). Deployment is automatic if all those actions happen without human interaction - so if a developer has to build the application themselves, it's not automatic deployment. In our case, the deployment will consist of building an app and delivering its assets to a server. At the end of this post, you will know how to set up automatic deployments to Vercel, Heroku, and your own server via SSH from your GitHub repo.

We assume you have a GitHub account - it will be needed throughout the whole post, as we will use pull requests as triggers for deployments. We also assume you know what Next.js is and how to create applications with it. If you don't know it yet, you can learn it from the 5th chapter of the "Fullstack React with TypeScript" book, and for this tutorial you can use our app starter pack. For the last section of this tutorial, we assume you have a server with SSH access to it - you're going to need one so that GitHub and GitHub Actions can upload the app assets.

Vercel is a solution for deploying applications built with Next.js, from the creators of Next.js. As the official Next.js documentation puts it, it is "the easiest way to deploy Next.js to production". Let's try it.

To connect Vercel and start deploying, you're going to need a GitHub repository. It will allow Vercel to connect to the codebase and trigger a deployment on new commits in the master branch. If you already have a repository with an application built with Next.js, you can skip this section. Create a new repo at the "New" page; if you don't want to create a repository, you can fork ours with the app starter files. Clone the project repository to your local machine and open the project. If the repository is empty, add the application code, commit and push it.

A Vercel account allows you to connect your projects' repositories and monitor deployments, as well as see the deployment history. Create an account on the signup page; if you already have one, you can skip this section. Once you have created an account, you should have access to Vercel's Dashboard. The dashboard is like a control panel: here you will be able to see the last deployments of every application and the recent activity of any imported project.

Let's import the project repo. Find the "Import project" button and hit it. You will be redirected to the "Import" page. On that page, find the "Import Git Repository" section and click the "Continue" button in that section. You will see a form with an input that requires the URL of a git repository. Enter your project's repository URL there and submit the form. When it's done, Vercel will redirect you to GitHub.
There you will be asked by GitHub for permissions to the repository. Allow Vercel read and write access to the selected repository. It is possible to allow third-party applications to read and write every repository you have, but this is not recommended for security reasons: try to keep a third-party application's access as minimal as possible.

After you grant permissions to Vercel, GitHub will redirect you back to the "Import" page, where you will see the import settings. Keep the project name the same as the repository name to make it less ambiguous. The last two options may be useful if you have, for example, a custom build script. Let's say that to build the project you use some command other than the standard npm run build; in that case, you can override the default command with your own in the "Build and Output Settings" section. The same goes for the "Environment Variables" section: sometimes you might need to configure the build process from the outside. Usually this is done with environment variables passed via the command line. The most common example is the NODE_ENV variable, which configures what kind of build should be triggered - setting its value to production usually signals a production build that should be optimized. For a Next.js application, we don't need to configure anything except the project name.

When you have set it up, hit the "Deploy" button. You will see the congratulations screen with a "Visit" link leading to the freshly deployed app on a vercel.app domain. Hit this link to open and inspect the current deployment. And that's it - you just deployed an application and made it available to users!

Now, return to the Dashboard for a minute. The new project will appear in the list of your imported applications. Click the link to your project. On the project page, you will see the "Production Deployment" section, which contains information about the current production deployment. Below it, you should see a "Preview Deployments" section. This section contains non-production deployments, such as staging and testing deployments. Let's say you want to test some features in an environment very close to production, but don't want to ship an untested feature to production - preview deployments will help you do that.

By default, to create a preview deployment you need to create a pull request (or merge request) to the default branch of your repository. Vercel will analyze the state of the repo, and if there are pull requests to the default branch, it will deploy every new commit in those pull requests as a preview deployment. To test it, create a new branch in the repository, check it out locally, make some changes in your code, then commit and push them to the new branch. After that, create a pull request from the new branch to the default one. When it's done, the Vercel bot will automatically deploy this pull request as a preview and leave a comment right in the pull request. You won't even need to return to the Dashboard to inspect your project - the link to the preview will be in the bot's comment! And of course, the preview deployment will also appear in the list of preview deployments in the Dashboard.

Heroku is a container-based cloud platform for deploying and managing applications. It also allows you to automate deploys and trigger them by pushing to the repository's default branch. The first step is basically the same as in the previous section: you're going to need a GitHub repository. New commits in the master branch will trigger deployments on Heroku as well.
If you already have a repository with an application built with Next.js, you can skip this section. Create a new repo at the "New" page; if you don't want to create a repository, you can fork ours with the app starter files. Clone the project repository to your local machine and open the project. If the repository is empty, add the application code, commit and push it.

Heroku also allows you to monitor your connected apps and their activity. To connect Heroku with your GitHub repository, you're going to need a Heroku account. Go to the signup page and create one; if you already have an account, you can skip this section. Once you have an account on Heroku, you should have access to its Dashboard, which lists all your connected apps and services.

To create a new app, find the "New" button and hit it, then choose the "Create new app" option in the select. You will be redirected to the new app settings screen, where you can choose a name for your app and a region. The region can affect performance and download time: for example, for users in Europe, an app deployed to the US region might load a bit slower than for users in the US, because of the distance a request has to travel between user and server.

When that's done, add a new pipeline with the button below. A Heroku pipeline is a set of actions performed on your application. Let's say you don't want to just deploy an app, but to test it first and only then deploy - this set of testing and deploying actions is a pipeline. Heroku pipelines represent steps of the continuous delivery workflow. In this case you can select "Production", since we won't run any tests and just want to deploy the application.

After that, you will be asked about the deployment method, and there may be three options. The first one is convenient when you have a git repository and want to deploy the app right from the command line: if you have the Heroku CLI installed, there is a special command for deploying this way. But since we're using GitHub, select the "Connect to GitHub" method, then select the repository to connect to from the list below. You might be asked for repository permissions; again, try to keep third-party app access as minimal as possible. When you have granted permissions to Heroku, you can set up automatic deploys for a branch. Automatic deploys may be turned off by default, so don't forget to check the checkbox in this section. When you turn them on, select a branch to deploy from. By default it is master, but you can select any other branch in your repository. Optionally, check "Wait for CI to pass before deploy" to ensure your tests pass before the project goes to production, if there are any.

GitHub Actions is an automation workflow for building, testing, deployment, and other routines. Actions allow you to create a custom life cycle for your app, so you can set up code linting, code formatting, code checks and so on. They are like robots that receive messages and do some work. Actions are set up with YAML files that describe what to do and what triggers the action. To tell GitHub that there is an action to run, create a directory called .github - mind the dot in front of the name. This is the directory that contains all the actions and workflows for the repository. Inside it, create another directory called workflows. A workflow is a set of actions, so if you want to chain some actions together you can use a workflow. In the workflows directory, create a file called main.yml - this is the workflow itself. Open this file and paste the workflow code inside.
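The original listing isn't preserved here, but based on the line-by-line breakdown that follows, it likely resembled the "Deploy to Heroku" action's example workflow. Treat the action version, app name and email below as assumptions:

```yaml
# .github/workflows/main.yml - deploy to Heroku on every push to master (sketch)
name: Deploy

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12 # the "Deploy to Heroku" action
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: your-app-name # the name chosen in the Heroku dashboard
          heroku_email: your-email@example.com # the email of your Heroku account
```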
Let's break it down. The first line describes the name of this workflow: when GitHub runs it, this is the name you will see in the list in the workflow dashboard. The next directive is on. It describes which events should trigger this workflow - in this case, a push event to the master branch of this repo. So when someone pushes to the master branch, this workflow will be run.

For deployment to Heroku, there is an action called "Deploy to Heroku". Open its page and scroll to the "Getting Started" section; there you will see an example of the workflow setup, which is what we examine here. jobs contains all the work to do. In this case, there is only one job - build. This job runs on ubuntu-latest, the type of machine that runs the workflow. The steps directive describes the steps to perform; in the example, each step is a GitHub Action, run with the arguments described under the with directive. heroku_api_key should be generated and stored in the "Secrets" section on GitHub, and for this you're going to need the Heroku CLI.

If you already have the Heroku CLI installed, you can skip this section. The Heroku CLI makes it easy to create and manage your Heroku apps directly from the terminal - it even lets you deploy right from there. In this case, however, you need it for another reason: to generate the heroku_api_key for the repository's secrets section. To install the Heroku CLI, go to its page and select the OS that you use; note that different OSs use different installation methods. When it's done, check that there are no errors by running heroku --version. Then authenticate with heroku login: you'll be prompted to press any key, which takes you to your web browser to complete the login, and the CLI will then log you in automatically. Thus, you will be authenticated in your terminal whenever you use the heroku CLI command. Notice YOUR_EMAIL in its response - this should be the same as the one you set in heroku_email in main.yml.

To generate a new token, open your terminal, check that the Heroku CLI is installed, and run the token-creation command (e.g. heroku authorizations:create). In response, you will get a generated token. Copy its value and create a new GitHub secret with it: go to the "Secrets" section on GitHub (Settings → Secrets → New Secret), set HEROKU_API_KEY as the name and the generated token as the value, then save the secret. GitHub will use this value and replace ${{secrets.HEROKU_API_KEY}} with it at build time automatically.

In your package.json, update the start script so that the $PORT environment variable, which must be specified, is passed through - for example "start": "next start -p $PORT". When it's done, you should be able to trigger deploys by pushing changes to the master branch. Try to update the code and push to master. In the Dashboard, you will see a new app in the list of apps. Click on it and you will be redirected to an "Overview" page, where you should see the latest activity and all the settings for this project. Find the "Open App" button to visit your application and inspect it.

Sometimes third-party solutions won't work for you. It might happen for many reasons - cost, or security concerns - but it might happen. In that case, there is the option to deploy the application on your own server. In this section, we assume you have a server with SSH access to it; it will be required later. The first step is basically the same as in the previous sections: you're going to need a GitHub repository.
New commits in the master branch will trigger a deploy. If you already have a repository with an application built with Next.js, you can skip this section. Create a new repo at the "New" page; if you don't want to create a repository, you can fork ours with the app starter files. Clone the project repository to your local machine and open the project. If the repository is empty, add the application code, commit and push it.

You're going to deploy the app via SSH. For this, there is an action called "ssh deploy". It uses Node.js and is integrated via a YAML file, as other GitHub Actions are. This action deploys a specific directory from GITHUB_WORKSPACE to a folder on a server via rsync over SSH. The workspace is a directory created by the checkout action we used before.

Let's create a new file in the .github/workflows directory called custom.yml; this is the file that will describe your new workflow. In this file, write the name of the workflow and the events that should trigger it - this workflow will be triggered on every new push to the master branch. A detailed explanation of every line of this part can be found in the section above. Then describe the jobs and steps. First we tell GitHub to check out the code, which creates the workspace accessible via GITHUB_WORKSPACE. The second action sets up Node.js with version 12 (the LTS at the moment this post is being written). Then describe the build step: it installs all the dependencies, builds the project, and exports static files. next export allows you to export the app to static HTML, which can run standalone without the need for a Node.js server; it works by prerendering all pages to HTML. The reason we use npx is that we didn't install the Next CLI tools globally, so next wouldn't be found in the GitHub Actions runtime, which would cause an error; npx, on the other hand, executes a local package's binary.

The last step in the workflow tells GitHub Actions to use the ssh-deploy action and pass some environment variables. Values written as secrets.SOMETHING are requested from GitHub Secrets, and for this to work you need to create those secrets in the "Secrets" section of your GitHub repository. Create four new secrets in your project there (a consolidated sketch of the whole workflow file follows at the end of this post).

Connect to your server via SSH - you may be asked for a password if you are connecting for the first time. When connected, generate a new key pair. Keep the passphrase empty and specify the type of the key as RSA. You might want to give the key a unique name, just to make it easier to find later, so when the command line asks how to name the file, you can change it. Once generated, authorize the keys; otherwise, the server might not allow "ssh-deploy" to connect. Note the ~/.ssh/key-name - that's the full path to the key file, and it may vary depending on the file structure on your server. Now copy the private key's value and paste it as the value for SERVER_SSH_KEY in the "Secrets" section.

When everything's set, you can trigger a new deploy. Push some changes to the master branch, and GitHub will run the workflow, which will build the app, export it, and let ssh-deploy deliver it. In this post, we explained how to deploy your Next.js application using Vercel and Heroku, and how to deploy it to a custom server using SSH.
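As promised, here is a consolidated, hedged sketch of what custom.yml might contain, putting the steps above together. The action is easingthemes/ssh-deploy; the action versions, the exported "out" directory, and the secret names other than SERVER_SSH_KEY are assumptions - adapt them to whatever you created in the "Secrets" section:

```yaml
# .github/workflows/custom.yml - build, export and deploy over SSH (sketch)
name: Custom server deploy

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo; this creates GITHUB_WORKSPACE
      - uses: actions/checkout@v2
      # Set up Node.js 12 (LTS at the time of writing)
      - uses: actions/setup-node@v1
        with:
          node-version: 12
      # Install dependencies, build, and export static HTML
      - run: |
          npm install
          npm run build
          npx next export
      # Deliver the exported directory via rsync over SSH
      - uses: easingthemes/ssh-deploy@v2.1.5
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SERVER_SSH_KEY }}
          SOURCE: out/
          REMOTE_HOST: ${{ secrets.REMOTE_HOST }}
          REMOTE_USER: ${{ secrets.REMOTE_USER }}
          TARGET: ${{ secrets.REMOTE_TARGET }}
```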


newline Site Feature: Sync Gumroad Purchases (Beta)

It's now possible to sync your Gumroad purchases to newline. This feature is currently in the beta-testing phase. To sync, hop over to your Account Settings page and click the "Authenticate Gumroad" button. You'll be redirected to Gumroad, where you'll be asked to log in to your Gumroad account. Gumroad will then ask you whether you want to allow newline to access your Gumroad account - click "Authorize". Next, go to your newline library and you should see your purchases there, available for reading online.


Prelude to Vectors in the Rust Programming Language

In this post, we are going to explore in detail how to work with resizable arrays in Rust. Specifically, we will take a closer look at the Vector type, its syntax, and some use cases like filtering and transforming a collection.

In software development, we often face the need to deal with a list of objects or values - for example, enumerating words, ingesting series of numeric values, or parsing structured data from tables or data storage like CSV files or a database. Also referred to as collections, such data structures serve as containers of discrete values, offering a great facility for organising all kinds of dynamic data in a program. The Rust standard library includes several different kinds of collections, like static arrays, tuples, vectors, strings and hashmaps. Our focus here is on one of the more commonly used array-like types - the Vector type. Unlike arrays and tuples, collections of type Vector are dynamic, which means they can be changed at runtime, making them a versatile and convenient data type. We will explore several aspects of the Vector type in Rust, and finally we will take a moment to review how Rust's internal safety mechanisms protect the developer from performing potentially unsafe operations. This tutorial assumes you are familiar with the Rust language syntax and have a general understanding of how a program allocates memory (e.g. heap vs stack).

We often use terms like collections and arrays to describe structures of numbered lists of items. The vector type in Rust is one example of such a structure, and it is the most commonly used form of collection. It has the type Vec and is pronounced "vector". The basic structure of a vector can always be represented as a triple of three values: a pointer to its elements, its capacity, and its length. The basic nature of the Vec type also allows us to make use of several guarantees provided by the Rust runtime. For example, the pointer of a vector can never be null, as the Vector type is null-pointer optimized. If a vector is empty (contains no elements), or all of its elements are zero-sized, then Rust ensures that no memory will be allocated for the vector. The capacity() of a vector indicates the number of elements that can be added to it without re-allocation of memory - it can be seen as a sort of reserved or pre-allocated memory. You can learn more about the guarantees and memory specifics of the vector type in the documentation.

There are several ways to define a new vector. A vector can be initialized using the Vec::new function, which returns a new empty vector. Once created, the new vector variable can be marked as mutable (using the mut keyword) in order to be able to add and remove elements from it. Try it out for yourself. It is worth pointing out that we did not declare the type of elements we intend to add to the collection. Now let's see what happens if we declare the vector as we did above, but don't insert any elements into it. That statement alone will not compile, and the error message will be cannot infer type for type parameter "T". This happens because the Vec type uses generics to specify the type of elements that will be added to the vector collection. In the first example, we added elements to the vector, so the Rust compiler was able to infer the type of the variable vec to be Vec<&str>, as the elements being added to it are of type &str.
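Since the original listings aren't preserved, here is a hedged reconstruction of the two examples being discussed:

```rust
fn main() {
    // First example: the element type is inferred as Vec<&str>,
    // because we push values of type &str
    let mut vec = Vec::new();
    vec.push("hello");
    vec.push("world");
    println!("{:?}", vec);

    // Second example: no elements are ever added, so the compiler
    // cannot infer the type parameter T, and compilation fails with
    // `cannot infer type for type parameter "T"` if uncommented:
    // let vec2 = Vec::new();
}
```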
In the second example, merely initializing a new empty vector was not enough for the Rust compiler to determine what kind of elements we intend to store, thus causing a compiler error. To solve this, we can choose to explicitly specify the type of the vector collection during initialization, for example let vec: Vec<&str> = Vec::new();.

The syntax of the Vec::new function may seem a bit verbose, as we first need to initialize a mutable variable and only then add elements to it. Luckily, Rust also includes the vec! macro, which makes it easier to initialize new vectors by also providing the initial elements of the collection. We could rewrite our example to use the vec! macro instead of the Vec::new function, and since the initial elements of the vector are known upfront, this can be made even more concise by directly initializing the vector with them, e.g. let vec = vec!["hello", "world"];. Try it out for yourself.

Now that we know how to create a vector, let's look at some of the techniques we can use to access the contents (or elements) of a vector. The length of a vector corresponds to the number of elements it currently stores; we can obtain it using the len() function. The capacity of a vector is the number of elements it can hold without needing to reallocate additional memory. A vector normally stores its elements in a memory buffer, which can grow over time as new elements are added to it; by default, the capacity of the vector is adjusted automatically as we add elements to the collection. When we create a new empty vector, we can choose to define an initial capacity, essentially reserving the initial buffer size of the vector. This means that when new elements are added, the vector will not have to reallocate additional memory as long as there is remaining space in its buffer (capacity). If we just create a new empty vector, it has 0 elements and a capacity of 0, so adding an element requires the vector to first allocate some memory to increase its capacity to at least 1, in order to accommodate the incoming element. Of course, this works just fine and may not be a problem at all - but we can also create a new empty vector with an initial capacity for, say, 10 elements, using Vec::with_capacity(10). Try it out for yourself.

The vector type in Rust implements the Index trait, allowing us to directly access elements by index. A common source of bugs and security vulnerabilities is what is known as out-of-bounds access, i.e. trying to access an element outside the length of a vector. While very easy to use, direct access by index has a downside: we may accidentally request an element index which is out of bounds, which will cause the program to panic. To help with that, Rust offers an alternative in the Vec::get function, which returns a value of type Option instead, allowing us to gracefully handle this scenario and, as a result, improve the reliability of the program. Try it out for yourself.

A mutable vector can be changed by adding or removing elements. We do this using the Vec::push and Vec::pop functions: respectively, they either append an element to the end of a vector or remove the last element of a vector. The Vec::pop method also returns an Option value, which either holds the removed element or None if no elements were removed from the vector (e.g. when it was already empty). Try it out for yourself.
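A hedged sketch pulling these access patterns together (the values are illustrative):

```rust
fn main() {
    let mut fruits = vec!["apple", "banana"];

    // Length vs capacity
    println!("len = {}, capacity = {}", fruits.len(), fruits.capacity());

    // Direct indexing panics when out of bounds...
    println!("first = {}", fruits[0]);
    // ...while get() returns an Option we can handle gracefully
    match fruits.get(5) {
        Some(fruit) => println!("found {}", fruit),
        None => println!("index 5 is out of bounds"),
    }

    // push appends; pop removes the last element and returns an Option
    fruits.push("cherry");
    let last = fruits.pop(); // Some("cherry")
    println!("popped {:?}", last);
}
```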
In some situations it may be useful to update an element which already belongs to a vector. Try it out for yourself. You may notice that we are directly accessing an element by index; as we saw earlier, given an index outside the bounds of the vector, the application will panic. We can use the technique we showed earlier with the Vec::get method via its companion Vec::get_mut, which returns a mutable reference to an element if it exists, and rewrite the example in a safer way. Like get, the get_mut method returns an Option with a reference to the element at the given index. If the element doesn't exist (when the index is out of bounds), get_mut returns None; if the element exists, get_mut returns a mutable reference which we can use to update the value.

If we would like to perform a certain operation on each element of a vector, we can iterate through all the elements rather than accessing them one at a time. One way is to use a for loop. In that case, we are consuming the vector by executing the operation defined in the for loop block over each element. We could also limit the operation to just references to the elements of the collection, and using the same technique we can obtain mutable references to the elements, allowing us to make changes to the collection. Try it out for yourself.

Another powerful technique for accessing the elements of a vector is by means of an iterator, which we obtain using the Vec::iter method. Try it out for yourself. Generally speaking, Rust makes it easy to use iterators for almost everything; in the ergonomics of the language, it is almost preferred to use iterators instead of interacting with a vector directly. An example use of iterators is transforming the values of a collection from one type to another: given a collection of words, we can build a vector which holds the length of each word. Try it out for yourself. Iterators are a powerful concept in Rust, and they prove very useful when we are interested in obtaining a subset of a given collection: we can use the iterator's filter method to filter the elements of a vector. Try it out for yourself.

Nothing prevents our application from adding the same value to a vector multiple times. There are, however, circumstances when we may need to remove the duplicates from a collection - imagine, for example, that the vector is based on user-provided data and we are only interested in working with unique (non-repeating) values. This is easy to achieve using the Vec::dedup method. Once called on an instance of a vector, dedup works on that same instance and removes consecutively repeated elements. This means that for the deduplication logic to work as we expect, the vector needs to be sorted so that repeating elements follow each other. For example, [1, 3, 2, 3] will not work very well, because the repeating values 3 are not adjacent to each other; once the vector is sorted to [1, 2, 3, 3], we can use the Vec::dedup method to remove the repeating values. Vec::dedup needs the elements of the vector to implement the PartialEq trait in order for the comparison to work; this means it can also work for custom structs, as long as they implement PartialEq. Let's check an example (sketched below). Try it out for yourself. Here we declare the vector as mutable, since Vec::dedup updates the contents of the collection in place.
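A hedged sketch of the iterator and deduplication examples discussed above (the word list is illustrative):

```rust
fn main() {
    // Transform: word lengths via iter + map
    let words = vec!["sketch", "of", "vectors"];
    let lengths: Vec<usize> = words.iter().map(|w| w.len()).collect();
    println!("{:?}", lengths); // [6, 2, 7]

    // Subset: keep only short words via iter + filter
    let short: Vec<&&str> = words.iter().filter(|w| w.len() <= 2).collect();
    println!("{:?}", short); // ["of"]

    // Deduplicate: sort first so repeats are adjacent, then dedup in place
    let mut numbers = vec![1, 3, 2, 3];
    numbers.sort();
    numbers.dedup();
    println!("{:?}", numbers); // [1, 2, 3]
}
```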
A common type for representing resizable arrays is the Vector type. It is one of the more versatile collection types, enabling a great deal of flexibility when accessing and working with its elements. In this post, we saw how to get started with using the vector type for common operations like filtering and transforming a collection of elements. In addition, we discussed some of the safety protections provided by the Rust runtime in scenarios like guarding against out-of-bounds access or null pointers. You may also find it useful to explore the Vec specification in the Rust documentation, where you can read about all the available methods along with additional sample use cases and code snippets. You can also check one of my other posts, which covers additional use cases for using iterators with vectors in Rust.