Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Deploying a Node.js and PostgreSQL Application to Heroku

Serving a web application to a global audience requires deploying, hosting and scaling it on reliable cloud infrastructure. Heroku is a cloud platform as a service (PaaS) that supports many server-side languages (e.g., Node.js, Go, Ruby and Python), monitors application status in a customizable dashboard and maintains an add-ons ecosystem for integrating tools/services such as databases, schedulers, search engines, and document/image/video processors. Although it is built on AWS, Heroku is simpler to use than AWS. Heroku automatically provisions resources and configures low-level infrastructure so developers can focus exclusively on their application without the additional headache of manually setting up each piece of hardware and installing an operating system, runtime environment, etc.

When deploying to Heroku, Heroku's build system packages the application's source code and dependencies together with a language runtime using a buildpack and slug compiler to generate a slug, which is a highly optimized and compressed version of your application. Heroku loads the slug onto a lightweight container called a dyno. Depending on your application's resource demands, it can be scaled horizontally across multiple concurrent dynos. These dynos run on a shared host, but the dynos responsible for running your application are isolated from dynos running other applications.

Initially, your application will run on a single web dyno, which serves your application to the world. If a single web dyno cannot sufficiently handle incoming traffic, then you can always add more web dynos. For requests that take longer than 500 ms to complete, such as uploading media content, consider delegating this expensive work as a background job to a worker dyno. Worker dynos process these jobs from a job queue and run asynchronously to web dynos, freeing up the resources of those web dynos.

Below, I'm going to show you how to deploy a Node.js and PostgreSQL application to Heroku. First, let's download the Node.js application by cloning the project from its GitHub repository:

Let's walk through the architecture of our simple Node.js application. It is a multi-container Docker application that consists of three services: an Express.js server, a PostgreSQL database and pgAdmin. As a multi-container Docker application orchestrated by Docker Compose, the PostgreSQL database and pgAdmin containers are spun up from the postgres and dpage/pgadmin4 images respectively. These images do not need any additional modifications. ( docker-compose.yml )

The Express.js server, which resides in the api subdirectory, connects to the PostgreSQL database via the pg PostgreSQL client. The module api/lib/db.js defines a Database class that establishes a reusable pool of clients upon instantiation for efficient memory consumption. The connection string URI follows the format postgres://[username]:[password]@[host]:[port]/[db_name], and it is accessed from the environment variable DATABASE_URL. Anytime a controller function (the callback argument of the methods app.get, app.post, etc.) calls the query method, the server connects to the PostgreSQL database via an available client from the pool. Then, the server queries the database, directly passing the arguments of the query method to the client.query method. Once the database sends the requested data back to the server, the client is released back to the pool, available for the next request to use.
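A minimal sketch of the Database class just described (an approximation, not the project's actual file; the information_schema query in getAllTables is an assumption):

```js
// api/lib/db.js (sketch) -- approximates the class described above
const { Pool } = require('pg');

class Database {
  constructor() {
    // A reusable pool of clients, created once per server process
    this.pool = new Pool({ connectionString: process.env.DATABASE_URL });
  }

  // Controller functions pass their SQL and parameters straight through;
  // pool.query checks out an available client and releases it when done
  query(text, params) {
    return this.pool.query(text, params);
  }

  // Low-level information about the tables in the database (query is an assumption)
  getAllTables() {
    return this.pool.query(
      "SELECT * FROM information_schema.tables WHERE table_schema = 'public'"
    );
  }
}

module.exports = new Database();
```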
Additionally, there's a getAllTables method for retrieving low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels. ( api/lib/db.js )

The table cp_squirrels is seeded with records from the 2018 Central Park Squirrel Census dataset downloaded from the NYC Open Data portal. The dataset, downloaded as a CSV file, contains the fields obs_date (observation date) and lat_lng (coordinates of observation) with values that are not compatible with the PostgreSQL data types DATE and POINT respectively. Instead of directly copying the contents of the CSV file to the cp_squirrels table, copy from the output of a GNU awk ("gawk") script. This script... ( db/create.sql )

Upon the initialization of the PostgreSQL database container, this SQL file is run by adding it to the docker-entrypoint-initdb.d directory. ( db/Dockerfile )

This server exposes a RESTful API with two endpoints: GET /tables and POST /api/records. The GET /tables endpoint simply calls the db.getAllTables method, and the POST /api/records endpoint retrieves data from the PostgreSQL database based on a query object sent within the incoming request. To bypass CORS restrictions for clients hosted on a different domain (or running on a different port on the same machine), all responses must have the Access-Control-Allow-Origin header set to the allowable domain ( process.env.CLIENT_APP_URL ) and the Access-Control-Allow-Headers header set to Origin, X-Requested-With, Content-Type, Accept. ( api/index.js )

Notice that the Express.js server requires three environment variables: CLIENT_APP_URL, PORT and DATABASE_URL. These environment variables must be added to Heroku, which we will do later on in this post.

The Dockerfile for the Express.js server instructs how to build the server's Docker image. It automates the process of setting up and running the server. Since the server must run within a Node.js environment and relies on several third-party dependencies, the image must be built upon the node base image and install the project's dependencies before running the server via the npm start command. ( api/Dockerfile )

However, because the filesystem of a Heroku dyno is ephemeral, volume mounting is not supported. Therefore, we must create a new file named Dockerfile-heroku that is dedicated only to the deployment of the application to Heroku and does not rely on a volume. ( api/Dockerfile-heroku )

Unfortunately, you cannot deploy a multi-container Docker application via Docker Compose to Heroku. Therefore, we must deploy the Express.js server to a web dyno with Docker and separately provision a PostgreSQL database via the Heroku Postgres add-on. To deploy an application with Docker, you must either push a pre-built image to the Heroku Container Registry or have Heroku build an image from a heroku.yml manifest file. For this tutorial, we will deploy the Express.js server to Heroku by building a Docker image with heroku.yml and deploying this image to Heroku.

Let's create a heroku.yml manifest file inside of the api subdirectory. Since the Express.js server will be deployed to a web dyno, we must specify the Docker image to build for the application's web process, which the web dyno belongs to: ( api/heroku.yml )

Because our api/Dockerfile already has a CMD instruction, which specifies the command to run within the container, we don't need to add a run section. Let's add a setup section, which defines the environment's add-ons and configuration variables during the provisioning stage.
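A sketch of what the finished manifest might look like (the add-on name DATABASE and the Dockerfile-heroku path follow this walkthrough; verify them against your own project):

```yaml
# api/heroku.yml (sketch)
setup:
  addons:
    - plan: heroku-postgresql:hobby-dev
      as: DATABASE
build:
  docker:
    web: Dockerfile-heroku
```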
Within this section, add the Heroku PostgreSQL add-on. Choose the free "Hobby Dev" plan and give it the unique name DATABASE. This unique name is optional, and it is used to distinguish it from other Heroku PostgreSQL add-ons. Fortunately, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable, which contains the database connection information for this newly provisioned database, will be made available to our application.

Check if your machine already has the Heroku CLI installed. If not yet installed, then install the Heroku CLI. For macOS, it can be installed via Homebrew: For other operating systems, follow the instructions here.

After installation, for the setup section of the heroku.yml manifest file to be recognized and used when creating a Heroku application, switch to the beta update channel and install the heroku-manifest plugin: Without this step, the PostgreSQL database add-on will not be provisioned from the heroku.yml manifest file. You would have to manually provision the database via the Heroku dashboard or the heroku addons:create command. Once installed, close out the terminal window and open a new one for the changes to take effect.

Note: To switch back to the stable update stream and uninstall this plugin:

Now, authenticate yourself by running the following command: This command prompts you to press any key to open a login page within a web browser. Enter your credentials within the login form. Once authenticated, Heroku CLI will automatically log you in.

Note: If you want to remain within the terminal, as in entering your credentials directly within the terminal, then add the -i option after the command.

Within the api subdirectory, create a Heroku application with the --manifest flag: This command automatically sets the stack of the application to container and sets the remote repository of the api subdirectory to heroku. When you visit the Heroku dashboard in a web browser, this newly created application is listed under your "Personal" applications.

Set the configuration variable CLIENT_APP_URL to a domain that should be allowed to send requests to the Express.js server.

Note: The PORT environment variable is automatically exposed by the web dyno for the application to bind to. As previously mentioned, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable will automatically be exposed.

Under the application's "Settings" tab in the Heroku Dashboard, you can find all configuration variables set for your application under the "Config Vars" section.

Create a .gitignore file within the api subdirectory. ( api/.gitignore ) Commit all the files within the api subdirectory: Push the application to the remote Heroku repository. The application will be built and deployed to the web dyno. Ensure that the application has successfully deployed by checking the logs of this web dyno: If you visit https://<application-name>.herokuapp.com/tables in your browser, then a successful response is returned and printed to the browser.

In case the PostgreSQL database is not provisioned, manually provision it using the following command: Then, restart the dynos for the DATABASE_URL environment variable to be available to the Express.js server at runtime.

Deploy your own containerized applications to Heroku!
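For reference, a condensed sketch of the CLI sequence described above (<app-name> and the CLIENT_APP_URL value are placeholders):

```bash
heroku update beta
heroku plugins:install @heroku-cli/plugin-manifest

heroku login                          # add -i to log in within the terminal
heroku create <app-name> --manifest   # run inside the api subdirectory
heroku config:set CLIENT_APP_URL=https://your-client.example.com

git push heroku master                # build and deploy to the web dyno
heroku logs --tail                    # verify the deployment

# Only if the database was not provisioned from heroku.yml:
heroku addons:create heroku-postgresql:hobby-dev --as DATABASE
heroku restart
```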



React Query Builder - The Ultimate Querying Interface

For businesses looking to optimize their operations, data influences the decisions being made. For scientists looking to validate their hypotheses, data influences the conclusions being arrived at. Regardless, the sheer amount of data collected and harnessed from various sources presents the challenge of identifying rising trends and interesting patterns hidden within this data. If the data is stored within a SQL database, such as PostgreSQL, querying data with the expressive power of the SQL language unlocks the data's underlying value.

Creating interfaces that fully leverage the constructs of SQL in analytics dashboards can be difficult if done from scratch. With a library like React Query Builder, which contains a query builder component for fetching and exploring rows of data with the exact same query and filter rules provided by the SQL language, we can develop flexible, customizable interfaces for users to easily access data from their databases.

Although there are open-source administrative tools like pgAdmin, these tools cannot be integrated directly into a custom analytics dashboard (unless embedded within an iframe). Additionally, you would need to manage more user credentials and permissions, and these tools may be considered too overwhelming or technical for users who aren't concerned with advanced features, such as a procedural language debugger, or intricate back-end and database configurations.

By default, the <QueryBuilder /> component from the React Query Builder library contains a minimal set of controls only for querying data with pre-defined rules. Once the requested data is queried, this data can then be summarized by rendering it within a data visualization, such as a table or a line graph. Below, I'm going to show you how to integrate the React Query Builder library into your application to gain insights into your data.

To get started, scaffold a basic React project with the Create React App and TypeScript boilerplate template. Inside of this project's root directory, install the react-querybuilder dependency: If you happen to run into the following TypeScript error...

Could not find a declaration file for module 'react'. '<project-name>/node_modules/react/index.js' implicitly has an 'any' type.

... then add the "noImplicitAny": false configuration under compilerOptions inside of tsconfig.json to resolve it.

React Query Builder composes a query from the rules or groups of rules set within the query builder interface. This query, in JSON form, should be sent to a server-side application that's connected to a PostgreSQL database to properly format the query into a SQL statement and execute the statement to fetch records of data from the database. For this tutorial, we will send this query to an Express.js API running within a multi-container Docker application. This application also runs a PostgreSQL database and pgAdmin in separate containers. The API connects to the PostgreSQL database and defines a POST route for processing the query.

With Docker Compose, you can execute a single command to spin up all of these services at once on a single host machine! To run the entire back-end, you don't need to manually install PostgreSQL or pgAdmin on your machine; you only need Docker installed. Plus, if you decide to run other services, such as NGINX or Redis, then you can add them within the docker-compose.yml configuration file.
Clone the following repository: Inside the root of this cloned project, add a .env.development file with the following environment variables: To run the server-side application, execute the following command:

This command starts up the server-side application. When you re-build and restart the application with this same command, it will do so from scratch with the latest images. It's up to you if you want to leverage caching to expedite the build and start-up processes. Nevertheless, let's break down what this command does:

For each docker-compose command, pass a set of environment variables via the --env-file option. This approach to setting environment variables allows these variables to be accessed within the docker-compose.yml file and easily works in a CI/CD pipeline. Since the .env.<environment> files are typically not pushed to the remote repository (i.e., ignored by Git), especially for public-facing projects, when deploying this project to a cloud platform, the environment variables set within the platform's dashboard function the same way as those set by the --env-file option.

The PostgreSQL database contains only one table named cp_squirrels that is seeded with 2018 Central Park Squirrel Census data downloaded from the NYC Open Data portal. Each record represents a sighting of an eastern gray squirrel in New York City's Central Park in the year 2018.

Let's verify that pgAdmin is running by visiting localhost:5050 in the browser. Here, you will be presented with a log-in page. Enter your credentials ( NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD ) into the log-in form. On the pgAdmin welcome page, right-click on "Servers" in the "Browser" tree control (in the left pane) and in the dropdown, click Create > Server. Under "General," set the server name to nyc_squirrels. Under "Connection," set the host name to nycsc-pg-db, the container name of our PostgreSQL service; it is where our PostgreSQL database is virtually hosted on our local machine. Set the username and password to the values of NYCSC_PGADMIN_EMAIL and NYCSC_PGADMIN_PASSWORD respectively. Save those server configurations.

Wait for pgAdmin to connect to the PostgreSQL database. Once connected, it should appear under the "Browser" tree control. Right-click on the database ( nyc_squirrels ) in the "Browser" tree control and in the dropdown, click the Query Tool option. Inside of the query editor, type a simple SQL statement to verify that the database has been properly seeded: This statement should return the first ten records of the cp_squirrels table.

Let's verify that the Express.js API is running by visiting localhost:<NYCSC_API_PORT>/tables in the browser. The browser should display low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels.

Great! With the server-side working as intended, let's turn our attention back to integrating the React Query Builder component into the client-side application. Inside of our Create React App project's src/App.tsx file, import the <QueryBuilder /> component from the React Query Builder library. At a minimum, this component accepts two props: fields, the list of fields that can be queried, and onQueryChange, a callback invoked whenever the query changes. This is what the query builder looks like without any styling and with only these two props passed to the <QueryBuilder /> component: This probably doesn't make much sense, so let's immediately jump into a basic example to better understand the capabilities of this component.
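A minimal sketch of that starting point (the two field definitions are illustrative; depending on your react-querybuilder version, QueryBuilder may be a default or named export):

```tsx
// src/App.tsx (sketch)
import React from 'react';
import QueryBuilder, { RuleGroupType } from 'react-querybuilder';

// Illustrative fields; the full list mirrors the cp_squirrels columns
const fields = [
  { name: 'x', label: 'X' }, // longitude
  { name: 'y', label: 'Y' }, // latitude
];

// Invoked every time the query changes
const logQuery = (query: RuleGroupType) => console.log(query);

const App = () => <QueryBuilder fields={fields} onQueryChange={logQuery} />;

export default App;
```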
Let's make the following adjustments to the src/App.tsx file to create a very basic query builder: Open the application within your browser. The following three-element component is shown in the browser:

The first element is the combinator selector, which is a <select /> element that contains two options: AND and OR. These options correspond to the AND and OR operators of a SQL statement's WHERE clause.

The second element is the add rule action, which is a <button /> element ( +Rule ) that when pressed will add a rule. If you press this button, then a new rule is rendered beneath the initial query builder component: A rule consists of a field, an operator and a value editor, and it corresponds to a condition specified in a SQL statement's WHERE clause. The field <select /> element lists all of the fields passed into the fields prop. Notice that the label of the field is shown in this element. The operator <select /> element lists all of the possible comparison/logical operators that can be used in a condition. Lastly, the value editor <input /> element contains what the field will be compared to. For example, if we type -73.9561344937861 into the <input /> field, then the condition that will be specified in the WHERE clause is X = -73.9561344937861. Basically, this will fetch all squirrel sightings located at the longitudinal value of -73.9561344937861.

With only one rule, the combinator selector is not applicable. However, if we press the add rule action button again, another rule will be rendered, and the combinator selector will become applicable. With two rules, two conditions are specified and combined with the AND operator: X = -73.9561344937861 AND Y = 40.7940823884086.

The third element is the add group action, which is a <button /> element ( +Group ) that when pressed will add an empty group of rules. If you press this button, then a new group is rendered beneath whatever has already been rendered in the query builder component: Currently, there are no rules within the newly created group. When we add two new rules to this group by pressing its add rule action button twice and change the value of its combinator selector to OR, like so: The two rules within this new group are combined together similar to placing parentheses around certain conditions in a WHERE clause to give them a higher priority during evaluation. For the above case, the overall condition specified to the WHERE clause would be X = -73.9561344937861 AND Y = 40.7940823884086 AND (X = -73.9688574691102 OR Y = 40.7837825208444).

A total of eight fields are defined. Essentially, they are based on the columns of the cp_squirrels table. For each field, the name property corresponds to the actual column name, and the label property corresponds to a more presentable column title that is shown in the field <select /> element of each rule.

If you look into the developer tools console, then you will see many query objects logged to the console: Every single action performed on the query builder that changes the query will invoke the logQuery function, which prints the query to the console. If we import the formatQuery function from the react-querybuilder library and call it inside of logQuery with the query, then we can format the query in many different ways.
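A sketch of how logQuery might call formatQuery (the 'sql' output format shown here is one of several the library supports):

```tsx
import { RuleGroupType, formatQuery } from 'react-querybuilder';

const logQuery = (query: RuleGroupType) => {
  console.log(query);                     // the raw query object
  console.log(formatQuery(query, 'sql')); // e.g., (x = '-73.9561344937861')
};
```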
For now, let's format the query to a SQL WHERE clause: ( src/App.tsx ) If we modify any of the controls' values, then both the query (in its raw object form) and its formatted string (as a condition of a WHERE clause) are printed to the console:

With the fundamentals out of the way, let's focus on sending the query to our Express.js API to fetch data from our PostgreSQL database. Inside of src/App.tsx, let's add a "Send Query" button below the <QueryBuilder /> component:

Note: The underscore prefix of the _evt argument indicates an unused argument.

When the user clicks this button, the client will send the most recent query to the /api/records endpoint of the Express.js API. This endpoint takes the query, formats it into a SQL statement, executes this SQL statement and responds with the result table. We will need to store the query inside a state variable to allow other functions, such as sendQuery, within the <App /> component to access the query. This changes our uncontrolled component to a controlled component. ( src/App.tsx )

Anytime onQueryChange is invoked, the setUpdateQuery method will update the value of the updateQuery variable, which must adhere to the type RuleGroupType. Update the sendQuery function to send updateQuery to the /api/records endpoint and log the data in the response. ( src/App.tsx )

Inside of the query builder, if we want to retrieve squirrel sightings found at the coordinates (40.7940823884086, -73.9561344937861), then create two rules: one for X (longitude) and one for Y (latitude). When we press the "Send Query" button, the result table (in JSON) is printed to the console: Only one squirrel sighting was observed at that particular set of coordinates.

Let's display the fetched records within a simple table: ( src/App.tsx ) Press the "Send Query" button again. The result table (with only one record) should be displayed within a table. The best part is you can add other visualization components to display your fetched data. The sky's the limit!

Click here for the final version of this project. Visit the React Query Builder documentation to learn more about how you can customize it to your application's needs.
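Pulling those pieces together, a hedged sketch of the controlled component and the sendQuery handler (the fetch URL assumes the API is proxied or same-origin, the request body shape is an assumption, and type details vary across react-querybuilder versions):

```tsx
// src/App.tsx (sketch)
import React, { useState } from 'react';
import QueryBuilder, { RuleGroupType } from 'react-querybuilder';

const fields = [
  { name: 'x', label: 'X' }, // longitude (illustrative)
  { name: 'y', label: 'Y' }, // latitude (illustrative)
];

const App = () => {
  // Storing the query in state turns <QueryBuilder /> into a controlled component
  const [updateQuery, setUpdateQuery] = useState<RuleGroupType>({
    combinator: 'and',
    rules: [],
  });

  const sendQuery = async (_evt: React.MouseEvent) => {
    // POST the raw query object; the API formats and executes the SQL
    const res = await fetch('/api/records', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: updateQuery }),
    });
    console.log(await res.json()); // the result table, in JSON
  };

  return (
    <div>
      <QueryBuilder fields={fields} query={updateQuery} onQueryChange={setUpdateQuery} />
      <button onClick={sendQuery}>Send Query</button>
    </div>
  );
};

export default App;
```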


ffmpeg - Thumbnail and Preview Clip Generation (Part 2)

Disclaimer - If you are unfamiliar with FFmpeg, then please read this blog post before proceeding.

When you upload a video to a platform such as YouTube, you can select and add a custom thumbnail image to display within its result item. Amongst the many recommended videos, a professionally-made thumbnail captures the attention of undecided users and improves the chances of your video being played. At a low level, a thumbnail consists of an image, a title and a duration (placed within a faded black box and fixed to the lower-right corner):

To generate a thumbnail from a video with ffmpeg: Let's test the drawtext filter by extracting the thumbnail image from the beginning of the video and writing "Test Text" to the center of this image. This thumbnail image will be a JPEG file. Notice that the drawtext filter accepts the parameters text, fontcolor, fontsize, x and y for configuring it: The parameters are delimited by a colon. To see a full list of drawtext parameters, click here.

Now that we've covered the basics, let's add a duration to this thumbnail: Unfortunately, there's no convenient variable like w or tw for accessing the input's duration. Therefore, we must extract the duration from the input's information, which is outputted by the -i option. 2>&1 redirects standard error ( 2 for stderr ) to standard output ( 1 for stdout ). We pipe the information outputted by the -i option directly to grep to search for the line containing the text "Duration" and pipe it to cut to extract the duration (i.e., 00:00:10 for ten seconds) from this line. This duration is stored within a variable DURATION so that it can be injected into the text passed to drawtext.

Here, we use two drawtext filters to modify the input media: one for writing the title text "Test Text" and one for writing the duration "00:00:10". The filters are comma-delimited. To place the duration within a box, provide the box parameter and set it to 1 to enable it. To set the background color of this box, provide the boxcolor parameter.

Note: Alternatively, you could get the video's duration via the ffprobe command.

Let's tidy up this thumbnail by substituting the placeholder title with the actual title, uppercasing this title, changing the font to "Open Sans" and moving the duration box to the bottom-right corner. Like the duration, the title must also be extracted from the input media's information. To uppercase every letter in the title, place the ^^ symbol of Bash 4 at the end of the title's variable via parameter expansion ( ${TITLE^^} ). Since Bash is required for the uppercasing, let's place these commands inside of a .sh file beginning with a Bash shebang, which determines how the script will be executed. To find the location of the Bash interpreter for the shebang, run the following command: ( thumbnail.sh )

To specify a font weight for a custom font, reference that font weight's file as the fontfile. Don't forget to replace <username> with your own username! Additionally, several changes were made to the thumbnail box. The box color has a subtle opacity of 0.625. This number (any number between 0 and 1) follows the @ in the boxcolor. A border width of 8px provides a bit of spacing between the edges of the box and the text itself.

Note: If you run into a bash: Bad Substitution error, update Bash to version 4+ and verify the Bash shebang correctly points to the Bash executable.

When you hover over a recommended video's thumbnail, a brief clip appears and plays to give you an idea of what the video's content is.
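Before moving on to preview clips, here is a consolidated sketch of the thumbnail steps above (a rough approximation, not the post's exact thumbnail.sh; the font path, font sizes, output name and the title-extraction pipeline are assumptions):

```bash
#!/bin/bash
INPUT=./Big_Buck_Bunny_360_10s_30MB.mp4
FONT=/Users/<username>/Library/Fonts/OpenSans-Bold.ttf

# Extract the duration (e.g., 00:00:10) from the info printed by -i;
# 2>&1 redirects stderr, where ffmpeg writes this info, to stdout
DURATION=$(ffmpeg -i "$INPUT" 2>&1 | grep "Duration" | cut -d ' ' -f 4 | cut -d '.' -f 1)

# Extract the title from the input's metadata (pipeline is an assumption)
TITLE=$(ffmpeg -i "$INPUT" 2>&1 | grep -m 1 "title" | cut -d ':' -f 2)

# One frame from the start of the video; two comma-delimited drawtext filters
# write the uppercased title (centered) and the duration (boxed, lower right)
ffmpeg -i "$INPUT" -vframes 1 -vf "\
drawtext=fontfile=$FONT:text='${TITLE^^}':fontcolor=white:fontsize=32:x=(w-tw)/2:y=(h-th)/2,\
drawtext=fontfile=$FONT:text='$DURATION':fontcolor=white:fontsize=20:box=1:boxcolor=black@0.625:boxborderw=8:x=w-tw-10:y=h-th-10" \
thumbnail.jpg
```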
With the ffmpeg command, generating a clip from a video is relatively easy. Just provide a starting timestamp via the -ss option (from the original video, -ss seeks until it reaches this timestamp, which will serve as the point the clip begins at) and an ending timestamp via the -to option (the point in the original video at which the clip should end). Because video previews on YouTube are three seconds long, let's extract a three-second segment starting from the four-second mark and ending at the seven-second mark.

Since the clip lasts for only a few seconds, we must re-encode the video (exclude -c copy ) to accurately capture instances when no keyframes exist. To clip a video without re-encoding, ffmpeg must capture a sufficient number of keyframes from the video. Since MP4s are encoded with the H.264 video codec ( h264 (High) is stated under the video's metadata printed by ffmpeg -i <input> ), if we assume that there are 250 frames between any two keyframes ("a GOP size of 250"), then for the ten-second Big Buck Bunny video with a frame rate of 30 fps, there is one keyframe every eight to nine seconds. Clipping a video of less than nine seconds with -c copy results in no keyframes being captured, and thus, the outputted clip contains no video ( 0 kB of video).

Eight Second Clip (with -c copy ): Nine Second Clip (with -c copy ):

Note: Alternatively, the -t option can be used in place of the -to option. With the -t option, you must specify the duration rather than the ending timestamp. So instead of 00:00:07 with -to, it would be 00:00:03 with -t for a three-second clip.

Suppose you want to add your brand's logo, custom-made title graphics or watermark to the thumbnail. To overlay such an image on top of a thumbnail, pass this image as an input file via the -i option and apply the overlay filter. Position the image on top of the thumbnail accordingly with the x and y parameters. ( thumbnail.sh )

Passing multiple inputs (in this case, a video and a watermark image) requires the -filter_complex option in place of the -vf option. The main_h and overlay_h variables represent the main input's height (from the input video) and the overlay's height (from the input watermark image) respectively. Here, we place the watermark image in the lower-left corner of the thumbnail.

The watermark image looks a bit large compared to the other elements on the thumbnail. Let's scale down the watermark image to half its original size by scaling it down before any of the existing chained filters are executed. ( thumbnail.sh )

To scale the watermark image to half its size, we must explicitly tell the scale filter to only scale this image and not the video. This is done by prepending [1:v] to the scale filter to have the scale filter target our second input -i ./watermark-ex.png. The iw and ih variables represent the watermark image's width and height respectively. Once the scaling is done, the scaled watermark image is outputted to ovrl, which can be referenced by other filters for consumption as a filter input. Because the overlay filter takes two inputs, an input video and an input image overlay, we prepend the overlay filter with these inputs: [0:v] for the first input -i ./Big_Buck_Bunny_360_10s_30MB.mp4 and [ovrl] for our scaled watermark image.

Imagine having a large repository of videos that needs to be processed and uploaded during continuous integration. Write a Bash script to automate this process.
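As a starting point for such a script, a hedged sketch of the scale-then-overlay step discussed above (output name and 10px margins are illustrative):

```bash
# Scale the watermark (input 1) to half size, label it [ovrl], then overlay
# it on the video frame (input 0) in the lower-left corner
ffmpeg -i ./Big_Buck_Bunny_360_10s_30MB.mp4 -i ./watermark-ex.png \
  -filter_complex "[1:v]scale=iw/2:ih/2[ovrl];[0:v][ovrl]overlay=x=10:y=main_h-overlay_h-10" \
  -vframes 1 thumbnail.jpg
```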


ffmpeg - Editing Audio and Video Content (Part 1)

Online streaming and multimedia content platforms garner a large audience and consume a disproportionate amount of bandwidth compared to other types of platforms. These platforms rely on content creators to upload, share and promote their videos and music. To process and polish video and audio files, both professionals and amateurs typically resort to using interactive software, such as Adobe Premiere. Such software features many tools to unleash the creativity of its users, but each comes with its own set of entry barriers (learning curve and pricing) and unique workflows for editing tasks. For example, in Adobe Premiere, to manually concatenate footage together, you create a nested sequence, which involves several steps of creating sequences and dragging and dropping clips into a workspace's timeline. If you produce lots of content weekly for a platform such as YouTube and work on a tight schedule that leaves no extra time for video editing, then you may consider hiring a devoted video editor to handle the video editing for you.

Fortunately, you can develop a partially autonomous workflow for video editing by offloading certain tedious tasks to FFmpeg. FFmpeg is a cross-platform, open-source library for processing multimedia content (e.g., videos, images and audio files) and converting between different video formats (e.g., MP4 to WebM). Commonly, developers use FFmpeg via the ffmpeg CLI tool, but there are language-specific bindings written for FFmpeg to import it as a package/dependency into your projects. With ffmpeg, Bash scripts can automate your workflow with simple, single-line commands, whether it is making montages, replacing a video's audio with stock background music or streamlining bulk uploads. This either significantly reduces or completely eliminates your dependence on a user interface to manually perform these tasks by moving around items, clicking buttons, etc. Below, I'm going to show you...

Some operating systems already have ffmpeg installed. To check, simply type ffmpeg into the terminal. If the command is already installed, then the terminal prints a synopsis of ffmpeg. If ffmpeg is not yet installed on your machine, then visit the FFmpeg website, navigate to the "Download" page, download a compiled executable (compatible with your operating system) and execute it once the download is complete.

Note: It is recommended to install the stable build to avoid unexpected bugs. Alternatively...

For extensive documentation, enter the command man ffmpeg, which summons the manual pages for the ffmpeg command:

For this blog post, I will demonstrate the versatility of ffmpeg using the Big Buck Bunny video, an open-source, animated film built using Blender. Because downloading from the official Big Buck Bunny website might be slow for some end users, download the ten-second Big Buck Bunny MP4 video ( 30 MB, 640 x 360 ) from Test Videos. The wget CLI utility downloads files from the web. Essentially, this command downloads the video to the current directory, and this downloaded video is named Big_Buck_Bunny_360_10s_30MB.mp4. The -c option tells wget to resume an interrupted download from the most recent download position, and the -O option tells wget to download the file to a location of your choice and customize the name of the downloaded file.

The ffmpeg command follows the syntax: For a full list of options supported by ffmpeg, consult the documentation. Square brackets and curly braces indicate optional items.
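For reference, the general synopsis as given in ffmpeg's manual:

```
ffmpeg [global_options] {[input_file_options] -i input_url} ... {[output_file_options] output_url} ...
```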
Items grouped within square brackets are not required to be mutually exclusive, whereas items grouped within curly braces are required to be mutually exclusive. For example, you can provide the -i option with a path of the input file ( infile ) to ffmpeg without any infile options. However, to provide any outfile option, ffmpeg must be provided the path of the output file ( outfile ).

To specify an input media file, provide its path to the -i option. Unlike specifying an input media file, specifying an output media file does not require an option; it just needs to be the last argument provided to the ffmpeg command.

To print information about a media file, run the following command: Just providing an input media file to the ffmpeg command displays its details within the terminal. Here, the Metadata contains information such as the video's title ("Big Buck Bunny, Sunflower version") and encoder ("Lavf54.20.4"). The video runs for approximately ten and a half minutes at 30 FPS. To strip away the FFmpeg banner information (i.e., the FFmpeg version) from the output of this command, provide the -hide_banner option. That's much cleaner!

To convert a media file to a different format, provide the outfile path (with the extension of the format). For example, to convert an MP4 file to a WebM file...

Note: Depending on your machine's hardware, you may need to be patient for large files!

To find out all the formats supported by ffmpeg, run the following command:

To reduce the amount of bandwidth consumed by users watching your videos on a mobile browser, or to save space on your hard/flash drive, compress your videos by: Here, we specify a video filter with the -vf option. We pass a scale filter to this option that scales down the video to a quarter of its original width and height. The original aspect ratio is not preserved.

Note: To preserve the aspect ratio, set either the target width or height to -1 (i.e., scale=360:-1 sets the width to 360px and the height to a value calculated based on this width and the video's aspect ratio).

The output file is less than 100 KBs! Here, we specify the H.265 video codec by setting the -c:v option to libx265. The -preset defines the speed of the encoding. The faster the encoding, the worse the compression, and vice-versa. The default preset is medium, but we set it to fast, which is just one level above medium in terms of speed. The CRF is set to 28 for the default quality maintained by the codec. The -tag:v option is set to hvc1 to allow QuickTime to play this video. The output file is less than 500 KBs, and it still has the same aspect ratio and dimensions as the original video while also maintaining an acceptable quality!

Unfortunately, because browser support for H.265 is sparse, videos compressed with this standard cannot be viewed within most major browsers (e.g., Chrome and Firefox). Instead, use the H.264 video codec, an older standard that offers worse compression ratios (larger compressed files, etc.) compared to H.265, to compress videos. Videos compressed with this standard can be played in all major browsers.

Note: We don't need to provide the additional -tag:v option since QuickTime automatically knows how to play videos compressed with H.264.

Note: 23 is the default CRF value for H.264 (visually corresponds to 28 for H.265, but the size of an H.264 compressed file will be twice that of an H.265 compressed file).
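A sketch of the two compression commands described above (output names follow the comparison in the next paragraph; the -preset choice for the H.264 command mirrors the H.265 example and is an assumption):

```bash
# H.265/HEVC: CRF 28, "fast" preset, hvc1 tag so QuickTime can play it
ffmpeg -i Big_Buck_Bunny_360_10s_30MB.mp4 -c:v libx265 -preset fast -crf 28 \
  -tag:v hvc1 Big_Buck_Bunny_360_10s_30MB_codec.mp4

# H.264: CRF 23 (the default), playable in all major browsers
ffmpeg -i Big_Buck_Bunny_360_10s_30MB.mp4 -c:v libx264 -preset fast -crf 23 \
  Big_Buck_Bunny_360_10s_30MB_codec_2.mp4
```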
Notice that the resulting video ( Big_Buck_Bunny_360_10s_30MB_codec_2.mp4 ) is now twice the size of the previous one ( Big_Buck_Bunny_360_10s_30MB_codec.mp4 ), but now you have a video that can be played within all major browsers. Simply drag and drop these videos into separate tabs of Chrome or Firefox to see this.

Big_Buck_Bunny_360_10s_30MB_codec_2.mp4 in Firefox: Big_Buck_Bunny_360_10s_30MB_codec.mp4 in Firefox:

Check out this codec compatibility table to ensure you choose the appropriate codec based on your videos and the browsers you need to support. Much like formats, to find out all the codecs supported by ffmpeg, run the following command:

First, let's download another video, the ten-second Jellyfish MP4 video ( 30 MB, 640 x 360 ), from Test Videos. To concatenate this video to the Big Buck Bunny video, run the following command:

Since both video files are MP4s and encoded with the same codec and parameters (e.g., dimensions and time base), they can be concatenated by passing them through a demuxer, which extracts a list of video files from an input text file and demultiplexes the individual streams (e.g., audio, video and subtitles) of each video file, and then multiplexes the constituent streams into a coherent stream. Essentially, this command concatenates audio to audio, video to video, subtitles to subtitles, etc., and then combines these concatenations together into a single video file. By omitting the decoding and encoding steps for the streams (via -c copy ), the command quickly concatenates the files with no loss in quality.

Note: Setting the -safe option to 0 allows the demuxer to accept any file, regardless of protocol specification. If you are just concatenating files referenced via relative paths, then you can omit this option.

When you play the concatenated.mp4 video file, you will notice that this video's duration is 20 seconds. It starts with the Big Buck Bunny video, and then immediately jumps to the Jellyfish video at the 10-second mark.

Note: If the input video files are encoded differently or are not of the same format, then you must re-encode all of the video files with the same codec before concatenating them.

Suppose you wanted to merge the audio of a video with stock background music to fill the silence. To do this, you must provide the video file and the stock background music file as input files to ffmpeg. Then, we specify the video codec ( -c:v ) to be copy to tell FFmpeg to copy the video's bitstream directly to the output with zero quality changes, and we specify the audio codec ( -c:a ) to be aac (for Advanced Audio Coding ) to tell FFmpeg to encode the audio to an MP4-friendly format. Since our audio file will be an MP3, which can be handled by an MP4 container, you can omit the -c:a option. To prevent the output from lasting as long as the two-and-a-half-minute audio file, rather than only as long as the original video, add the -shortest option to tell FFmpeg to stop encoding once the shortest input file (in this case, the ten-second Big Buck Bunny video) is finished. Additionally, download the audio file Ukulele from Bensound.

If your audio file happens to have a shorter duration than your video file, and you want to continuously loop the audio file until the end of the video, then pass the -stream_loop option to FFmpeg. Set its value to -1 to infinitely loop over the input stream.

Note: The -stream_loop option is applied to the input file that comes directly after it in the command, which happens to be the short.mp3 file.
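Hedged sketches of the commands described above (the Jellyfish, Ukulele and merged output filenames are assumptions; concatenated.mp4 and short.mp3 follow the walkthrough):

```bash
# videos.txt lists the clips to concatenate, one per line:
#   file 'Big_Buck_Bunny_360_10s_30MB.mp4'
#   file 'Jellyfish_360_10s_30MB.mp4'
ffmpeg -f concat -safe 0 -i videos.txt -c copy concatenated.mp4

# Merge stock background music into the video; -shortest stops encoding
# when the ten-second video (the shortest input) ends
ffmpeg -i Big_Buck_Bunny_360_10s_30MB.mp4 -i bensound-ukulele.mp3 \
  -c:v copy -c:a aac -shortest merged.mp4

# Loop a short audio file under the video until the video ends;
# -stream_loop applies to the input that directly follows it
ffmpeg -i Big_Buck_Bunny_360_10s_30MB.mp4 -stream_loop -1 -i short.mp3 \
  -c:v copy -c:a aac -shortest merged-looped.mp4
```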
This audio file has a shorter duration than the video file. Consult the FFmpeg Documentation to learn more about all of the different video processing techniques it provides.
