Schema

Setting up the database schema and migrations

Project Source Code

Get the project source code below, and follow along with the lesson material.

Download Project Source Code

To set up the project on your local machine, please follow the directions provided in the README.md file. If you run into any issues with running the project source code, then feel free to reach out to the author in the course's Discord channel.

Lesson Transcript

  • [00:00 - 00:06] Let's get started with the database and our schema. As you know, we're going to be using Postgres for our database.

  • [00:07 - 00:26] There are a number of ways you can work with databases and different companies do different things, but I think it's nice to have one production database running, and then every developer has their own database running locally. That makes it very easy to try out migrations and test different things with a database without messing things up for other developers.

  • [00:27 - 00:33] So we're going to want to install Postgres on our local machine if you don't have it already. It's trivial to do this using Docker.

  • [00:34 - 00:44] It's basically a one-liner. Let's try and run that. I will go into my shell and I can do that from this folder.

  • [00:45 - 00:59] It doesn't really matter. I will just run this Docker one-liner: docker run, specifying a name. We'll call this easy-booking_db, and we'll give it two environment variables, POSTGRES_PASSWORD and POSTGRES_DB.

  • [01:00 - 01:07] Those are required, and then we will initialize the Postgres image. This is the official Postgres image from Docker Hub.
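
For reference, the one-liner looks something like this. The container name and the two environment variables are from the lesson; the password value, database name, and port mapping are my assumptions:

```sh
# Start a local Postgres container (password, database name, and port
# mapping are placeholders; adjust to taste).
docker run --name easy-booking_db \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=easy_booking \
  -p 5432:5432 \
  -d postgres
```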

  • [01:08 - 01:19] We're not going to add a tag to specify a version or anything. If you run into more advanced features and you need a specific version of Postgres, that is something I would recommend you start specifying.

  • [01:20 - 01:33] Also, you should try and keep the developer version in sync with whatever your production version is, but for now, we'll just take whatever is on there and we'll start it like so. Since I already have the image on my machine, it's very quick to start.

  • [01:34 - 01:45] You can see that it should be running. If I list it here, I now have a Postgres server running on port 5432, which is the official and standard port for Postgres.

  • [01:46 - 01:53] It will take a little bit to download on your machine. With that in place, let's try and do something with our database.

  • [01:54 - 01:57] We're going to be using a library called Knex. At least that's what I think it's called.

  • [01:58 - 02:07] I'm not actually sure how to pronounce it. Knex is a query builder, which you can say is really just a glorified string concatenation machine.

  • [02:08 - 02:20] So it's different from an ORM, or object-relational mapper, in that it's working with SQL, not with objects. And if you are mostly used to working with an ORM, you might consider this a limitation or a weak abstraction.

  • [02:21 - 02:35] And I guess you're right, but in some sense, at least, I find that it's very nice to work with query builders rather than ORMs, because it really gives you access to the full power of Postgres. And I should state that I'm by no means a SQL expert, far from it.

  • [02:36 - 02:56] I'm much more comfortable writing in my GPL, general-purpose programming language, but I still think that it's very nice to be able to access Postgres and all the features that it offers. Postgres is a rock-solid piece of software, and it offers many things that you wouldn't want to limit yourself from, which an ORM will often do, at least the ones that I have experience with.

  • [02:57 - 03:08] We'll start by installing it in our schema package. And the reason for this is not that we're going to be writing actual queries in the schema package, but we will be using it for migrations.

  • [03:09 - 03:18] So Knex has support for both migrations and database seeding, and we're going to be using both of those. And so we need the package installed in our schema as well.

  • [03:19 - 03:35] So let's go to the schema folder, and we'll add Knex and the Postgres driver, which it also needs. And then we will create a configuration file for it.
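
Assuming npm as the package manager, the install step would be something like:

```sh
npm install knex pg
```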

  • [03:36 - 03:46] And just similar to what we've done with other tools, Knex also has an init script, which can create a configuration file for you. But we're not going to be using that right now.

  • [03:47 - 04:00] It can actually create TypeScript configurations for you, but I've just found that there are a number of limitations to that that I feel are not worth it. So we'll create this configuration file as a JS file in here.
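
A sketch of what that knexfile might look like; the connection values are assumptions matching the Docker container we started earlier:

```js
// knexfile.js -- development-only configuration for migrating and seeding
// the local database (connection values are placeholder assumptions).
const path = require("path");

module.exports = {
  development: {
    client: "pg",
    connection: {
      host: "localhost",
      port: 5432,
      user: "postgres",
      password: "postgres",
      database: "easy_booking",
    },
    migrations: {
      // Migration files live in the schema package, under ./migrations.
      directory: path.join(__dirname, "migrations"),
    },
  },
};
```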

  • [04:01 - 04:11] And maybe at some point in the future, I will switch to TypeScript. So you can see here that it exports just a development configuration.

  • [04:12 - 04:22] And the reason for this is that we will just be using this for migrating our own developer machine. This is not the configuration that we use to connect to production database.

  • [04:23 - 04:35] We'll be specifying that in the backend package. You can see that we have a Linter error here complaining that we're using the old-fashioned require syntax.

  • [04:36 - 04:51] Obviously, it would be nicer to switch to ESM, but I've had some issues getting Knex to support that. I'm sure it's possible, but I just didn't feel that it was worth it for this, because it's just a configuration file. So let's just disable this for now, like so.

  • [04:52 - 05:02] So you can see here that I'm specifying that there's a migration folder. And it's in the schema package folder and then in migrations.

  • [05:03 - 05:05] And that doesn't actually exist yet. So let's just go and create that.

  • [05:06 - 05:15] There we go. We now have a migrations folder and that is what the configuration is set up to use.

  • [05:16 - 05:27] So let's try and create our first migration. We'll use npx for this; we could create a script for it in our package.json if we wanted to, but for now let's just use npx.

  • [05:28 - 05:33] So we can say migrate create. Sorry, migrate:make.

  • [05:34 - 05:41] And then we can specify the name of our migration. And that's just called initial setup.
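
Assuming the standard Knex CLI, the command would be along these lines:

```sh
npx knex migrate:make initial-setup
```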

  • [05:42 - 05:50] So you can see that Knex has now created a file for us here. It's prefixed with a timestamp.

  • [05:51 - 06:03] And the reason for this is that it's very important that migrations are run in order. And so with the timestamp here, we are specifying where this migration belongs in that order.

  • [06:04 - 06:11] And it has created a little scaffolding for us. At times I have changed this to actually import SQL files.

  • [06:12 - 06:29] But for now, let's just keep it like it is and use knex.raw to specify SQL in here, so that we can migrate our database with some simple SQL statements. So I'll just copy something from my article here.

  • [06:30 - 06:39] We'll use this to create a table called users. And then for the down migration, we will drop that table.
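
A minimal sketch of such a migration; the exact column definitions are assumptions, but the lesson's table has an id, an email, and a password:

```js
// Migration scaffolding as generated by Knex, with raw SQL filled in.
exports.up = (knex) =>
  knex.raw(`
    CREATE TABLE users (
      id SERIAL PRIMARY KEY,
      email TEXT NOT NULL,
      password TEXT NOT NULL
    );
  `);

// Down migration: drop the table again (mostly useful during development).
exports.down = (knex) => knex.raw("DROP TABLE users;");
```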

  • [06:40 - 06:47] Oops. So a quick note on down migrations.

  • [06:48 - 06:56] I don't think that in my entire career, I've ever used a down migration in production. So I don't consider these to be very important.

  • [06:57 - 07:11] What I use them for is basically when I'm developing my migration. If I'm not happy with it, I run the down migration in the command line to down migrate my development database. I don't think I would ever do it in production.

  • [07:12 - 07:16] And the reason is that database schemas don't live on their own. They're not an island.

  • [07:17 - 07:30] They are connected to external services, and other things may have happened in the meantime, like emails having been sent. So just down-migrating a database doesn't mean that you roll everything back. Sadly, things aren't that clear cut.

  • [07:31 - 07:54] So what I have almost always done when I need to undo a change to the database is that I will write a new migration that reverts those things. So it's sort of similar to doing a revert commit in Git instead of undoing a commit, which I might do on my local branch, but which I very rarely do on branches and things that have been pushed to GitHub or wherever I have my repository.

  • [07:55 - 08:05] With this in place, let's try and run it. This is the command line.

  • [08:06 - 08:09] Oh, I'm missing the knex part. There we go.

  • [08:10 - 08:12] And see if it can connect to our database. It looks like it can.
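
For reference, the command being run here is presumably the standard Knex migration command:

```sh
npx knex migrate:latest
```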

  • [08:13 - 08:25] And so now our database should have this users table and it should have an ID and email and a password. This is just a silly little table that we're not going to be using. I just wanted to try and get something up and running.

  • [08:26 - 08:40] Now that we have a table in place, let's try and use Kanel to generate a type, or types, from it. We'll install it as a developer dependency in our schema package, together with the Knex plugin for it.

  • [08:41 - 08:52] So I'll add kanel and kanel-knex, like so. And then we need to create a configuration file.
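
Assuming npm, the install step would look something like:

```sh
npm install --save-dev kanel kanel-knex
```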

  • [08:53 - 08:58] So, Kanel doesn't have an init script or anything. And so we'll need to create this manually.

  • [08:59 - 09:03] And we are going to create it as a JS file again. So the same applies here.

  • [09:04 - 09:12] I should probably change it so that you could configure it using TypeScript, but I haven't done so just yet. So we'll need to use JavaScript for this as well.

  • [09:13 - 09:25] So I've just copied a tiny configuration file here that I'll just quickly go through. First of all, we need to specify a connection.
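
Here is a sketch of roughly what that configuration file contains, based on the options discussed below. The structure follows Kanel's configuration format as I understand it; in particular, the name of the filter exported by kanel-knex is an assumption, so check the plugin's documentation:

```js
// .kanelrc.js -- a sketch; the kanel-knex import name is assumed.
const { generateIndexFile } = require("kanel");
const { knexTypeFilter } = require("kanel-knex"); // export name assumed
const knexfile = require("./knexfile");

module.exports = {
  // Reuse the development connection from the knexfile, so both tools
  // always point at the same local database.
  connection: knexfile.development.connection,

  // Generated model files go here; the folder is wiped on every run.
  outputPath: "./src/models",
  preDeleteOutputFolder: true,

  // Exclude Knex's two migration bookkeeping tables from type generation.
  typeFilter: knexTypeFilter,

  // Emit a single index.ts that re-exports all of the generated models.
  preRenderHooks: [generateIndexFile],
};
```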

  • [09:26 - 09:37] And we're doing this by importing the knexfile that we already created, so that we make sure we're using the same developer database as the knexfile specifies. So we can basically access that through this.

  • [09:38 - 09:47] Then we need to specify where it should create the model files. And we're going to use an output path here because I often find that I need this for other things in the configuration.

  • [09:48 - 09:53] So this is just how I normally do it. I'm basically specifying that it will go into src/models.

  • [09:54 - 10:00] We will tell it to pre-delete the output folder every time we run Kanel. I like to do this.

  • [10:01 - 10:17] It means that you can't put any other files in your models folder, because they will be deleted as well when you run Kanel. On the other hand, if you were to, say, delete a table or rename something, you make sure that you don't have any cruft sitting there that isn't in your actual database anymore.

  • [10:18 - 10:22] This is a way to guarantee that. Then we're specifying this type filter here.

  • [10:23 - 10:35] And what we're doing with that is basically making sure that the two Knex migration tables aren't included. When we ran the migration, it created two tables for us.

  • [10:36 - 10:47] And we don't want to create types out of those because those are basically just utility tables that we're not going to be using in our code. This filter is predefined, so that's from this plugin package.

  • [10:48 - 10:55] It will just filter those two out, and we enable it simply by specifying it here. And then finally, we're adding a generate-index-file hook.

  • [10:56 - 11:07] What this does is take all of the model files and create one index.ts file that exports all of them in one place. It's just a little helper.

  • [11:08 - 11:09] I just find it's nice to have. It's not necessary.

  • [11:10 - 11:21] You might want to import them by specifying the paths instead, but this makes it a little bit easier for you. So with this in place, let's just save it and try and run Kanel.
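
Assuming Kanel's CLI, which picks up .kanelrc.js by default, the run is just:

```sh
npx kanel
```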

  • [11:22 - 11:39] So I do that like so. And it says that it starts by clearing all files in src/models, which obviously didn't exist, but you can see it has now actually created a src folder for us, which we didn't even have.

  • [11:40 - 11:50] So that exists now, and in there, there's just a models folder. The models folder contains one folder for each schema, and we just have the public schema in our Postgres database.

  • [11:51 - 12:00] The reason why it's called public is, I think, historical: Postgres initially had just one schema, and that was the public one.

  • [12:01 - 12:08] And when they then added multiple schemas or support for that, they had to name the first one public. And so that's why it's called that.

  • [12:09 - 12:15] You can change it to a better name if you want, but we're not going to do that right here. You can see that we have a users.ts file here.

  • [12:16 - 12:28] So this is what Kanel has created for us. There are three interfaces: the table itself, or the selector, if you will, the initializer, and the mutator.
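
A sketch of roughly what the generated users.ts contains; the exact names and the shape of the brand depend on the Kanel version:

```ts
// Branded ID type: structurally a number, nominally unique to this table.
export type UsersId = number & { __brand: "UsersId" };

// The selector: the shape of a row as read from the table.
export interface Users {
  id: UsersId;
  email: string;
  password: string;
}

// The initializer: id is optional because the column has a database default.
export interface UsersInitializer {
  id?: UsersId;
  email: string;
  password: string;
}

// The mutator: every field is optional, for partial updates.
export interface UsersMutator {
  id?: UsersId;
  email?: string;
  password?: string;
}
```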

  • [12:29 - 12:45] So this one is used when you want to insert something into the users table, and this one when you want to update something. And these two resemble the first one very much, except that the fields are only required if they are actually required for initialization or updating.

  • [12:46 - 12:51] So for updating, nothing is required. So all of the fields here are optional.

  • [12:52 - 13:07] When initializing, you can specify an ID, but since the ID has a default value, you don't need to specify it, and probably you never would. So when you create a new thing that you want to initialize, you would probably only specify the email and the password.

  • [13:08 - 13:18] And pass that to the insert script, and then the database would create the ID for you. One final thing to note here is that we have a UsersId type.

  • [13:19 - 13:35] It is basically just a number type, because the ID was just a number, but we want to create this as a branded type. What this does, if you're not familiar with branded types in TypeScript, is basically let us pretend that we're doing nominal typing.

  • [13:36 - 13:48] So TypeScript, by default, uses structural typing, or duck typing, which means that any two objects that have the same fields in them will be considered to be equal types. And that is a very nice feature.

  • [13:49 - 14:00] It's one of my favorite features about TypeScript, actually. It's a very nice way to work, but there are a few situations where you might prefer the other kind of type system, which is nominal typing.

  • [14:01 - 14:17] And that basically says, well, if you have a duck and a goose, even if they both have the same methods and properties, they're not the same thing, because they have two different names, basically. And IDs are one such thing where I personally find that this is very important.

  • [14:18 - 14:47] An ID is typically the same underlying type for all of your tables, either a number or a string for a UUID or whatever. But if you accidentally call a function that takes an account ID and you give it a member ID instead, you are in a very bad situation, at least when IDs are sequential numbers, because the member ID you passed could well be a valid account ID too, just not the one you meant to pass.

  • [14:48 - 15:01] You can alleviate this a little bit by using UUIDs, which means that you're very, very unlikely to pass something in that is actually a valid ID for a different table. So that helps a bit, but it's still just runtime safety.

  • [15:02 - 15:14] With this, you will get compile-time safety. So a UsersId cannot be used for a function that takes a member ID or an account ID or something else, even if they are all still just numbers.

  • [15:15 - 15:31] So the way this type is constructed is that it adds this brand structure, which isn't really used anywhere and will be removed completely when you compile to JavaScript. This adds no runtime overhead whatsoever; it only provides some compile-time safety.
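
As a hypothetical illustration of that compile-time safety (none of these names come from the project):

```ts
type AccountId = number & { __brand: "AccountId" };
type MemberId = number & { __brand: "MemberId" };

declare function closeAccount(id: AccountId): void;

const memberId = 42 as MemberId;
// closeAccount(memberId); // compile error: MemberId is not assignable to AccountId
closeAccount(7 as AccountId); // fine: explicitly branded as an AccountId
```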

  • [15:32 - 15:38] Right. So one last thing I want to do is that I want to create an index file here in ...

  • [15:39 - 15:52] So you can see the index file that has been created by the plugin that we use, or the extension. It exports the initializer, mutator, and selector, as well as the ID, in the models folder.

  • [15:53 - 16:01] So we just want to create an index file out in the source folder that will export all of the models. Oops.

  • [16:02 - 16:17] So we'll just do export everything from models there.
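
Presumably this amounts to a one-liner like:

```ts
// src/index.ts -- re-export every generated model.
export * from "./models";
```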