Hi, everyone!
This article is the third in my series "Refactoring Gladys Developer Platform". Last time, I explained how I designed the PostgreSQL database of the new platform.
Today, I'm going to talk about Node.js development. For our new platform, we need a fast REST API, and we are going to use Node.js to build it.
On this back-end, we are managing:
That's all!
Let's cover the routes we need:
User:
Module:
Script:
Sentence:
Admin:
I'm going to implement an admin dashboard to accept or reject published modules. I don't currently have any dashboard, and it's really annoying to manually accept modules in the database.
To handle image uploads, there are several options:
The first option is clearly not the best for us. We want something scalable, easy to deploy, and easy to migrate to another server if we want to upgrade. The other problem is that storage is not unlimited!
The second option seems better, but still, if we have lots of users uploading at the same time, the back-end will be busy handling file transfers: that's clearly not its job.
I'm going to pick the third option. All the heavy work is going to be done by our cloud provider, not our back-end. In our case, it's going to be an Amazon S3 bucket.
But does that mean the client can upload whatever they want to our S3 bucket?
No, of course not. Before uploading, the client just needs to ask our back-end for a pre-signed URL. It's a URL that allows the client to upload only to a specific place in our S3 bucket, for a limited time.
That means the user cannot upload whatever they want, whenever they want, wherever they want. But the user still uploads directly to Amazon S3: no extra server load on our side :)
So, new route on our back-end:
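As an illustration, here is a minimal sketch of what such a route could look like with Express and the aws-sdk package. The route path, bucket name, key pattern and expiry below are assumptions, not the actual platform code:

const express = require('express');
const AWS = require('aws-sdk');

const app = express();
const s3 = new AWS.S3();

// Hypothetical route: the client asks for a pre-signed URL before uploading
app.post('/pictures/upload-url', (req, res) => {
  const params = {
    Bucket: 'my-bucket', // assumed bucket name
    Key: `pictures/${Date.now()}.png`, // the client can only upload to this exact key
    Expires: 300 // the pre-signed URL is only valid for 5 minutes
  };
  s3.getSignedUrl('putObject', params, (err, url) => {
    if (err) return res.status(500).json({ error: 'Could not sign URL' });
    res.json({ url });
  });
});

The client then sends a plain HTTP PUT of the file to the returned URL, and Amazon S3 checks the signature and the expiration itself.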
This is what our back-end file structure will look like:
-- core
---- api
------ user
-------- controller
---------- user.signup.js
-------- model
---------- user.create.js
---- service
-- index.js
-- package.json
I prefer organizing my back-end by entity (user, module), with both controllers and models inside, rather than the opposite (controllers and models, with "user" and "module" inside). It's much clearer and easier when you develop, because all the files you need are close together. And as your app grows, it's still easy to find a file.
To hash passwords, I'm going to use bcrypt.
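As a quick illustration (a minimal sketch, not the actual platform code), hashing at signup and checking at login looks like this with bcrypt:

const bcrypt = require('bcrypt');

const SALT_ROUNDS = 10; // assumed cost factor: higher is slower but harder to brute-force

// At signup: hash the password before storing it
bcrypt.hash('myPlainPassword', SALT_ROUNDS)
  .then((hash) => {
    // store `hash` in the database, never the plain password
    // At login: compare the candidate password with the stored hash
    return bcrypt.compare('myPlainPassword', hash);
  })
  .then((match) => {
    console.log(match); // true if the password is correct
  });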
For authentication, we want a stateless way of authenticating users. We are going to use jsonwebtoken.
What is a JSON Web Token?
It's an encoded token composed of three parts: a header, a payload, and a signature.

The header describes how the token is signed:

{
  "alg": "HS256",
  "typ": "JWT"
}

The payload contains the data, for example the user's identity:

{
  "sub": "ce95683c-e682-4bcd-a18d-2e3d250aad48",
  "name": "John Doe"
}

The signature is computed from the two other parts:

HMACSHA256(base64UrlEncode(header) + "." + base64UrlEncode(payload), secret)

Only the back-end is able to generate this signature, because the HMACSHA256 function takes a secret. And only the back-end knows the secret! The idea is simple:
Does that mean the user can log in as any user just by changing the data inside the payload?
No, because if the user changes the payload, the signature is not valid anymore, and the back-end will reject the JWT.
What about expiration? Does that mean the token is valid forever?
Good question, of course not! The JWT specification allows us to set an exp attribute inside the payload. For example, we can say that a token is valid for 2 days: we put in the exp attribute the timestamp of today + 2 days. When the user sends a request 2 days later, the back-end will open the JWT, see that it is no longer valid, and reject the user. The user will need to log in again.
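To make this concrete, here is a minimal sketch of signing and verifying a token with the jsonwebtoken package (the secret and payload values are just examples):

const jwt = require('jsonwebtoken');

const SECRET = 'keep-me-safe'; // example only: load it from the environment in real code

// Sign a token valid for 2 days: expiresIn sets the exp claim for us
const token = jwt.sign(
  { sub: 'ce95683c-e682-4bcd-a18d-2e3d250aad48', name: 'John Doe' },
  SECRET,
  { expiresIn: '2d' }
);

// On each request: verify checks both the signature and the expiration
try {
  const payload = jwt.verify(token, SECRET);
  // payload.sub identifies the user
} catch (err) {
  // invalid signature or expired token: reject the request
}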
To validate data, I'm going to use Joi, an awesome NPM package that lets us validate JSON against a defined schema. For example, for our user, I defined the schema like this:
const Joi = require('joi');

const schema = Joi.object().keys({
  email: Joi.string().email().required(),
  password: Joi.string().min(6).required(),
  name: Joi.string().token().min(2).required()
});
Then, in my model, when I'm creating a user, I just have to do this:
// params contains the body sent by the user
// schema is the Joi schema defined above
// stripUnknown: true means we remove fields that are not in the schema
// example: if the user sends a `lastname` attribute that is not in the schema, it is stripped
return validate(params, schema, { stripUnknown: true })
  .then((user) => {
    // create the user
  });
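The validate helper is not shown here; one possible implementation (an assumption on my side) is a small promise wrapper around Joi's classic Joi.validate:

const Joi = require('joi');

// Resolve with the cleaned value, reject with the Joi validation error
function validate(value, schema, options) {
  return new Promise((resolve, reject) => {
    Joi.validate(value, schema, options, (err, cleanedValue) => {
      if (err) return reject(err);
      resolve(cleanedValue);
    });
  });
}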
For database requests, we have two main options:
Having an ORM saves you time, but performance is not that great, and if you want to write a specific query, you won't be able to do it with the ORM: you will need to go back to SQL.
I'm not going to use an ORM for SELECT requests, mainly for performance reasons.
For insert/update requests, the problem is that not all attributes are required, and we don't want to hand-write every variant of the SQL request. For this, I will use squel, a SQL query builder that supports PostgreSQL.
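For example, here is a minimal sketch of how squel can build an INSERT from only the fields the user actually provided (the table name and field values are assumptions):

const squel = require('squel').useFlavour('postgres');

// Only the attributes actually sent by the user end up in the query
const fields = { email: 'john@example.com', name: 'John' };

const query = squel.insert()
  .into('users') // assumed table name
  .setFields(fields)
  .returning('*')
  .toParam();

console.log(query.text); // INSERT INTO users (email, name) VALUES ($1, $2) RETURNING *
console.log(query.values); // [ 'john@example.com', 'John' ]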
We have the best of both worlds:
Logging is really important. You can't know if your system is broken if you don't have any logs.
The thing is, you can't browse all the logs by hand: it takes too much time. And if something is broken, you need alerts!
Here, two options:
Here, I'm going to use the second option, as hosting my own logging platform would take time. I don't know yet which provider I'm going to use.
Sending transactional emails (confirmation emails, password resets) is a serious job if you don't want to land in your users' spam folders. I'm going to use Mailgun, which I was already using before. Emails are delivered correctly, and it's not that expensive: the first 10,000 per month are free, then it's $0.00050 per email, so 20,000 emails cost $5. Cheap!
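For reference, sending an email with the mailgun-js package looks roughly like this (the API key, domain and addresses are placeholders):

const mailgun = require('mailgun-js')({
  apiKey: process.env.MAILGUN_API_KEY,
  domain: 'mg.example.com' // placeholder domain
});

const data = {
  from: 'Gladys <noreply@mg.example.com>',
  to: 'user@example.com',
  subject: 'Confirm your email',
  text: 'Click the link below to confirm your email address.'
};

mailgun.messages().send(data, (err, body) => {
  if (err) console.error('Email not sent:', err);
});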
I'm working hard on this new platform, and the back-end is well on its way! Don't hesitate to take a look at the code on the GitHub repository. Yes, this platform is open-source!
I hope this article was clear. Don't hesitate to ask questions in the comments :)
Have a nice weekend!
Summary of this series:
If you loved this story, you can subscribe to my newsletter here.