The past year I've been exploring the Node.js world heavily, building several backends with different technologies. It was a bumpy road: I made plenty of 'mistakes' and some wrong technology choices along the way. In this blog post I'll share my experiences and the lessons I've learned over the past year.
I've been coding in Node.js for about 6 years now; 5 years of personal/college projects and 1 year professionally in the enterprise world. The past year I've learned more than I did in the 5 years before, because there's quite a difference between the requirements of personal/college projects and a project for a company:
Security is always important, but when it comes to the enterprise the consequences of weak security can be much more severe. Later in this blog post we will talk about keeping environment variables secret and about authentication.
When you're working for a company you're working in a team, perhaps the project even has multiple teams (frontend, backend, testing). Communication is key in teams, and making a contract between team members or across teams is vital for a good project flow. The API documentation is a contract that defines what the services are and how to access them.
When working with multiple teams, you will need multiple stages of the Node.js backend. Usually each part of a team will have its own stage(s), plus a separate stage for the production environment. This brings a lot of extra complexity to the backend: managing environment variables for each stage and deploying each stage to its own backend environment. Automating these tasks is key here.
- Automated Deployment
As we will be managing multiple environments, we need to be able to easily create or tear down environments.
'100%' uptime is often a requirement for customer projects. This means that when we deploy a new version of our Node.js backend we can't just kill the server and replace the code, as that would cause a short downtime. Further in this blog post we will talk about 'rolling updates'.
- Error Reporting
Knowledge is power. If something goes wrong with your application, it needs to be reported to the right place so it can be turned into a (bug) ticket if needed.
- Inversion Of Control
- Proxy Pattern
- Strategy Pattern
- Decorator Pattern
- Observer Pattern
Nest.js also has a testing package which makes unit and e2e testing easy. It's really an amazing framework and has been instrumental to the code quality of the projects I have worked on.
My go-to NoSQL database. It works amazingly well as a simple key-value store, but that's also the only thing it should be used for. If your data structure has a lot of relations or needs complex sorting queries, you shouldn't store it in DynamoDB. Where DynamoDB shines is in combination with another datastore technology like a SQL database and/or Elasticsearch. These are some of the use cases of DynamoDB in my past projects:
- Temporary session storage
DynamoDB has a built-in TTL feature, which is amazing for session storage.
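As a small sketch of how this looks in practice (table and attribute names here are my own, not from a real project), DynamoDB's TTL feature just needs a Number attribute containing an epoch timestamp in seconds; the actual PutItem call via the AWS SDK is shown as a comment so the snippet stays self-contained:

```typescript
// DynamoDB TTL expects a Number attribute holding an epoch timestamp in seconds.
function sessionTtl(hoursFromNow: number): number {
  return Math.floor(Date.now() / 1000) + hoursFromNow * 3600;
}

// A session item expiring in one hour; DynamoDB removes it after 'expiresAt'.
const item = {
  sessionId: { S: "abc-123" },
  data: { S: JSON.stringify({ userId: "42" }) },
  expiresAt: { N: String(sessionTtl(1)) },
};

// With @aws-sdk/client-dynamodb this item could then be written with:
// await client.send(new PutItemCommand({ TableName: "sessions", Item: item }));
```

Note that the TTL attribute has to be enabled on the table (pointing at `expiresAt` in this sketch) for the automatic cleanup to happen.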
- High throughput storage
The write speeds of DynamoDB are amazing. Combined with DynamoDB Streams you can process the data and save the result to another data store like RDS or Elasticsearch, or trigger another action.
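A rough sketch of the processing side (the type and function here are illustrative, not a full Streams consumer): records arrive as DynamoDB attribute-value maps, which you typically flatten into plain objects before forwarding them to the next store:

```typescript
// Minimal subset of DynamoDB's attribute-value shape, for illustration.
type AttributeValue = { S?: string; N?: string; BOOL?: boolean };

// Convert a stream record's NewImage into a plain object.
function fromDynamo(
  image: Record<string, AttributeValue>
): Record<string, string | number | boolean> {
  const out: Record<string, string | number | boolean> = {};
  for (const [key, value] of Object.entries(image)) {
    if (value.S !== undefined) out[key] = value.S;
    else if (value.N !== undefined) out[key] = Number(value.N); // numbers arrive as strings
    else if (value.BOOL !== undefined) out[key] = value.BOOL;
  }
  return out;
}

// A Streams-triggered Lambda handler would loop over event.Records and call
// fromDynamo(record.dynamodb.NewImage) for each INSERT/MODIFY record.
```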
- Simple data structures
Sometimes the data structure of your application is just not complicated and works fine with a simple key-value store. However, if one of the requirements of your application involves complex sorting/search, you will probably need something like Elasticsearch.
- SQL (RDS)
SQL datastores have been around for many years, and will be for many more. Any data structures that don't fit in DynamoDB go in a SQL datastore.
- Elasticsearch
When the application requires advanced search functionality, you will give yourself headaches trying to implement it on top of DynamoDB or even SQL. Elasticsearch, as the name implies, makes handling advanced search queries easy.
- Husky / TSLint / Prettier
TSLint and Prettier help define rules for code styling and check for functional errors. The code style and rules are easily shared across the team thanks to these technologies. Husky adds 'git hooks' support to the project, which you can use to hook TSLint and Prettier into the `git commit` and `git push` commands. This makes it easy to enforce the linter and formatter rules before a team member can make a commit, keeping the PR and commit history cleaner.
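For illustration, a husky (v4-style) setup in `package.json` might look roughly like this; the exact hooks and globs are just one possible configuration, not a prescription:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "pre-push": "tslint -p tsconfig.json"
    }
  },
  "lint-staged": {
    "*.ts": ["prettier --write", "tslint -p tsconfig.json"]
  }
}
```

With this in place, staged TypeScript files get formatted and linted on every commit, and the whole project is linted again before a push.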
- Serverless Framework
The Serverless framework is an amazing framework that helps with managing serverless applications. It lets you deploy serverless APIs to the cloud without the hassle that usually comes with it, for example with a greatly reduced syntax compared to raw AWS CloudFormation. There's also an amazing community behind this framework, creating plugins like 'serverless-offline', which emulates AWS API Gateway for offline use.
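To give an idea of that reduced syntax, a minimal `serverless.yml` could look something like this (the service name, handler paths, and region are made up for the example):

```yaml
service: my-backend

provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1

functions:
  getUsers:
    handler: src/handlers/users.list
    events:
      - http:
          path: users
          method: get

plugins:
  - serverless-offline
```

Those few lines expand into the API Gateway, Lambda, and IAM resources you would otherwise have to write out by hand in CloudFormation.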
Environment Secret Management
Every environment will need its own set of environment variables, and some of these variables should be kept secret like credentials. Storing these in the code repository for every developer to see is not a great idea. These should be kept in a secure spot with limited access.
AWS offers a solution for this problem: 'AWS Secrets Manager'. It keeps the variables encrypted in a secure spot, with fine-grained access control thanks to AWS IAM. There's one problem with this: you can only access the secrets through API calls. This isn't logic you want in your application core, and it would be horrible if the retrieval of environment variables failed during the runtime of the application.
To counter this problem, I created a small script that runs during the CI/CD build. It retrieves the values for that environment and stores them in a `.env` file. I use the node package dotenv, which parses the values from that file into the process environment. If the initial retrieval of the secrets fails, the build fails.
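The core of such a build-time script is small; here is a sketch (the function name, secret id, and file handling are my own, the Secrets Manager call is shown as a comment so the snippet stays self-contained):

```typescript
// Serialize a secrets object (as returned by Secrets Manager, JSON-parsed)
// into .env lines that dotenv can read at runtime.
function toDotEnv(secrets: Record<string, string>): string {
  return (
    Object.entries(secrets)
      .map(([key, value]) => `${key}=${value}`)
      .join("\n") + "\n"
  );
}

// During the CI/CD build, something along these lines would run:
// const res = await client.send(new GetSecretValueCommand({ SecretId: "backend/dev" }));
// fs.writeFileSync(".env", toDotEnv(JSON.parse(res.SecretString)));
// If GetSecretValue throws, the build fails, which is exactly what we want.
```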
The environment secrets are now securely stored and your code core isn't affected by it all.
Auth is vital to the security of the application; auth done wrong compromises the security of the whole application. That's why I prefer handing this responsibility partially to a managed service. The service I use is AWS Cognito, which is split into 2 big parts:
- User Pools
This is a user directory/store. It ships with features like login and user registration.
- Identity Pools
Cognito Identity Pools federate non-AWS identities to an AWS identity, so they are assigned an IAM role and can access AWS resources. Identities can be federated from a Cognito User Pool, but also from Facebook, Google, SAML providers, and so on.
The API documentation should be written before development of the backend starts. It is the contract between the frontend and the backend that defines how communication will be done, and it will save the frontend team some headaches.
The API documentation is written in the OpenAPI 3.0 specification and imported in Postman. Postman makes it easy to share the documentation across the team(s). Postman also ships with a mocking feature, which allows the frontend team to already start developing using the API endpoints before they are written.
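For reference, a minimal OpenAPI 3.0 document for a single endpoint looks roughly like this (the endpoint and schema are invented for the example):

```yaml
openapi: 3.0.0
info:
  title: Example API
  version: 1.0.0
paths:
  /users:
    get:
      summary: List users
      responses:
        '200':
          description: A list of users
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: string
                    name:
                      type: string
```

Importing a spec like this into Postman gives the team a shared, browsable contract, and the response schemas double as the basis for Postman's mock responses.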
Managing all the different environments for your Node.js application can be a pain, especially if it's not automated. However, once I started using the Serverless framework, working with multiple stages became a breeze thanks to its built-in support for stages. One important lesson here is to use separate resources for each environment (like databases and Cognito user pools). Having all stages fully isolated from each other makes testing and debugging easier.
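The per-stage isolation can be expressed directly in `serverless.yml` by embedding the stage in resource names; here is a sketch (the table name and key schema are hypothetical):

```yaml
provider:
  name: aws
  stage: ${opt:stage, 'dev'}

resources:
  Resources:
    SessionsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: sessions-${self:provider.stage}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: sessionId
            AttributeType: S
        KeySchema:
          - AttributeName: sessionId
            KeyType: HASH
```

Deploying with `serverless deploy --stage staging` then creates `sessions-staging`, fully separate from the `dev` and production tables.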
Automating the deployment and spinning up of infrastructure is key in an agile, multi-environment era of software development. The Serverless framework, combined with some CloudFormation, makes this easy. You define your AWS infrastructure in YAML format, so it can be re-used in the future for different environments. You can also tear down your infrastructure as fast as you created it.
The Serverless framework uses AWS Lambda functions to host the code. When a new version is deployed, the code within the Lambda function is hot swapped with the new code, which guarantees uptime while the update happens.
However, if there are instabilities in the new code all the traffic will be routed to the new unstable code. AWS Lambda has support for gradually shifting traffic to a new version of the Lambda function thanks to function aliases. This allows you to shift the traffic over a period of time to the new code while you analyze the behaviour of the code.
There are Serverless framework plugins that help you accomplish these 'canary deployments'.
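One such plugin is `serverless-plugin-canary-deployments`; a sketch of its configuration might look like this (function name and shifting schedule are just an example):

```yaml
plugins:
  - serverless-plugin-canary-deployments

functions:
  getUsers:
    handler: src/handlers/users.list
    deploymentSettings:
      type: Linear10PercentEvery1Minute
      alias: Live
```

With a setting like this, traffic moves to the new Lambda version in 10% increments every minute, and can be rolled back automatically if a CloudWatch alarm fires during the shift.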
Error reporting is vital so problems get translated to tickets. Error reporting on a serverless stack isn't always easy, as the majority of error reporting tools require you to make an HTTP connection, which adds latency to every request.
All requests to Lambda functions and their console output can be logged to CloudWatch without any extra latency. My solution is simply to log the error with `console.error` and create a CloudWatch event that listens for these errors. When the event is triggered, it sends an e-mail to the development team and posts a message in the Slack channel. This way the error reporting logic is separated from the core logic of the backend.
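On the code side, this can be as small as the following sketch (the wrapper name and log shape are my own; the idea is just a structured `console.error` line that a CloudWatch filter can match, followed by a rethrow so the invocation still counts as failed):

```typescript
// Emit one structured log line per error so CloudWatch can filter on it.
function reportError(scope: string, err: Error): string {
  const line = JSON.stringify({ level: "ERROR", scope, message: err.message });
  console.error(line);
  return line;
}

// Wrap a handler's body: log the error in a filterable shape, then rethrow
// so the Lambda invocation is still marked as failed.
async function withErrorReporting<T>(scope: string, fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    reportError(scope, err as Error);
    throw err;
  }
}
```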
I hope my experiences of the past years are useful to someone out there! Thanks for reading, and until next time.