
Source code: https://git.io/fj2UZ
In our last blog post, we saw how the Serverless Framework (SLS) is the best way to create and deploy serverless projects. In this one, we will see how we can easily provision multiple environments using the Serverless Framework, without paying any money for it.
1. Stages and Environments
Any decent-sized IT project developed using an SDLC methodology consists of the following stages:

| Sr. No. | Stage | Purpose |
| --- | --- | --- |
| 1 | Development | To develop the application (write code) |
| 2 | Integration | To perform integration testing by aggregating code and running it as one |
| 3 | Quality | To perform comprehensive testing |
| 4 | UAT/Staging | To perform smoke and user acceptance tests |
| 5 | Production | To release the application to its consumers |
A stage serves a specific function in the lifecycle of the project. When the project starts, a team of developers writes code on their laptop or desktop computers. Whenever they reach a logical milestone, their code needs to be packaged and deployed to a separate environment.
Hence, corresponding to the 5 stages above, there need to be 5 separate environments: physical servers containing all the hardware, software, and services required for the code to run.
Hence any IT project usually maintains these multiple environments, each serving a specific purpose. The costliest is obviously Production, as it must have enough capacity to handle peak loads. The second most costly is UAT/Staging, as it is maintained as a production replica. The remaining environments are kept at lower capacity, as per the project's requirements.
There is a significant cost incurred in provisioning and maintaining all these environments and they must be kept running as long as the application(s) running on them are being used.
Hence, for our Books catalog microservices project, we would also need to plan for multiple environments for promoting code from one stage to another.
2. Setup
Open your Cloud9 IDE, create the folder structure shown below, and copy all the files shown below from GitHub.

Now open your terminal in the "src" folder and run the following command:
npm install
This will install the uuid package, which we use in our JavaScript code.

Now go back one folder to the booksAPI path and run the following command to check whether you have the Serverless Framework installed and working properly:
serverless version

This should show the version number of the Serverless Framework that we installed last time.
If it does not show up, run the following command to install the Serverless Framework:
npm install -g serverless
Now we are ready to deploy our code to multiple environments. In our case, we will have the following 4 stages.
- Development
- QA
- UAT
- Production
So, we have to configure serverless framework to create 4 environments for us corresponding to the 4 stages above.
3. Deployment Configuration
As we have learned before, in order to deploy a project using the Serverless Framework, we have to write precise instructions in the serverless.yml file.
We have already created this file, so let’s open and review it section by section and understand what we are doing with it.

The first line always starts with the name of the project/microservice we want to create/deploy.
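Assuming the service name matches the stack names we will see later (e.g. booksAPI-dev-stack), that first line would look like:

```yaml
# The service name; all stage-specific resource names are derived from it
service: booksAPI
```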

In the custom section, we are defining our custom variables, that will be used in other places in this file.
First, we want to define which stage this deployment is for. So the stage variable will be set to one of the following values: dev, qa, uat, or prod.
This value will be passed by us when we execute deployment command
sls deploy --stage uat
In this case, the stage variable will then be set to value “uat”
If we don’t supply any value, it will take default value of “dev”
sls deploy
So the ${opt:stage, 'dev'} placeholder is responsible for reading the value passed at the command line (falling back to "dev" when none is supplied) and setting it as the value of the stage variable.
Next, we come to stage_settings variable.
Now, we know that the IT infrastructure capacity that is required for Production is not necessary for other environments, especially Dev and QA.
So, we need a mechanism to specify capacity values for each environment. This is what the 4 files in the stages folder correspond to. If we open dev.yml and prod.yml, we can see the difference in values, as shown below.

Here, we can see that values set are different for dev and prod.
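As a rough sketch of those two files (the exact numbers in the repo may differ, and only LambdaMemorySize is named in this post; the capacity key names below are our assumption):

```yaml
# stages/dev.yml -- small, cheap settings for development (illustrative)
LambdaMemorySize: 128
DynamoDBReadCapacityUnits: 1
DynamoDBWriteCapacityUnits: 1
```

```yaml
# stages/prod.yml -- larger capacity for production traffic (illustrative)
LambdaMemorySize: 1024
DynamoDBReadCapacityUnits: 5
DynamoDBWriteCapacityUnits: 5
```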
We will later see what these values represent, but for now we need a mechanism to pick the right file during deployment of a particular stage.
Hence, if we are deploying the "dev" stage, it should read values from dev.yml, and likewise for the others.
The stage_settings variable does exactly that: it picks the correct file and reads values from it, depending on which stage we are deploying.
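Putting the two variables together, the custom section would be along these lines (a sketch using the Serverless Framework's ${opt:…} and ${file(…)} variable syntax):

```yaml
custom:
  # Stage passed on the command line (--stage uat); defaults to "dev"
  stage: ${opt:stage, 'dev'}
  # Load the capacity settings file that matches the stage,
  # e.g. stages/dev.yml or stages/prod.yml
  stage_settings: ${file(./stages/${self:custom.stage}.yml)}
```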

Now we proceed to set values for Provider section
Since we are deploying to the AWS cloud, name has the value aws. Our JavaScript code runs on Node.js 8.10, hence runtime is set to nodejs8.10. We are doing all this work in the us-east-1 region. We want the CloudFormation stack name to be like booksAPI-dev-stack, so it is built from variables that produce this value. Similarly, we want the dev-stage API Gateway deployment to be named booksAPI-dev, so it has been set with the appropriate variables.
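One plausible shape for this section (stackName and apiName are standard provider options in recent Serverless Framework versions; the repo's file may build the names slightly differently):

```yaml
provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1
  stage: ${self:custom.stage}
  # CloudFormation stack name, e.g. booksAPI-dev-stack
  stackName: ${self:service}-${self:custom.stage}-stack
  # API Gateway deployment name, e.g. booksAPI-dev
  apiName: ${self:service}-${self:custom.stage}
```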

We would like to pass the values of the stage and the DynamoDB table name to our Lambda functions using Lambda's environment-variable option. Hence we create these global variables and set their values here, so that they can also be used later in this file.
So, their values for dev deployment will be
STAGE = dev
DYNAMODB_TABLE=booksAPI-dev
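In serverless.yml, this maps to an environment block under provider, roughly:

```yaml
provider:
  environment:
    STAGE: ${self:custom.stage}
    # Table name derived from service + stage, e.g. booksAPI-dev
    DYNAMODB_TABLE: ${self:service}-${self:custom.stage}
```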

Also, we want one IAM role created for each deployment with the above permissions. This role will be used by the Lambda functions to connect to the DynamoDB table. For a "dev"-stage deployment, the role will be named as follows:
booksAPI-dev-us-east-1-lambdaRole
The list of actions is representative of all the operations we will conduct using our CRUD APIs. These operations need to be conducted on the DynamoDB table created for this specific stage (environment). So, we will first create a separate DynamoDB table for "prod" called "booksAPI-prod", and then in this resource section we construct the ARN of that specific table, so that Lambda functions of one environment cannot connect to DynamoDB tables of another environment. Hence the Resource script above will construct the following ARN value:
arn:aws:dynamodb:us-east-1:123456789012:table/booksAPI-prod
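A sketch of these role statements (the exact action list in the repo may differ; these are the typical CRUD operations, and the wildcard account ID is our assumption):

```yaml
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      # Scope the role to this stage's table only
      Resource: arn:aws:dynamodb:${self:provider.region}:*:table/${self:provider.environment.DYNAMODB_TABLE}
```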

Now we come to writing deployment instructions for our Lambda function
We call our first function booksCreate inside this YAML file. It has the appropriate handler value and description.
We intend to keep different memorySize values for each stage/environment, hence it reads this value from the LambdaMemorySize variable in the correct stage file.

So, if this deployment was for “qa” stage, the memorySize value will be set to 256
The events section contains the details required to create API.
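As an illustrative sketch of one function entry (the handler path and description are our assumptions; only the memorySize lookup is described in this post):

```yaml
functions:
  booksCreate:
    handler: src/create.create   # assumed path to the create handler
    description: Creates a new book entry
    # Per-stage memory, read from the matching stages/<stage>.yml file
    memorySize: ${self:custom.stage_settings.LambdaMemorySize}
    events:
      - http:
          path: books
          method: post
```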
The same pattern is repeated for the other 4 functions in this file.
Lastly, we come to resources section. This is where we will put details to create DynamoDB table.

Here, we name this table BooksTable in this file. This name has been referenced while creating the ARN for the role, as shown below.

The Properties section contains the script to create a table with "id" as the primary key. The value of ProvisionedThroughput is again read dynamically from the file matching the stage being deployed. So, for the uat stage, its values will be as shown below.

Finally, the TableName is set by reading its value from the variable DYNAMODB_TABLE set in environment section, as shown below

So, if this deployment is for “uat” stage, the table name will be booksAPI-uat
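Pulling the pieces together, the resources section might look like this (the capacity key names are our assumption, and the id attribute is assumed to be a string):

```yaml
resources:
  Resources:
    BooksTable:
      Type: AWS::DynamoDB::Table
      Properties:
        # Resolves to booksAPI-<stage>, e.g. booksAPI-uat
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        # Per-stage capacity, read from stages/<stage>.yml
        ProvisionedThroughput:
          ReadCapacityUnits: ${self:custom.stage_settings.DynamoDBReadCapacityUnits}
          WriteCapacityUnits: ${self:custom.stage_settings.DynamoDBWriteCapacityUnits}
```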
This completes the creation of the configuration file. Now let's proceed to deploy it and create 4 separate environments.
4. Deployment
Development Environment:
First, we will start with the deployment of the dev stage/environment. Execute the following command from a terminal opened in the booksAPI folder:
sls deploy --stage dev
When it is finished, we will see information like one shown below

Quality Environment:
sls deploy --stage qa

Staging Environment:
sls deploy --stage uat

Production Environment:
sls deploy --stage prod

Deployment is done. We have successfully provisioned 4 separate environments for hosting our code, one for each of the 4 stages of the project lifecycle.
5. Verification
We can now check whether all the resources for each environment are created properly as required.






6. Variables and Dynamic values
Now let’s review how we are using variables to pass different values for different stages.

You will recollect that the above lines in the script were used to set environment variables in the Lambda functions.
So, let's verify what values have been set.


So, as you can see, it sets the correct values per stage. But you may be wondering where this is used and how it is accessed. If we open the file get.js, we will see the following line of code where we are using it.
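As a hypothetical sketch of what that line does (the function name and shape here are ours, not necessarily the repo's): Lambda exposes the variables defined in serverless.yml through process.env, so get.js can build its DynamoDB request parameters from the injected table name:

```javascript
// Hypothetical helper mirroring how get.js can use the injected variable.
// Lambda places the serverless.yml environment entries into process.env.
function buildGetParams(bookId) {
  return {
    TableName: process.env.DYNAMODB_TABLE, // e.g. "booksAPI-dev"
    Key: { id: bookId },
  };
}
```

Because the table name comes from the environment, the same handler code works unchanged in every stage.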

Similarly, we used different capacity values for different stages. So let's check whether those have been set correctly.




This is how the Serverless Framework makes it easy to write dynamic scripts that can be reused across different environments.
7. Environment Cost
Now comes the best part, which is the biggest advantage of a serverless project: each of these environments costs zero money to deploy and keep. Well, almost zero.
So, unless you start using these environments, they can be kept around as long as you want, and there is no cost of a VM running and waiting for you to utilize it.
Isn’t this Super Cool !!!
Do you remember any time in your IT career where cost of provisioning an environment was negligible?
Now let's clarify "almost zero". There might be some charges on the DynamoDB side for keeping the tables active and for storage of data. This can be managed in two ways:
- Export the data from your DynamoDB tables and delete them; they can be provisioned again and the data re-imported when required.
- Lower the currently provisioned capacity to minimum levels. This way, your utilization can fall under AWS's generous free tier and you won't incur any charges.
So, this is where serverless projects shine. You only pay for the execution of resources, and there is almost zero charge for provisioning resources and keeping them available. You could create 10 separate environments for conducting various kinds of tests, and they would be essentially free.
8. Testing
You can use the Postman API testing tool to test any environment's APIs via the URLs shown above. Please refer to any of our previous blog posts to learn more about testing individual API endpoints.
9. Clean up
Run the sls remove command 4 times, giving a different stage value each time, and all the resources created in this exercise will be deleted:
- sls remove --stage dev
- sls remove --stage qa
- sls remove --stage uat
- sls remove --stage prod
Conclusion:
This concludes our blog post. We hope you have seen how the power of variables in the Serverless Framework eases our deployment burden and allows a single deployment file to provision many environments with different capacities.
Happy Clouding!