Development & Deployment of AWS Resources

1. Development 🖥

1.1. Lambda Functions

  1. Write your Lambda function and store it in the code directory. Create a folder for every single function, with a dedicated build.sh file for installing dependencies. Because your function will later be uploaded to AWS as a zip file, install your dependencies under the same directory as your source code (or a sub-directory of it) and make sure your code can call these dependencies correctly.

    1. For example, for a Lambda function written in Python, you can put a requirements.txt file under the same folder, then call pip install -r requirements.txt -t package/ to install the dependencies. In your code, use sys.path.insert(0, 'package/') to make the installed packages importable by the handler.

    2. Note that it is not a good practice to upload your dependencies to the repository.

  2. In the deployment folder, make a copy of the lambda.hello_world.tf file. (Please do read the file, as you will use it as a template to create one such file for every one of your functions; a hypothetical sketch of what such a file might contain follows this list.)

  3. Make the necessary changes to the new file (especially the lines marked TODO).

    1. Also, make sure you are not using duplicate resource/data names under the same namespace.

    2. Please refer to the Terraform Registry for the complete documentation of all resources.

  4. Check your files carefully before you commit them.

  5. Push your code and the .tf file to the master branch.

  6. Deploy your service by manually triggering a GitHub Action (see the link in the “Deployment” section).
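
For orientation, here is a minimal sketch of what such a file might contain, assuming a Python function stored in code/my_function with its dependencies installed into package/. All names below (my_function, main.handler, the IAM role) are hypothetical placeholders, not the actual contents of the template - the real lambda.hello_world.tf and its TODO markers take precedence.

  # Zip the function folder (including the package/ sub-directory) for upload.
  data "archive_file" "my_function" {
    type        = "zip"
    source_dir  = "${path.module}/../code/my_function"
    output_path = "${path.module}/my_function.zip"
  }

  resource "aws_lambda_function" "my_function" {
    function_name    = "my_function"
    filename         = data.archive_file.my_function.output_path
    source_code_hash = data.archive_file.my_function.output_base64sha256
    handler          = "main.handler"  # file main.py, function handler
    runtime          = "python3.9"
    role             = aws_iam_role.my_function.arn  # role declared elsewhere in the file
  }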

Please make sure you read section 1.4 for important notes.

1.2. Containers

  1. Provide your code in the code directory, together with a Dockerfile to build your image.

  2. Build your image locally, or write a GitHub Action yourself to build it. Then push the image to Docker Hub.

  3. In the deployment folder, make a copy of the ecs.hello_world.tf file. (Please do read the file, as you will use it as a template to create one such file for every one of your services; a hypothetical sketch of what such a file might contain follows this list.)

  4. Make the necessary changes to the new file (especially the lines marked TODO).

    1. Also, make sure you are not using duplicate resource/data names under the same namespace.

    2. Please refer to the Terraform Registry for the complete documentation of all resources.

  5. Check your files carefully before you commit them.

  6. Push your code and the .tf file to the master branch.

  7. Deploy your service by manually triggering a GitHub Action (see the link in the “Deployment” section).
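
Again for orientation only, a hypothetical fragment of what such a file might contain - the image, ports, and the referenced cluster/network resources are placeholders, and the real ecs.hello_world.tf template takes precedence.

  resource "aws_ecs_task_definition" "my_service" {
    family                   = "my_service"
    requires_compatibilities = ["FARGATE"]
    network_mode             = "awsvpc"
    cpu                      = "256"
    memory                   = "512"
    container_definitions = jsonencode([{
      name         = "my_service"
      image        = "my_dockerhub_user/my_service:latest"  # the image pushed in step 2
      essential    = true
      portMappings = [{ containerPort = 8080, protocol = "tcp" }]
    }])
  }

  resource "aws_ecs_service" "my_service" {
    name            = "my_service"
    cluster         = aws_ecs_cluster.main.id  # cluster declared elsewhere
    task_definition = aws_ecs_task_definition.my_service.arn
    desired_count   = 1
    launch_type     = "FARGATE"

    network_configuration {
      subnets         = var.subnet_ids                      # hypothetical variable
      security_groups = [aws_security_group.my_service.id]  # declared elsewhere
    }
  }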

Please make sure you read section 1.4 for important notes.

1.3. Environment Variables

These variables are injected by the pipeline by default. A sketch of how they might be referenced in your Terraform configuration follows the list.

  • GATEWAY_API_ID: The ID of the API Gateway. It is needed in the Terraform configuration when you declare a route under the gateway.

  • GLOBAL_S3_NAME: The name of the shared S3 bucket being used as the data lake. It may be needed if your service interacts with the bucket.

  • GATEWAY_AUTH_ID: The ID of the Authorization method. It is needed in the Terraform configuration if you need to protect your resource behind the authenticator.

  • VPC_CONNECTION_ID: The ID of the VPC connection. It is needed if you are deploying ECS (container) in order to let the gateway connect to your container. It is not needed for Lambda functions.

  • VPC_ID: The ID of the VPC. It is needed in the same situations as VPC_CONNECTION_ID.

  • EXEC_REGION: The region where your resources will be deployed. It is used to initialise the pipeline/Terraform. Normally, you don’t need to use it explicitly.

  • EXEC_PROVIDER_S3_NAME: The name of the S3 bucket used to persist your Terraform state. It is used to initialise the pipeline/Terraform. Normally, you don’t need to use it explicitly.
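
As a minimal sketch, assuming the pipeline exposes these values as Terraform variables (for example via TF_VAR_* environment variables - check the template files for the exact mechanism used in your repository), a protected route under the shared gateway might reference them like this. The route key and integration are hypothetical.

  variable "GATEWAY_API_ID" {}
  variable "GATEWAY_AUTH_ID" {}

  resource "aws_apigatewayv2_route" "hello" {
    api_id             = var.GATEWAY_API_ID
    route_key          = "GET /hello"
    authorization_type = "JWT"  # assumes the shared authorizer is a JWT authorizer
    authorizer_id      = var.GATEWAY_AUTH_ID
    target             = "integrations/${aws_apigatewayv2_integration.hello.id}"  # integration declared elsewhere
  }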

1.4. Important Notes

  1. Once you have named a block and deployed it, changing its name may confuse Terraform. Make sure you use a moved block if you have to rename it (see the sketch after this list).

  2. Do not create multiple repositories with the same .GROUP_NAME. This will mess up Terraform states and cause resource conflicts. The same problem can occur when you have multiple branches, so only run the pipeline from one branch at a time.

  3. Do not abuse terraform destroy! If you need to make changes to a deployed resource, you can edit the Terraform file and run the deployment pipeline again - Terraform will find the difference and build on top of the current state.
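
Regarding note 1: a moved block (available since Terraform v1.1) tells Terraform that a resource was renamed rather than destroyed and recreated, so the state is updated in place. The names below are placeholders.

  moved {
    from = aws_lambda_function.old_name
    to   = aws_lambda_function.new_name
  }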

2. Deployment

The content below assumes your repository follows a directory structure similar to that of the template repository.

Two centrally maintained pipelines are available as GitHub Actions to all repositories under the same GitHub Organization. One deploys the stack in your repository and the other destroys it. The service template repository is already set up to call them. You should manually run the workflow whenever a new deployment is needed (see Manually running a workflow - GitHub Docs).

Deploying the Stack

  1. Preprocessing - the pipeline will read the .GROUP_NAME file in your repository and inject the necessary information into the configuration files you wrote in the deployment folder (including some environment variables, credentials, etc.). It is therefore essential that your .GROUP_NAME file is filled in correctly. If you think you need extra variables injected, feel free to post on the forum.

  2. Execution - the pipeline will switch to the environment you provided as input, then execute terraform apply on the deployment folder using the information preprocessed in the previous step.

  3. Output - after the execution, terraform output will be called; if you have defined outputs in your Terraform configuration, they will be printed in this step (see the example below). If the run succeeds, your service is deployed.
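
For example, a hypothetical output block like the following would have its value printed in this step:

  output "function_arn" {
    value = aws_lambda_function.my_function.arn  # any attribute you want echoed by the pipeline
  }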

Destroying the Stack

Destroying follows the same steps as deploying, except that the output step is omitted after the teardown. As noted in section 1.4, if you just need to make changes to a deployed resource, you can edit the Terraform file and run the deployment pipeline again - Terraform will find the difference and build on top of the current state. You should therefore rarely need this pipeline.
