In most scenarios, we like to deploy our infrastructure to two environments, stage and production, each a replica of the other. This gives us and our clients confidence when going live, i.e. there are no hidden gotchas in the production environment.
However, as with much software development, there are often multiple pieces of work in the pipeline at the same time. In situations like this, what is deployed to stage before going live can often be in flux. Usually this would mean deploying a specific branch to staging rather than auto-deploying the develop branch on merge, which is our preferred practice. This can be even trickier if the branch needs to be tested for a longer period, perhaps involving other stakeholders.
This blog will explain one simple way to work around this if you are using AWS Copilot (see our previous post) for your deployments, and how we use it to solve issues like additional cost and differing Django migrations between branches.
We recently needed a third environment for a client, one that would ideally be cheaper than another replica of stage/production and allow us a development process that:
For the most part, the solution was quite simple: we already had everything we needed, it just required a few small tweaks.
We first created and deployed a new environment (called test) in Copilot, using the same manifest as production and stage. Then we created a manually triggered Bitbucket pipeline that allows a developer to select a branch in Bitbucket and run the pipeline against it. The pipeline simply calls:
copilot deploy --env test --name <project_name>
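For reference, a minimal sketch of such a custom (manually triggered) Bitbucket pipeline might look like the following. The pipeline name and build image are illustrative, and the step assumes the Copilot CLI and AWS credentials are available in the build environment:

```yaml
# bitbucket-pipelines.yml (sketch; names and image are illustrative)
image: amazon/aws-cli  # assumes the Copilot CLI is also installed or fetched

pipelines:
  custom:
    deploy-to-test:  # appears under "Run pipeline" in the Bitbucket UI for any branch
      - step:
          name: Deploy selected branch to the test environment
          script:
            - copilot deploy --env test --name <project_name>
```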
When the developer runs this, it deploys the branch, which takes around 10 minutes. To keep the cost of this new environment low, since it would not always be in use, we created a second, scheduled pipeline that tears down the environment every Friday, which fits with our needs. This pipeline calls the following to remove the ECS service:
copilot svc delete --env test --name <project_name> --yes
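The teardown can be expressed as another custom pipeline definition; in Bitbucket, schedules are configured in the repository settings and pointed at a definition like this (the name is illustrative):

```yaml
pipelines:
  custom:
    teardown-test:  # point a weekly (Friday) Bitbucket schedule at this definition
      - step:
          name: Tear down the test environment's ECS service
          script:
            - copilot svc delete --env test --name <project_name> --yes
```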
We were also able to keep costs down further by adding a Copilot sidecar for Redis, rather than using AWS ElastiCache as per stage and production, pointing the Redis host to localhost in our secrets/settings for this environment. To add this sidecar only for this environment, simply add it to the environments section of your manifest as below. Note we also make the base image (Django) depend on this sidecar starting, as per the startup sidecar in our initial post:
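A sketch of what that manifest override might look like (the Redis image tag is an assumption; check the Copilot manifest reference for the exact fields supported by your version):

```yaml
# Copilot service manifest (sketch) - the sidecar only exists in the test environment
environments:
  test:
    sidecars:
      redis:
        image: public.ecr.aws/docker/library/redis:7-alpine  # image tag is an assumption
        port: 6379
    image:
      depends_on:
        redis: start  # Django container waits for the Redis sidecar to start
```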
You could also take this further with other containerized parts of your infrastructure if you wish, such as the database, given these are essentially temporary deployments that are torn down regularly. In our example, however, we used an RDS instance.
One more thing we did was set our startup sidecar's command (which runs the Django migrations) to the below. It leverages django_extensions to reset the test DB on deployment, and we also wrote a simple management command to copy the staging DB afresh on each deployment. This stops us getting into a mess when branches are moving backwards and forwards between migrations. Again, this is applied within the environments section so as not to affect the other two environments' startup.
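The command looks roughly like the following. reset_db comes from django_extensions; copy_staging_db is a stand-in name for our simple custom management command, and the sidecar name is illustrative:

```yaml
# environments override (sketch) for the startup sidecar's command
environments:
  test:
    sidecars:
      startup:
        command: >
          sh -c "python manage.py reset_db --noinput &&
                 python manage.py copy_staging_db &&
                 python manage.py migrate"
```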
Of course, you could take this a lot further and have all your feature branches deploy and create environments on the fly, but for our needs this was a nice, simple solution with surprisingly few changes needed.