
A faster and more reliable backend

Last night, the Swift Calcs team completed the transition of the entire Swift Calcs platform from Heroku to Amazon Web Services (AWS).

Why the change?

Heroku is a fantastic service (this blog you're reading? Yup, on Heroku). It allows the near immediate creation of sophisticated web environments with little to no effort. When we started Swift Calcs, we wanted to get our prototypes online, not worry about server configurations or database provisioning. Heroku does all this work for you seamlessly, allowing you to focus on what you care about.

Unfortunately, this comes with some costs. First, COST! Heroku runs on AWS, so you're effectively paying a markup on the services you're getting from Amazon. For a small starter site, this is a non-issue...but as servers get larger and more complex, that extra overhead adds up. Plus, Swift Calcs is a member of the AWS Activate program, which provides young companies like us with free AWS credits, credits we can't use with Heroku.

The second cost, and arguably the reason we made the switch, is customization. Heroku is very good at getting you up and running fast, but that means its offerings are pre-packaged and standardized. Tweaking servers, adding services, customizing the database, and more becomes increasingly difficult as you scale. AWS, on the other hand, is more burdensome to set up, but once it's going, it's incredibly flexible. Customizing the server configurations, database, and workers makes for a better solution for Swift Calcs.

What it means for you

As a user, this change should be completely transparent (apart from the 20 minutes of maintenance downtime yesterday during the switchover). That's the point: you shouldn't notice it. Swift Calcs should be reliable and fast regardless of the time of day or the load on our servers, and the move to AWS makes it easier for us to keep it that way. We will also be rolling out new features soon that became possible (or more straightforward to implement) thanks to the move, including the ability to reply to support ticket emails directly (currently you have to visit our website to reply), the ability to create new tickets by emailing [email protected], bounce notifications for invitations, and more.

How we did it

For the tech stars out there, the transition wasn't as painful as we feared. Feel free to reach out to us for advice; here are the steps we followed (our application is primarily Rails based and was running on a standard Heroku Rails setup), with a command-level sketch after the list:

  1. We created an Elastic Beanstalk (EB) application using the same Ruby version as our Heroku app. One note: various articles online describe the need for an .ebextensions folder telling the EC2 instances to install dependent libraries (git, etc.) via yum. None of our gems are loaded through git, so we skipped this step and had no issues with the default set of libraries in the Linux AMI we used.
  2. We did NOT create a database with the Elastic Beanstalk environment. Why? Even Amazon tells you not to do this: the RDS instance becomes tied to the EB environment, so if you terminate or rebuild the environment, the RDS instance can be torn down and all your data lost. Instead, we went to RDS and provisioned a database directly.
  3. Using a database dump from Heroku, we used pg_restore to load our data into our new RDS instance.
  4. In our Elastic Beanstalk environment, we updated the configuration to add the environment variables used by the app (the database host/user/pass, various third-party API keys, etc.).
  5. We loaded our SSL certificate into the load balancer associated with the EB environment.
  6. We repointed our DNS records to Amazon. We use Cloudflare to manage our DNS (among other things...), and updating the CNAME records was easy. You may want to lower your record TTLs to a couple of minutes a few days beforehand to ensure a quick changeover; you can raise them back to 24+ hours afterwards.
  7. Done! Our app was up and running on AWS.
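For reference, here is a rough command-level sketch of steps 1 through 4 above. Every identifier in it is a placeholder (application and environment names, region, instance class, hostnames, usernames, and environment variable names are all made up for illustration); your values will differ:

    # Step 1: create the EB application and environment on a matching Ruby platform
    eb init swiftcalcs --platform ruby --region us-east-1
    eb create swiftcalcs-prod

    # Step 2: provision the database directly in RDS, NOT through EB
    aws rds create-db-instance \
        --db-instance-identifier swiftcalcs-db \
        --engine postgres \
        --db-instance-class db.t2.medium \
        --allocated-storage 20 \
        --master-username appuser \
        --master-user-password '<password>'

    # Step 3: restore the Heroku dump into the new RDS instance
    pg_restore --clean --no-acl --no-owner \
        -h swiftcalcs-db.<id>.us-east-1.rds.amazonaws.com \
        -U appuser -d app_production latest.dump

    # Step 4: set the environment variables the app reads
    eb setenv DATABASE_HOST=<host> DATABASE_USER=appuser \
        DATABASE_PASSWORD=<password> SECRET_KEY_BASE=<secret>

Steps 5 and 6 (loading the SSL certificate into the load balancer and updating DNS) we handled through the AWS and Cloudflare consoles rather than the command line.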

Final notes: Heroku has a very nice service called 'scheduler' that makes it incredibly easy to define cron jobs for periodic tasks. It's more difficult on AWS: EC2 instances can start and stop at any time, and if you set up cron on every instance, your jobs will run on each one when you only want them to run once. There are ways around this (checking whether the instance is the leader, etc.), but we went with creating a second EB environment in a worker tier.

A worker tier EB environment can run periodic jobs listed in a file called cron.yaml in the root of your app: it POSTs to a specified endpoint in your worker-tier app at the intervals you define (see the AWS Elastic Beanstalk worker environments documentation). We run the exact same Rails app in the worker tier as on our web servers, but add routes to a special controller that contains all the logic for our periodic tasks. When a job fires, that 'cron' controller handles the request and performs the task, similar to how we used rake tasks for the same purpose on Heroku with scheduler. A sketch of the setup follows.
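As an illustration, a cron.yaml looks like this; the task name, path, and schedule here are made-up examples, not our actual jobs:

    version: 1
    cron:
      - name: "nightly-cleanup"         # illustrative task name
        url: "/cron/nightly_cleanup"    # the worker daemon POSTs to this path
        schedule: "0 2 * * *"           # standard cron syntax: daily at 02:00 UTC

And on the Rails side, a minimal sketch of the matching route and controller (again, the names are hypothetical):

    # config/routes.rb
    post '/cron/nightly_cleanup', to: 'cron#nightly_cleanup'

    # app/controllers/cron_controller.rb
    class CronController < ApplicationController
      # The worker daemon POSTs without a CSRF token
      skip_before_action :verify_authenticity_token

      def nightly_cleanup
        # ...periodic task logic, same as the old rake task...
        head :ok   # a 200 response tells the daemon the job succeeded
      end
    end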

A side benefit of doing this is that you can use the worker tier for a variety of other tasks, such as sending emails, handling inbound email, and more, and it integrates with Amazon's SQS service to make queueing jobs quick and easy. We are already finding a number of new interactions for our app using SQS with SNS and SES (AWS loves their three-letter acronyms), and this flexibility is opening new doors for our dev team.
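For example, queueing a background job from the web tier is just an SQS send; the worker daemon pulls the message off the queue and POSTs its body to your worker app. A minimal sketch using the aws-sdk-sqs gem (the region, queue name, and payload are hypothetical):

    require 'aws-sdk-sqs'
    require 'json'

    sqs = Aws::SQS::Client.new(region: 'us-east-1')
    queue_url = sqs.get_queue_url(queue_name: 'swiftcalcs-worker').queue_url

    # The worker-tier daemon delivers this message body as an HTTP POST
    # to the worker app's configured endpoint.
    sqs.send_message(
      queue_url: queue_url,
      message_body: { task: 'send_welcome_email', user_id: 42 }.to_json
    )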

We also had to update our build scripts for Elastic Beanstalk, but this was fairly easy as well: instead of a git push heroku, we now use an eb deploy in our deploy scripts. Our only complaint? Heroku is git based, so deploys are fast; it only needs to sync changes. EB, on the other hand, uploads a full zip archive for each deploy, which is slower, and when things go wrong you have to dig into the logs, as EB doesn't push as much information to the console. This makes deployments to our staging server a bit more time consuming, but these are small gripes, especially because logs aren't hard to pull with eb logs, and our entire app zips to only a couple of MB, so the full upload each time isn't much of a delay.
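Concretely, the deploy step in our scripts changed roughly like this (environment names are placeholders):

    # Before (Heroku): push the git branch; only the diff is sent
    git push heroku master

    # After (EB): the tracked files are zipped and the whole archive uploaded
    eb deploy swiftcalcs-staging

    # When something fails, pull the logs rather than expecting console output
    eb logs swiftcalcs-staging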
