
AWS re:Invent 2017: A few of our favourite things

(besides raindrops on roses and whiskers on kittens, that is)

New at re:Invent 2017

We spent last week in Las Vegas at AWS re:Invent, Amazon Web Services’ global cloud computing conference.

As always, a huge number of new services and features were announced this year. It can be hard to get your head around all the new stuff AWS throws out every year, so here's our pick of the ones that caught our eye and made us the most excited for the future.

This obviously isn’t an exhaustive list – it’s mostly those services most relevant to us. If you want the full list, AWS has a big page of everything announced at re:Invent which we recommend you check out. But without further ado, here is a list of our favourite things:

Fargate

We’re super-excited for the introduction of Fargate for Elastic Container Service. Fargate essentially completely removes the concept of host instances from running containers on ECS. You simply tell AWS “run these containers for me” and Fargate takes care of all the infrastructure invisibly and under the hood. This especially makes things like Auto Scaling a lot easier where you don’t have to worry about whether you’re using your host instance resources efficiently.

New Instance Types: M5, H1

AWS is constantly iterating on the performance of their EC2 instance types, so the M5 instances come as no surprise – they effectively replace the previous generation of M4 instances, with better performance and lower price. The H1 instances come with a ton of local disk storage and are ideal for performing big batch jobs against large amounts of data.

New Spot Instance Options

This is probably the biggest change to EC2's pricing model since the Spot market was first introduced a few years ago: Spot Instances can now be launched instantly via the RunInstances command, exactly like On-Demand instances and without a bid price, while keeping all the pricing benefit Spot Instances have always offered. It's now just as easy to launch a Spot Instance as an On-Demand one; the only difference is that Spots are often around a tenth of the price (and they occasionally get shut down if the price fluctuates). There have also been a few changes under the hood to how the pricing system works, so it's less likely to fluctuate wildly and shut down a bunch of your instances at once.
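
For example, with boto3 the only difference from an On-Demand launch is one extra parameter (the AMI ID is a placeholder):

```python
# Sketch: launch a Spot Instance directly with RunInstances, no bid price needed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},  # the only Spot-specific bit
)
print(response["Instances"][0]["InstanceId"])
```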

Spot Hibernation

One of the key features of Spot Instances is that they may get shut down at any moment if AWS needs the extra capacity for On-Demand instances. For a while now it's been possible to receive a warning that your instance is about to be shut down, but until now the only option at that point has been to stash all your data somewhere external before the instance is terminated and retrieve it once a new instance comes up. Now Spot Instances support hibernation, effectively saving the machine's full state to EBS so it can be seamlessly resumed when the instance is re-launched.
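
Opting in looks something like the sketch below. As we understand it, hibernation needs a persistent Spot request and a suitably sized, encrypted EBS root volume, and the AMI is a placeholder:

```python
# Sketch: ask EC2 to hibernate, rather than terminate, this Spot Instance
# when it gets interrupted. The persistent-request and encrypted-EBS-root
# requirements are assumptions worth double-checking in the docs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "persistent",
            "InstanceInterruptionBehavior": "hibernate",
        },
    },
)
```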

T2 Unlimited

We love T2 instances at GoSquared. We've written about them in the past. They're an ideal instance type for low-CPU or very spiky workloads. Out of the box, a T2 instance gets a certain allowance of CPU usage, which lets you maintain a low utilisation on average with the occasional burst of high CPU for intensive workloads. The new T2 Unlimited feature lets you burst beyond that allowance, either by borrowing future CPU credits up to a day in advance or by paying for the surplus usage on a millisecond-by-millisecond basis.
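
Switching it on is a one-liner, either at launch or on an instance that's already running (the IDs below are placeholders):

```python
# Sketch: enable T2 Unlimited via the instance's credit specification.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# At launch time:
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    CreditSpecification={"CpuCredits": "unlimited"},
)

# Or on an existing instance:
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
    ]
)
```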

Aurora

Aurora is AWS’s game-changing cloud-first implementation of MySQL and PostgreSQL. It’s their fastest-growing product ever, and with the features it provides that’s hardly surprising. And this year they’ve added yet more features that may change the way we think about cloud-based relational databases:

Multi-Master

Aurora has always supported scaling database reads by adding several read replicas – much like the old-school MySQL replication system, you can spin up as many replicas as you like and spread your read requests across them all to scale reads almost indefinitely. Previously, however, it wasn't possible to scale writes to the database. With Aurora Multi-Master that's now possible – you can spin up multiple write instances as well as read replicas, making Aurora fully horizontally scalable.

Multi-Region Multi-Master

This announcement didn’t even make it onto AWS’s announcement page for new products at re:Invent, but we think this is game-changing too. Aurora has supported cross-region replication for read replicas for a while, but in 2018 the service will be adding support for multi-region multi-master configurations. That means the database can scale reads and writes across multiple regions and provide availability that was previously unimaginable with just standard primary/secondary replication.

Aurora Serverless

Aurora Serverless is a new option in the works for the MySQL-compatible version of Aurora which takes away a lot of the guesswork involved in predicting workloads. Instead of having to choose a particular instance type and scale for a certain amount of capacity and headroom, Aurora Serverless scales the database up and down for you based on usage, with a model similar to how ELBs currently work (there are still instances under the hood, but you only pay for the “capacity units” you use).

DynamoDB Global Tables

Aurora wasn’t the only service to get a multi-region upgrade at re:Invent this year. DynamoDB now has support for Global Tables, effectively replicating writes and eventually-consistent reads across multiple regions. Combine this with recent announcements such as increased runtime limits for Lambda@Edge and it’s now easier than ever to move your data closer to your users.

Amazon MQ

Amazon MQ is a managed Apache ActiveMQ service that takes care of all the ops work needed to maintain a highly-available broker. It's not a fully-managed, scalable, serverless service like SQS or DynamoDB; you still choose an instance type for deployment, which makes it a little more like ElastiCache or RDS. Still, since we use RabbitMQ internally here at GoSquared for a lot of its brokering/routing power, we're interested to see where Amazon MQ goes and how it can help take that ops work away for us.

Bigger Lambdas

Lambda functions can now consume up to 3008MB of memory, which is great if you’re using them for memory-intensive workloads such as big data analysis or processing large images.
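
Bumping an existing function up to the new ceiling is a one-line configuration change (the function name is hypothetical):

```python
# Sketch: raise a function's memory allocation to the new 3008MB maximum.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

lam.update_function_configuration(
    FunctionName="image-resizer",              # hypothetical function name
    MemorySize=3008,
)
```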

More Control over Lambda concurrency

Lambda has always had an invisible, under-the-hood limit on the number of concurrent functions you can have running at any one time in your account. This limit has always been opaque, and the only way to know you've run into it is when your functions suddenly stop working. Lambda now has new controls over this limit, allowing you to view concurrency metrics and cap concurrency on a per-function level (so no single function can run away and exhaust your account-level limit, for example).
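
Capping a single function looks something like this (the function name and the limit are just examples):

```python
# Sketch: reserve concurrency for one function so it can't exhaust
# the account-wide concurrency limit.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

lam.put_function_concurrency(
    FunctionName="webhook-handler",            # hypothetical function name
    ReservedConcurrentExecutions=50,           # cap at 50 concurrent executions
)
```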

New Lambda deployment options

We use Lambda all over the place here at GoSquared. One thing we’ve always found tricky is how best to develop and test our functions. In an ideal world we’d write completely exhaustive test suites, so we know that every function works perfectly before we roll it out. But in the real world, there’s always something that your tests can’t catch. Now with Lambda’s new Traffic Shifting feature, it’s possible to gradually shift traffic from old to new versions of functions, making it easier to spot errors without having to do a full all-or-nothing deploy.
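
Under the hood this works via weighted alias routing, something like the sketch below, where the function, alias and version numbers are our own examples:

```python
# Sketch: keep the "live" alias mostly on the current version, but send
# 10% of invocations to the newly published version while we watch for errors.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

lam.update_alias(
    FunctionName="webhook-handler",            # hypothetical function name
    Name="live",
    FunctionVersion="41",                      # version serving 90% of traffic
    RoutingConfig={"AdditionalVersionWeights": {"42": 0.10}},  # 10% to version 42
)
```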

API Gateway deployment options

Much like the new Lambda deployment options, API Gateway now allows for better control of deployments, with gradual rollouts rather than all-or-nothing releases that leave little room for catching errors not covered by tests.
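
As a sketch (the REST API ID and stage name are placeholders), a canary deployment that sends 10% of requests to the new deployment looks something like:

```python
# Sketch: deploy to the "prod" stage as a canary taking 10% of traffic.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

apigw.create_deployment(
    restApiId="a1b2c3d4e5",                    # placeholder REST API ID
    stageName="prod",
    canarySettings={"percentTraffic": 10.0},   # 10% of requests hit the canary
)
```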

AI

AWS has been making big strides in the field of AI and machine learning recently. We’ve been involved in the announcements for a few of these so we’re super-excited for what they entail:

DeepLens

DeepLens is AWS’s first deep-learning-enabled cloud-connected camera. It’s primarily geared up to facilitate the teaching of machine learning. While it’s not the sort of thing we’ll be using directly here at GoSquared, we’re impressed by AWS’s ambition for this device to enable the future of machine learning education.

AI Services: Comprehend, Translate, Transcribe

Comprehend, Translate and Transcribe are AWS's new services for natural language processing, machine translation, and automatic speech recognition. They do pretty much exactly what it says on the tin, exposed as scalable, serverless APIs. We're excited to see what we can do with these to enable future features in Assistant.
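
Each one is a straightforward API call. For example, sentiment analysis with Comprehend and translation with Translate (the text is just an example):

```python
# Sketch: sentiment analysis and translation with the new NLP services.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

sentiment = comprehend.detect_sentiment(
    Text="re:Invent was fantastic this year!",
    LanguageCode="en",
)
print(sentiment["Sentiment"])                  # e.g. POSITIVE

translated = translate.translate_text(
    Text="re:Invent was fantastic this year!",
    SourceLanguageCode="en",
    TargetLanguageCode="fr",
)
print(translated["TranslatedText"])
```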

SageMaker

SageMaker is a new service which makes it easier than ever to get up and running with custom ML models. Previously, a lot of knowledge was required to train, optimise, deploy and scale ML models, but SageMaker takes away much of that undifferentiated heavy lifting and makes it easier to focus on what matters: working with models and data.

GuardDuty

AWS has had features for logging activity in your AWS account for a long while: CloudTrail, network flow logs, and DNS logs. But unless you know what you're looking for, it's hard to spot suspicious activity in them. GuardDuty takes on this role, automatically plugging into all your available logs and flagging any suspicious activity on your account.
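
Turning it on is about as simple as it gets: one detector per region.

```python
# Sketch: enable GuardDuty in a region by creating a detector.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

detector = guardduty.create_detector(Enable=True)
print(detector["DetectorId"])
```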

Managed WAF rules

AWS WAF is great for mitigating suspicious HTTP activity on ELBs, filtering out requests that match certain rules and preventing them from hitting your infrastructure. But you have to set up and maintain those rules for yourself and know what you’re looking for. Now AWS have enabled sellers on their Marketplace to set up managed rules, effectively rule templates which get continually updated, so you don’t have to worry about keeping them maintained yourself.

S3 and Glacier Select

When doing large-scale analysis across lots of data in S3 it's important to be efficient. But if, for example, you only need certain columns of data from a CSV file, or certain items in large JSON objects, your only option previously was to fetch the entire file and then filter out the data you're interested in. S3 Select is an interesting new way of running that filtering entirely on the S3 side, drastically cutting down on the amount of data you need to shift back and forth. Even better, it's also available for data that's been cycled out to Glacier, which means a dramatic saving on the amount of data you need to restore to S3's standard storage classes.
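
A query looks something like this sketch. The bucket, key and column names are our own, and the response comes back as an event stream:

```python
# Sketch: ask S3 to filter a CSV object server-side and return only the
# columns and rows we care about.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

resp = s3.select_object_content(
    Bucket="my-analytics-bucket",              # hypothetical bucket
    Key="events/2017-12.csv",                  # hypothetical object key
    ExpressionType="SQL",
    Expression="SELECT s.user_id, s.event FROM S3Object s WHERE s.event = 'pageview'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

for event in resp["Payload"]:                  # stream of result events
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```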

But wait, there’s more!

We’ve simply picked out our favourite new AWS services and features here. There’s a whole bunch more that got announced during re:Invent. AWS’s page on new releases has a more exhaustive list, plust they’re announcing new things all the time. We can’t wait to see what they announce next.

Written by
JT is a co-founder and the lead front-end engineer at GoSquared. He's responsible for the shiniest of the shiny projects we work on.
