
AWS re:Invent 2020: Week 1

Our summary of re:Invent week 1.

AWS re:Invent was always going to be a bit different this year, for reasons that everyone in the world is aware of.

As much as we miss the Las Vegas heat and meeting people in person to talk about all things AWS, there have been a bunch of exciting announcements that we’re very keen to sink our teeth into. Although recreating the Las Vegas trip is hard, it’s fantastic that by being a free, virtual event, people that wouldn’t usually be able to attend have been able to get involved.

For announcements we haven’t covered, check out CSO@re:Invent for a full breakdown. It’s a great resource for expert analysis on each release.

Standouts for us include 1ms billing granularity for Lambda and strong read-after-write consistency in S3. We’ll be covering the bigger releases in much more detail post re:Invent, so stay tuned.

EC2

The focus so far with EC2 has been on releasing new instance types that solve common customer problems—these will no doubt be received very well across the board.

The infrastructure keynote is on December 10th, 15:30 – 17:30 GMT, where we can expect to see lots more announcements around EC2, RDS, container services, and so on.

EC2 G4ad Instances

First up is Amazon EC2 G4ad Instances Featuring AMD GPUs for Graphics Workloads. This is particularly exciting for customers with high-performance graphics workloads such as game streaming, animation, and video rendering.

These new instances deliver higher performance at a lower cost. They’ll be available soon in US East (N. Virginia), US West (Oregon), and Europe (Ireland).

You can read the full announcement here: https://aws.amazon.com/blogs/aws/new-amazon-ec2-g4ad-instances-featuring-amd-gpus-for-graphics-workloads/

EC2 M5zn Instances

New EC2 M5zn Instances have been announced. These are like the z1d instances (extremely high per-core performance and a high memory-to-core ratio) but with no local NVMe storage, higher networking throughput, and a reduced memory-to-vCPU ratio. This makes them perfect for workloads such as gaming, financial applications, simulation modelling applications (such as those used in the automobile, aerospace, energy and telecommunication industries), and High-Performance Computing (HPC).

Check out the full announcement here: https://aws.amazon.com/blogs/aws/new-ec2-m5zn-instances-fastest-intel-xeon-scalable-cpu-in-the-cloud/

EC2 C6gn Instances

EC2 C6gn Instances are coming soon. These deliver up to 100 Gbps network bandwidth, up to 38 Gbps Amazon Elastic Block Store (EBS) bandwidth, up to 40% higher packet processing performance, and up to 40% better price/performance versus comparable current-generation x86-based network optimised instances.

Customers with workloads needing high networking bandwidth will appreciate these AWS Graviton2-powered instances.

The full announcement is here: https://aws.amazon.com/blogs/aws/coming-soon-ec2-c6gn-instances-100-gbps-networking-with-aws-graviton2-processors/

EC2 R5b Instances

A new addition to the EC2 R5 instance family has been announced: R5b. Powered by the AWS Nitro System to provide the best network-attached storage performance available on EC2, the new instances offer up to 60 Gbps of EBS bandwidth and 260,000 I/O operations per second (IOPS), which is 3x higher EBS performance than comparable R5 instances.

Read about EC2 R5b instances in full here: https://aws.amazon.com/blogs/aws/new-amazon-ec2-r5b-instances-providing-3x-higher-ebs-performance/

D3 and D3en Instances

New D3 and D3en Instances have been announced. These are relevant to customers that need massive amounts of very economical on-instance storage for their data warehouses, data lakes, network file systems, Hadoop clusters, and so on.

The new D3 instances are available in four sizes, with up to 32 vCPUs and 48 TB of storage.

The full announcement post is here: https://aws.amazon.com/blogs/aws/ec2-update-d3-d3en-dense-storage-instances/

EC2 Mac Instances

You can now use Amazon EC2 Mac instances to build and test macOS, iOS, iPadOS, tvOS, and watchOS apps.

The instances feature an 8th generation, 6-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz, with Turbo Boost up to 4.6 GHz. You can use these instances to create build farms, render farms, and CI/CD farms that target all of the Apple environments.

As always with EC2, you pay only for what you use, and you get to benefit from the usual elasticity, scalability, security, and reliability.

Read the full announcement post here: https://aws.amazon.com/blogs/aws/new-use-mac-instances-to-build-test-macos-ios-ipados-tvos-and-watchos-apps/

EBS Storage

New gp3 Volume Type

A new type of SSD EBS volume, gp3, has been announced. It lets you provision performance independently of storage capacity and offers a 20% lower price per GB than existing gp2 volumes.

The new gp3 is the 7th variation of EBS volume types. It is ideal for applications that require high performance at a low cost such as MySQL, Cassandra, virtual desktops and Hadoop analytics.
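To see why decoupling performance from capacity matters, here's a small Python sketch comparing gp2's size-linked baseline IOPS (3 IOPS per GiB, floored at 100 and capped at 16,000, per AWS's published volume specs) with gp3's flat 3,000 IOPS baseline:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline IOPS scale with volume size: 3 IOPS/GiB, floored at 100, capped at 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

# gp3 includes a 3,000 IOPS baseline regardless of volume size,
# and performance can be provisioned beyond that independently of capacity.
GP3_BASELINE_IOPS = 3_000

for size_gib in (50, 500, 2_000):
    print(f"{size_gib} GiB: gp2 baseline {gp2_baseline_iops(size_gib)} IOPS "
          f"vs gp3 baseline {GP3_BASELINE_IOPS} IOPS")
```

With gp2, a small volume had to be over-sized just to get IOPS; with gp3 you size for data and provision performance separately.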

Read about the new gp3 in full here: https://aws.amazon.com/blogs/aws/new-amazon-ebs-gp3-volume-lets-you-provision-performance-separate-from-capacity-and-offers-20-lower-price/

Lambda

Container Image Support

You can now package and deploy Lambda functions as container images of up to 10 GB in size. AWS are also providing base images for all the supported Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby) so that you can easily add your code and dependencies.

Also being released (as open source) is a Lambda Runtime Interface Emulator that enables you to perform local testing of the container image and check that it will run when deployed to Lambda. This is included in all AWS-provided base images and can be used with arbitrary images as well.
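As a rough sketch of what packaging looks like in practice, a Dockerfile built on one of the AWS-provided Python base images might look like the following (the `app.py` file and handler name are hypothetical):

```dockerfile
FROM public.ecr.aws/lambda/python:3.8

# Copy function code into the location the base image expects
COPY app.py ${LAMBDA_TASK_ROOT}

# Install dependencies alongside the function code
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Handler: module "app", function "handler" (hypothetical names)
CMD ["app.handler"]
```

You then push the built image to ECR and point the function at it when creating or updating it.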

Read about Container Image Support in its entirety here: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/

Functions with Up to 10 GB of Memory and 6 vCPUs

You can now allocate up to 10 GB of memory to a Lambda function. This is a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured, which means you can now have access to up to 6 vCPUs in each execution environment.
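Since CPU scales linearly with memory, you can estimate the vCPU share for any memory setting. A minimal sketch, assuming the roughly 1,769 MB-per-vCPU ratio that AWS documents:

```python
MB_PER_VCPU = 1_769  # AWS documents ~1 full vCPU at 1,769 MB; treat as an approximation

def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share available for a given Lambda memory setting."""
    return memory_mb / MB_PER_VCPU

# At the new 10,240 MB ceiling the function approaches the 6 vCPU maximum
print(round(approx_vcpus(10_240), 2))
```

Below 1,769 MB a function gets a fraction of a vCPU, which is worth keeping in mind when tuning CPU-bound workloads.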

This means that new use cases, like machine learning applications, modelling, gaming, high-performance computing, and so on, become easier to implement and scale with Lambda functions.

Read about it in full here: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-functions-with-up-to-10-gb-of-memory-and-6-vcpus/

1ms Billing Granularity

Starting now, AWS are rounding up duration to the nearest millisecond with no minimum execution time.

Previously, duration was rounded up to the nearest 100ms, so this is a really big deal cost-wise if you have a lot of sub-100ms Lambda functions.

With this new pricing, you are going to pay less most of the time, but it’s going to be more noticeable when you have functions whose execution time is much lower than 100ms, such as low latency APIs.
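A quick sketch of the difference, using the published per-GB-second duration price at the time of writing and a hypothetical 15 ms, 128 MB function:

```python
import math

PRICE_PER_GB_SECOND = 0.0000166667  # published Lambda duration price at the time of writing

def billed_cost(duration_ms: float, memory_gb: float, granularity_ms: int) -> float:
    """Cost of one invocation, with duration rounded up to the billing granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    return (billed_ms / 1000) * memory_gb * PRICE_PER_GB_SECOND

# A 15 ms invocation was previously billed as 100 ms; now it's billed as 15 ms
old = billed_cost(15, 0.125, 100)
new = billed_cost(15, 0.125, 1)
print(f"saving per invocation: {1 - new / old:.0%}")  # 85%
```

For long-running functions the rounding is a negligible fraction of the bill, which is why the fast, frequently invoked functions see the biggest savings.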

Read all about it here: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-1ms-billing-granularity-adds-cost-savings/

Containers

New Public Container Registry

Amazon Elastic Container Registry Public (ECR Public) has been launched.

ECR Public allows you to store, manage, share, and deploy container images for anyone to discover and download globally. We’ve been able to host private container images on AWS for a while, but with ECR Public we can now host public ones too.

This means that anyone (with or without an AWS account) can browse and pull your published container artifacts.

This is in response to the pull-rate limits Docker Hub introduced earlier this year. It’s great to see how quickly AWS have responded with a great new launch.

AWS has also launched a website where you can browse and search for public container images, view developer-provided details, and discover the commands you need to pull containers.

The full announcement post is here: https://aws.amazon.com/blogs/aws/amazon-ecr-public-a-new-public-container-registry/

EKS Distro (EKS anywhere coming 2021)

EKS Distro is a Kubernetes distribution based on and used by Amazon Elastic Kubernetes Service (Amazon EKS) to create reliable and secure Kubernetes clusters. With EKS Distro, you can rely on the same versions of Kubernetes and its dependencies deployed by EKS.

EKS Distro includes upstream open source Kubernetes components and third-party tools including configuration database, network, and storage components necessary for cluster creation.

In 2021, EKS Anywhere will be launched, which will provide an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support.

You can read the full blog post here: https://aws.amazon.com/blogs/aws/amazon-eks-distro-the-kubernetes-distribution-used-by-amazon-eks/

ECS Anywhere

ECS Anywhere is an extension of Amazon ECS that will allow customers to deploy native Amazon ECS tasks in any environment, whether on traditional AWS-managed infrastructure or on customer-managed infrastructure. ECS Anywhere will be generally available in 2021.

The full post is here: https://aws.amazon.com/blogs/containers/introducing-amazon-ecs-anywhere/

S3

Strong Read-After-Write Consistency

All S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket.

This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge.

It can’t be overstated how big a deal this one is. We’ve found that a lot of people new to S3 assumed this was the default behaviour, only to be tripped up after using S3 for a while.

Because S3 now has strong consistency, migration of on-premises workloads and storage to AWS should now be easier than ever before.

We’ll cover this in more detail after re:Invent.

The full announcement post can be read here: https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/

Multi-destination replication

AWS has announced S3 Replication support for multiple destination buckets—you can now replicate data from one source bucket to multiple destination buckets.

With S3 Replication (multi-destination) you can replicate data in the same AWS Regions using S3 SRR or across different AWS Regions by using S3 CRR, or a combination of both. This removes the need for you to develop your own solutions to replicate the data across multiple destinations.

S3 Replication (multi-destination) is an extension to S3 Replication, and it supports all existing S3 Replication features like Replication Time Control (RTC) and delete marker replication.
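Multiple destinations are expressed as multiple rules in a bucket's replication configuration. A minimal sketch of such a configuration (the bucket names and role ARN are placeholders):

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "ID": "replicate-to-eu",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Enabled" },
      "Destination": { "Bucket": "arn:aws:s3:::example-dest-eu-west-1" }
    },
    {
      "ID": "replicate-to-us-west",
      "Status": "Enabled",
      "Priority": 2,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Enabled" },
      "Destination": { "Bucket": "arn:aws:s3:::example-dest-us-west-2" }
    }
  ]
}
```

Each rule targets one destination bucket, which can be in the same Region (SRR) or a different one (CRR), so mixing the two is just a matter of which bucket ARNs you list.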

Read about S3 replication in its entirety here: https://aws.amazon.com/blogs/aws/new-amazon-s3-replication-adds-support-for-multiple-destination-buckets/

RDS

Aurora Serverless v2

At re:Invent 2017, AWS announced the original Amazon Aurora Serverless. Three years later, Amazon Aurora Serverless v2 is available in preview.

Amazon Aurora Serverless v2 provides the ability to scale database workloads to hundreds of thousands of transactions in a fraction of a second. Instead of doubling capacity every time a workload needs to scale, it adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs.

This means that you pay only for the capacity your application consumes, and you can save up to 90% of your database cost compared to the cost of provisioning capacity for peak load.
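A back-of-the-envelope sketch of where the savings come from, using a hypothetical daily load profile in capacity units and an illustrative (not published) per-unit price:

```python
# Hypothetical hourly load profile in Aurora capacity units (ACUs);
# the per-ACU price is illustrative, not a published figure.
PRICE_PER_ACU_HOUR = 0.12

hourly_acus = [2, 2, 2, 2, 2, 4, 8, 16, 32, 32, 16, 8,
               8, 8, 16, 32, 64, 64, 32, 16, 8, 4, 2, 2]

peak = max(hourly_acus)
provisioned_cost = peak * len(hourly_acus) * PRICE_PER_ACU_HOUR  # pay for peak all day
serverless_cost = sum(hourly_acus) * PRICE_PER_ACU_HOUR          # pay for what you use

print(f"saving vs peak provisioning: {1 - serverless_cost / provisioned_cost:.0%}")
```

The spikier the workload, the bigger the gap between provisioning for peak and paying for actual consumption, which is how the headline savings figures arise.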

Read about Aurora Serverless v2 in full here: https://aws.amazon.com/about-aws/whats-new/2020/12/introducing-the-next-version-of-amazon-aurora-serverless-in-preview/

That’s it for week 1. We’re excited about all of these announcements and are really looking forward to seeing what week 2 has in store.

Written By Rob Greenwood 3 Dec 2020