re:Invent: December 3rd announcements

Day 2 of re:Invent focused on bringing AWS even closer to the edge user, with the announcements of AWS Outposts (AWS in your office or data centre), AWS Local Zones (AWS closer to large population centres), and AWS Wavelength (AWS within your 5G mobile carrier's network).

There have also been a number of releases around infrastructure as a service, machine learning and artificial intelligence, and a handful of key serverless releases, most notably around Fargate.

As in our pre-re:Invent post, I've outlined the major releases below. We'll start with how AWS are bringing their services closer to the edge user, before moving on to IaaS, machine learning/AI, and serverless.

There’s quite a lot here—cmd+f might be helpful if you’re looking for something specific.

AWS Moving Closer to the Edge User

AWS Outposts

AWS Outposts were first mentioned back at re:Invent 2018. AWS are now ready to take orders and install Outposts racks in data centres or colo facilities.

Outposts are a comprehensive, single-vendor compute and storage solution designed to meet the needs of customers who need local processing and very low latency. Once installed, AWS take care of monitoring, maintaining, and upgrading the Outposts.

To learn more about the incredibly cool Outposts, follow this link:

AWS Local Zones

AWS Local Zones are a new type of AWS infrastructure deployment that brings select AWS services very close to a particular geographic area.

The Local Zone in Los Angeles has now been launched. The Local Zone is designed to provide very low latency to applications that are accessed from Los Angeles and other locations in Southern California.

Fingers crossed that some UK Local Zones are on the way—we’d love a Manchester Local Zone.

Read more about AWS in Los Angeles here:

AWS Wavelength (preview)

AWS Wavelength enables developers to build applications that deliver single-digit millisecond latencies to mobile devices and end-users.

Benefits of AWS Wavelength include ultra-low latency for 5G applications, a consistent AWS experience, flexibility and scalability, and availability across the global 5G network.

You can read more about AWS Wavelength here:

Infrastructure as a Service

Inf1 Instances & Graviton2 Instances

AWS have launched Inf1 instances in four sizes: inf1.xlarge, inf1.2xlarge, inf1.6xlarge, and inf1.24xlarge. They're powered by AWS Inferentia chips and are designed to provide high-throughput, low-latency inference.

AWS Inferentia chips are designed to accelerate the inference process. Each chip can deliver the following performance:

  • 64 teraFLOPS on 16-bit floating point (FP16 and BF16) and mixed-precision data.
  • 128 teraOPS on 8-bit integer (INT8) data.

Learn more about Inf1 Instances here:

Also coming soon are Graviton2-powered EC2 instances. The first generation (A1) of Arm-based, Graviton-powered EC2 instances was announced at re:Invent 2018. Since then, thousands of AWS users have used them to run containerised microservices, web servers, data/log processing, and other scale-out workloads.

Graviton2 instances are the next generation of Arm-based EC2 instances. They are built on the AWS Nitro System and powered by the new Graviton2 processor, a custom AWS design based on 64-bit Arm Neoverse cores and built using a 7-nanometre manufacturing process.

Read about Graviton2 in much more depth here:

Amazon Managed Cassandra Service

Amazon Managed Cassandra Service (MCS) has been launched in open preview. MCS is a scalable, highly available, and managed Apache Cassandra-compatible database service.

MCS is serverless, meaning that you only pay for the resources you use—the service automatically scales tables up and down in response to application traffic.

With Amazon MCS, users can run Cassandra workloads on AWS using the same Cassandra application code and developer tools that they use today.

Read more about Amazon Managed Cassandra Service here:

EBS Direct APIs

EBS direct APIs provide users with access to snapshot content and are available now in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Tokyo) regions. They’ll be available in the other regions over the next few weeks.

The APIs are designed for developers of backup/recovery, disaster recovery, and data management products and services. The APIs will allow them to make their offerings faster and more cost-effective.

Learn more here:

AWS Compute Optimizer

AWS Compute Optimizer helps users optimise compute resources for their workloads.

AWS Compute Optimizer uses machine learning techniques to analyse the history of resource consumption on your account, and make actionable recommendations tailored to your resource usage. 

It is also integrated with AWS Organizations, meaning that you can view recommendations for multiple accounts from your master AWS Organizations account.

Read more about AWS Compute Optimizer here:

Amazon RDS Proxy (preview)

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure.

Amazon RDS Proxy sits between your application and your relational database, allowing applications to pool and share established database connections. This reduces the connection overhead on the database and improves the efficiency and scalability of the application.
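To illustrate the pooling idea (a toy sketch of the concept, not RDS Proxy's actual implementation), here's a minimal connection pool in Python: a fixed set of connections is opened once and shared across many requests.

```python
import queue

class ToyConnectionPool:
    """Minimal illustration of connection pooling: a fixed set of
    'connections' is created up front and shared across requests,
    instead of opening a fresh connection per request."""

    def __init__(self, size, connect):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())   # open all connections up front

    def acquire(self):
        return self._pool.get()         # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)            # hand the connection back for reuse

# Usage: ten requests are served by only two underlying "connections".
opened = []
pool = ToyConnectionPool(2, connect=lambda: opened.append(1) or f"conn-{len(opened)}")
for _ in range(10):
    c = pool.acquire()
    # ... run a query on c ...
    pool.release(c)
print(len(opened))  # → 2
```

RDS Proxy applies the same principle as a managed service, which is particularly useful for serverless workloads that would otherwise open a new database connection on every invocation.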

Learn more about Amazon RDS Proxy here:

AWS Transit Gateway: Multicast & Inter-Region Peering

AWS Transit Gateway enables AWS customers to connect thousands of Amazon Virtual Private Clouds and their on-premises networks using a single gateway.

Multicast makes it easy for users to build multicast applications in the cloud and distribute data across thousands of connected Virtual Private Cloud networks, delivering a single stream of data to many users simultaneously.

AWS is the first cloud provider to offer a native multicast solution which will enable customers to migrate their applications to the cloud and take advantage of the elasticity and scalability that AWS provides.
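At the network level, multicast means a single send reaches every subscribed receiver. As a local illustration of plain IP multicast using Python's standard socket module (this is generic socket programming, not the Transit Gateway API), a sender socket can be configured like this:

```python
import socket
import struct

# Example multicast group in the administratively scoped range.
GROUP = "239.0.0.1"
PORT = 5000

def make_multicast_sender(ttl=1):
    """Build a UDP socket configured for multicast sending.
    A single sendto() on this socket reaches every receiver that has
    joined GROUP, rather than one unicast send per receiver."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL=1 keeps multicast traffic on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

sender = make_multicast_sender()
# On a network with multicast routing, one send fans out to all group members:
# sender.sendto(b"one stream, many receivers", (GROUP, PORT))
```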

AWS Transit Gateway inter-region peering makes it easy to create secure and private global networks across multiple AWS regions. Users can create centralised routing policies between the different networks in their organisation and simplify management and reduce costs.

Read about AWS Transit Gateway Multicast & Inter-Region Peering here:

AWS Transit Gateway: Network Manager

AWS Transit Gateway Network Manager provides a single global view of your private network.

Network Manager reduces the operational complexity of managing a global network across AWS and on-premises. Users can easily set up a global view of their private network by registering their Transit Gateways and on-premises resources. The network can then be visualised and monitored through a centralised operational dashboard.

Read more here:

Machine Learning / AI

Amazon SageMaker Studio

Amazon SageMaker Studio has been launched—the first fully integrated development environment for machine learning.

Amazon SageMaker Studio brings together all the tools needed for ML development. Developers can write code, track experiments, visualise data, and perform debugging and monitoring within a single, integrated visual interface, significantly boosting productivity.

Learn more about Amazon SageMaker Studio here:

Amazon SageMaker Model Monitor

Amazon SageMaker Model Monitor is a new capability of Amazon SageMaker that automatically monitors machine learning models in production, and alerts you when data quality issues appear.

You can use SageMaker Model Monitor on any endpoint, whether the model was trained with a built-in algorithm, a built-in framework, or your own container.
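As a rough sketch of the underlying idea (the baseline values and threshold below are invented for illustration, not SageMaker's), monitoring boils down to comparing live data statistics against a baseline and flagging drift:

```python
# Toy illustration of data-quality monitoring: compare live feature
# statistics against a baseline and flag drift. The baseline and the
# 10% tolerance are made-up example values.
baseline = {"age_mean": 41.0, "income_mean": 52000.0}
tolerance = 0.10  # flag features that drift more than 10% from baseline

def drift_report(live_stats):
    """Return the list of features whose live statistic has drifted
    beyond the tolerance relative to the baseline."""
    violations = []
    for feature, base in baseline.items():
        live = live_stats[feature]
        if abs(live - base) / base > tolerance:
            violations.append(feature)
    return violations

# income_mean has drifted ~17%, so it is flagged; age_mean (~2%) is not.
print(drift_report({"age_mean": 40.2, "income_mean": 61000.0}))  # → ['income_mean']
```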

Read about Amazon SageMaker Model Monitor here:

Amazon SageMaker Experiments

Amazon SageMaker Experiments lets you organise, track, compare and evaluate machine learning experiments and model versions.

The goal of SageMaker Experiments is to make it as simple as possible to create experiments, populate them with trials, and run analytics across trials and experiments.

Learn more here:

Amazon SageMaker Debugger

Amazon SageMaker Debugger automatically identifies complex issues that develop in machine learning training jobs.

In your existing training code for TensorFlow, Keras, Apache MXNet, PyTorch and XGBoost, you can use the new SageMaker Debugger SDK to save internal model state at periodic intervals—it will be stored in Amazon Simple Storage Service.
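The pattern of saving internal state at periodic intervals can be sketched in plain Python. This is a generic illustration of the idea, not the SageMaker Debugger SDK itself, which hooks into the training framework and writes tensors to S3:

```python
# Generic "save internal state every N steps" pattern: snapshots taken
# during training can later be inspected to debug issues such as
# vanishing gradients or exploding losses.
SAVE_INTERVAL = 100
snapshots = {}

def train_step(step, model_state):
    model_state["loss"] = 1.0 / (step + 1)   # stand-in for real training work
    if step % SAVE_INTERVAL == 0:
        snapshots[step] = dict(model_state)   # snapshot state for later debugging

state = {}
for step in range(301):
    train_step(step, state)

print(sorted(snapshots))  # → [0, 100, 200, 300]
```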

Read about Amazon SageMaker Debugger in much more depth here:

Amazon SageMaker Autopilot

Amazon SageMaker Autopilot automatically creates the best classification and regression machine learning models, while allowing full control and visibility.

Amazon SageMaker Autopilot takes care of the heavy lifting of model building: users point it at a dataset, call an API, and Autopilot automatically explores data preprocessing, algorithm selection, and hyperparameter tuning to produce candidate models quickly.

Learn all about Amazon SageMaker Autopilot here:

Amazon Redshift: New Instances and Optimised Storage

AWS describe Amazon Redshift as the world's most popular cloud data warehouse, delivering up to 3x the performance of any other cloud data warehouse service. Existing Redshift customers using Dense Storage (DS2) instances will get up to 2x better performance and 2x more storage at the same cost by moving to the new RA3 instances.

The new ra3.16xlarge instances have 48 vCPUs, 384 GiB of memory, and up to 64 TB of storage. You can create clusters with 2 to 128 instances, giving over 8 PB of compressed storage.

The new optimised storage places a cache of large-capacity, high-performance SSD-based storage on each instance, backed by S3 for scale, performance, and durability. The storage system uses multiple cues, including data block temperature, data block age, and workload patterns, to decide what to keep in the cache.

Read more about the new instances and optimised storage here:

Amazon Redshift: Data Lake Export

You can now unload the result of a Redshift query to your S3 data lake in Apache Parquet format. This format is up to 2x faster to unload, and consumes up to 6x less storage in S3, compared to text formats.

This means that you can save the data transformation and enrichment you have done in Redshift into your S3 data lake in an open format.

Learn more here:


Fargate: EKS Fargate

You can now start using Amazon EKS to run Kubernetes pods on AWS Fargate.

EKS and Fargate make it easy to run Kubernetes-based applications on AWS by removing the need to provision and manage infrastructure for pods.

Customers no longer have to worry about patching, scaling, or securing a cluster of EC2 instances to run Kubernetes applications in the cloud. Using Fargate, customers define and pay for resources at the pod-level. This makes it easy to right-size resource utilisation for each application and allows customers to clearly see the cost of each pod.
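As a back-of-the-envelope illustration of pod-level billing (the per-vCPU and per-GB rates below are assumptions for the example; check current Fargate pricing for your region):

```python
# Pod cost under per-pod billing: you pay for the vCPU and memory the
# pod requests, for as long as it runs. Rates are placeholder examples.
VCPU_PER_HOUR = 0.04048   # assumed example rate, USD per vCPU-hour
GB_PER_HOUR = 0.004445    # assumed example rate, USD per GB-hour

def pod_cost(vcpu, memory_gb, hours):
    """Cost of a single pod given its requested resources and runtime."""
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

# A 0.5 vCPU / 1 GB pod running for 24 hours:
print(round(pod_cost(0.5, 1.0, 24), 4))  # → 0.5924
```

Because each pod's cost is a simple function of its own resource requests, right-sizing a single application directly shows up on the bill.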

Learn about how to build a cluster here:

Fargate: Fargate Spot

Fargate Spot is a new capability of AWS Fargate that runs interruption-tolerant Amazon ECS tasks at up to a 70% discount off the standard Fargate price.

The concept is similar to EC2 Spot Instances: spare capacity in the AWS cloud is used to run your tasks. When AWS needs the capacity back, tasks running on Fargate Spot are interrupted with two minutes of notice. For this reason, you shouldn't run tasks on Fargate Spot that cannot tolerate interruptions.

For your fault-tolerant workloads, Fargate Spot enables you to optimise your costs.

To learn more about Fargate Spot, follow this link:

Lambda: Provisioned Concurrency

Provisioned Concurrency is a feature that keeps functions initialised and hyper-ready to respond in double-digit milliseconds.

This is ideal for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs.

When you enable Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations.
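A toy model of the effect (the latency numbers below are illustrative only, not measured):

```python
# Toy model of provisioned concurrency: N execution environments are
# initialised ahead of time, so the first N concurrent invocations skip
# the cold start. Latency figures are invented for illustration.
COLD_START_MS = 800
WARM_MS = 15

def invoke(concurrent_requests, provisioned):
    """Per-request latency: warm for requests served by pre-initialised
    environments, cold start plus warm time for any overflow."""
    return [WARM_MS if i < provisioned else COLD_START_MS + WARM_MS
            for i in range(concurrent_requests)]

# Three environments provisioned, five simultaneous requests:
print(invoke(5, provisioned=3))  # → [15, 15, 15, 815, 815]
```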

Read more about Provisioned Concurrency here:

Express Workflows

AWS Step Functions were launched at re:Invent 2016, with AWS customers using them as a core element of their multi-step workflows. Step Functions make it easy to build, test and scale workflows.

Express Workflows are an alternative to the existing Standard Workflows. The Express Workflows use the same declarative specification model, but are designed for high-volume, short-duration use cases.
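A workflow definition is written in the Amazon States Language regardless of type. Here's a minimal, hypothetical definition sketched as a Python dict (the Lambda ARN is a placeholder); the Express type itself is chosen at creation time via the CreateStateMachine API's Type parameter:

```python
import json

# Minimal Amazon States Language definition (hypothetical workflow).
# The same declarative model is used for both Standard and Express
# Workflows; only the state machine's Type differs at creation.
definition = {
    "Comment": "High-volume, short-duration event processing",
    "StartAt": "Transform",
    "States": {
        "Transform": {
            "Type": "Pass",
            "Next": "Store",
        },
        "Store": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:store-event",  # placeholder
            "End": True,
        },
    },
}

# The definition is passed to the service as a JSON string.
definition_json = json.dumps(definition)
```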

See an in-depth look at Express Workflows here:

That’s it for Day 2. There’s a wide range of announcements and updates that we’re excited to start using, especially the serverless releases.

Our next re:Invent post will be live soon!



