The contents of this page are copied directly from AWS blog sites to make them Kindle friendly. Some styles and sections from these pages have been removed so that they render properly in the 'Article Mode' of the Kindle e-Reader browser. All contents of this page are the property of AWS.


Introducing Amazon SageMaker Canvas - a visual, no-code interface to build accurate machine learning models

Posted On: Nov 30, 2021

Amazon SageMaker Canvas is a new capability of Amazon SageMaker that enables business analysts to create accurate machine learning (ML) models and generate predictions using a visual, point-and-click interface, no coding required.

Amazon SageMaker Canvas provides an intuitive user interface to quickly connect to and access data from disparate sources, and to prepare that data for building ML models. SageMaker Canvas leverages powerful AutoML technology from Amazon SageMaker, which automatically trains and builds models based on your dataset, allowing SageMaker Canvas to identify the best model so you can generate single or bulk predictions. SageMaker Canvas is integrated with SageMaker Studio, making it easy for business analysts to share models with data scientists. SageMaker Canvas helps analysts within an enterprise, regardless of their technical skill, to create accurate machine learning models from disparate datasets and collaborate more effectively with data scientists.

Amazon SageMaker Canvas is now available in the US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) AWS Regions. To learn more, read the blog post here and refer to the documentation to get started.

» Amazon S3 Object Ownership can now disable access control lists to simplify access management for data in S3

Posted On: Nov 30, 2021

Amazon S3 introduces a new S3 Object Ownership setting, Bucket owner enforced, that disables access control lists (ACLs), simplifying access management for data stored in S3. When you apply this bucket-level setting, every object in an S3 bucket is owned by the bucket owner, and ACLs are no longer used to grant permissions. As a result, access to your data is based on policies, including AWS Identity and Access Management (IAM) policies applied to IAM identities, session policies, Amazon S3 bucket and access point policies, and Virtual Private Cloud (VPC) endpoint policies. This setting applies to both new and existing objects in a bucket, and you can control access to this setting using IAM policies. With the new S3 Object Ownership setting, you can easily review, manage, and modify access to your shared data sets in Amazon S3 using only policies.

ACLs were the original way to control access in S3. Subsequently, IAM and policies were introduced for permission control across AWS resources. Now, by enabling the S3 Object Ownership feature, you can change how S3 performs access control for a bucket so that only IAM policies are used. S3 Object Ownership's new Bucket owner enforced setting disables ACLs for your bucket and the objects in it, and updates every object so that each object is owned by the bucket owner. When you apply this setting, ownership change happens automatically, and applications that write data to a bucket no longer need to specify any ACL. You can enable this setting for existing buckets or when you create a new bucket.

Amazon S3 Object Ownership is available at no additional cost in all AWS Regions, excluding the AWS GovCloud (US) Regions and AWS China Regions. You can configure S3 Object Ownership through the S3 console, AWS Command Line Interface (CLI), Amazon S3 REST API, AWS Software Development Kits (SDKs), or AWS CloudFormation. To learn more about S3 Object Ownership, visit the S3 User Guide or read the AWS News Blog.
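
As a rough sketch of what enabling the setting looks like programmatically, the snippet below applies the Bucket owner enforced setting with the AWS SDK for Python (boto3); the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Apply the Bucket owner enforced setting: ACLs are disabled and the
# bucket owner becomes the owner of every object in the bucket.
s3.put_bucket_ownership_controls(
    Bucket="amzn-example-bucket",  # placeholder bucket name
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
    },
)
```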

» Introducing AWS Mainframe Modernization - Preview

Posted On: Nov 30, 2021

AWS Mainframe Modernization is a unique platform for mainframe migration and modernization. It allows customers to migrate and modernize their on-premises mainframe workloads to a managed and highly available runtime environment on AWS. This service currently supports two main migration patterns – replatforming and automated refactoring – allowing customers to select their best-fit migration path and associated toolchains based on their migration assessment results.

AWS Mainframe Modernization delivers:

  • Application and infrastructure agility to speed up time to market with cloud-speed development and operations
  • Managed security, resiliency, elasticity, and cost-efficiency, allowing the enterprise to focus on transforming business value rather than on the underlying infrastructure
  • A managed platform with proven toolchains aligned with popular migration and modernization patterns

Learn more on the web and in the documentation, and start planning your mainframe migration and modernization to AWS today.

    » Deny services and operations for AWS Regions of your choice with AWS Control Tower

    Posted On: Nov 30, 2021

    You can now use AWS Control Tower to deny services and operations in your Control Tower environments for the AWS Region(s) of your choice. Region deny capabilities complement existing AWS Control Tower Region selection and Region deselection features, providing you with the capabilities to address compliance and regulatory requirements while improving cost efficiency of expanding into additional Regions.

Control Tower Region deny helps you comply with business policies and regulatory requirements; for example, AWS customers in Germany can deny access to AWS services in Regions outside the Frankfurt Region. You can select which Regions you would like to restrict your end users from deploying resources to during the Control Tower setup process, or on the Landing zone settings page for already established environments. Region deny is available when you update your AWS Control Tower landing zone version. To learn more about Region deny, including which AWS services are exempt, see the Guardrail Reference documentation.
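
Control Tower manages the Region deny guardrail for you, but its effect resembles a service control policy (SCP) like the illustrative sketch below, which denies actions outside Europe (Frankfurt); the real guardrail exempts certain global services, as described in the Guardrail Reference.

```python
import json

# Illustrative only: an SCP-style policy that denies requests outside
# eu-central-1. This is a sketch of the pattern, not the exact guardrail
# content that Control Tower deploys.
region_deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1"]}
            },
        }
    ],
}

print(json.dumps(region_deny_policy, indent=2))
```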

AWS Control Tower offers the easiest way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. Customers can create new accounts using AWS Control Tower's account factory and enable governance features such as guardrails, centralized logging, and monitoring in supported AWS Regions. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.

    » Contact Lens for Amazon Connect announces new machine-learning powered call summarization

    Posted On: Nov 30, 2021

    Today, Contact Lens for Amazon Connect announced a new machine learning (ML) capability called call summarization that helps businesses improve the productivity of contact center agents and managers, so they can focus on providing excellent customer experiences.

    Contact center agents typically spend several minutes after each call summarizing notes, and managers spend a significant amount of time listening to call recordings or reading transcripts when they are investigating customer issues. With call summarization, Contact Lens identifies key parts of the customer conversation, assigns a label (e.g. issue, outcome, or action item), and displays a summary that can be expanded to view the full transcript of the call. Call summarization lets agents consistently and accurately capture key parts of the customer interaction, and revisit the summary when following up with a customer to resolve an issue. Managers can view the call summary alongside the call recording and contact details in Amazon Connect to quickly understand the context of an interaction, so they don’t have to spend time reading the whole transcript.

Call summarization is available in all the Contact Lens supported Regions, which include the US West (Oregon), US East (Northern Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Asia Pacific (Sydney) AWS Regions. With Contact Lens, you only pay for what you use based on the number of minutes used. There are no required up-front payments, long-term commitments, or minimum monthly fees. Call summarization carries no additional charge and works straight out of the box, without the need for any technical expertise. Please visit our blog or documentation to learn more about using Contact Lens' call summarization capability.

    » Announcing preview of AWS Private 5G

    Posted On: Nov 30, 2021

    Today, we are announcing the preview of AWS Private 5G, a new managed service that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months. With just a few clicks in the AWS console, customers specify where they want to build a mobile network and the network capacity needed for their devices. AWS then delivers and maintains the small cell radio units, servers, 5G core and radio access network (RAN) software, and subscriber identity modules (SIM cards) required to set up a private 5G network and connect devices. AWS Private 5G automates the setup and deployment of the network and scales capacity on demand to support additional devices and increased network traffic. There are no upfront fees or per-device costs with AWS Private 5G, and customers pay only for the network capacity and throughput they request.

Many enterprise networks are constrained by increasing growth in users, devices, and application demands. Increased video content, new applications that require ultra-low latency connectivity to end-user devices, and thousands of smart IoT devices demand extended coverage, more capacity, better reliability, and robust security and access control. Customers want to build their own private 5G networks to address these limitations, but private mobile network deployments require customers to invest considerable time, money, and effort to design their network for anticipated peak capacity, and to procure and integrate software and hardware components from multiple vendors. Even if customers are able to get the network running, current private mobile network pricing models charge for each connected device and make it cost prohibitive for use cases that involve thousands of connected devices. AWS Private 5G simplifies procurement and deployment, allowing customers to deploy their own 4G/LTE or 5G network within days instead of months, rapidly scale the number of connected devices up and down, and benefit from a familiar on-demand cloud pricing model.

    AWS Private 5G is available in preview in the United States. To request access, visit the sign-up page here.

    » Introducing Amazon FSx for OpenZFS

    Posted On: Nov 30, 2021

Amazon FSx for OpenZFS enables you to launch, run, and scale fully managed file systems on AWS that replace the ZFS or other Linux-based file servers you run on premises, while helping to provide better agility and lower costs. FSx for OpenZFS is the newest member of the Amazon FSx family of services, which provides fully featured and highly performant file storage powered by your choice of widely used file systems including NetApp ONTAP, Windows File Server, and Lustre. FSx for OpenZFS file systems are accessible from Linux, Windows, and macOS compute instances and containers via the industry-standard NFS protocol (v3, v4.0, v4.1, v4.2).

    FSx for OpenZFS is built on the open-source OpenZFS file system, which is widely used on premises to store and manage exabytes of application data for workloads that include machine learning, electronic chip design automation, application build environments, media processing, and financial analytics, where scale, performance, and cost efficiency are of utmost importance. Powered by AWS Graviton processors and the latest AWS disk and networking technologies, Amazon FSx for OpenZFS delivers up to 1 million IOPS with latencies of hundreds of microseconds. With complete support for OpenZFS features like instant point-in-time snapshots and data cloning, FSx for OpenZFS makes it easy for you to replace your on-premises file servers with AWS storage that provides familiar file system capabilities and eliminates the need to perform lengthy qualifications and change or re-architect existing applications or tools.
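
As a minimal sketch of creating a file system with the AWS SDK for Python (boto3), assuming a single-AZ deployment; the subnet ID is a placeholder, and the capacity and throughput values are illustrative.

```python
import boto3

fsx = boto3.client("fsx")

# Create a single-AZ FSx for OpenZFS file system (values are illustrative).
response = fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=64,                       # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder subnet
    OpenZFSConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 64,             # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```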

Amazon FSx for OpenZFS is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Asia Pacific (Tokyo), Europe (Frankfurt), and Canada (Central).

    To learn more about Amazon FSx for OpenZFS, visit the product detail page, overview video, getting started tutorial in the Amazon FSx user guide, and the AWS News blog post.

    » Announcing the new S3 Intelligent-Tiering Archive Instant Access tier - Automatically save up to 68% on storage costs

    Posted On: Nov 30, 2021

The Amazon S3 Intelligent-Tiering storage class now automatically includes a new Archive Instant Access tier with cost savings of up to 68% for rarely accessed data that needs millisecond retrieval and high throughput performance. S3 Intelligent-Tiering is the first cloud storage that automatically reduces your storage costs at a granular object level by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead. S3 Intelligent-Tiering delivers millisecond latency and high throughput performance for frequently, infrequently, and now rarely accessed data in the Frequent, Infrequent, and new Archive Instant Access tiers. Now, you can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content.

The Amazon S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change. S3 Intelligent-Tiering automatically stores objects in three access tiers: a Frequent Access tier, an Infrequent Access tier with 40% lower cost than the Frequent Access tier, and an Archive Instant Access tier with 68% lower cost than the Infrequent Access tier. For a small monthly monitoring and automation charge per object, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the Infrequent Access tier and now, after 90 days of no access, to the new Archive Instant Access tier. For data that does not require immediate retrieval, you can set up S3 Intelligent-Tiering to monitor and automatically move objects to the optional asynchronous Archive Access tier after 90 days with cost savings of 71%, and after 180 days of no access to the Deep Archive Access tier to realize up to 95% in storage cost savings.

    If you are an S3 Intelligent-Tiering customer, any existing objects that have not been accessed for 90 consecutive days will automatically move to the new Archive Instant Access tier, delivering immediate 68% cost savings for those objects, without any impact on performance. 

There are no retrieval charges in S3 Intelligent-Tiering. If an object in any of the access tiers is accessed, it is automatically moved back to the Frequent Access tier. No additional tiering charges apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class. If you would like to standardize on S3 Intelligent-Tiering as the default storage class for newly created data, you can modify your applications by specifying INTELLIGENT_TIERING as the storage class in your S3 PUT API requests. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and automatically delivers the same low latency and high throughput performance of S3 Standard. You can use AWS Cost Explorer to measure the additional savings from the Archive Instant Access tier.
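
As a brief sketch with the AWS SDK for Python (boto3), the example below stores a new object directly in S3 Intelligent-Tiering and, optionally, opts in to the asynchronous archive tiers; bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Store a new object directly in the S3 Intelligent-Tiering storage class.
s3.put_object(
    Bucket="amzn-example-bucket",   # placeholder
    Key="logs/app.log",
    Body=b"example payload",
    StorageClass="INTELLIGENT_TIERING",
)

# Optionally enable the asynchronous Archive Access and Deep Archive
# Access tiers for objects not accessed for 90 and 180 days, respectively.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="amzn-example-bucket",
    Id="archive-config",
    IntelligentTieringConfiguration={
        "Id": "archive-config",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```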

[Video: Introduction to the Amazon S3 Intelligent-Tiering Storage Class]

    The S3 Intelligent-Tiering Archive Instant Access tier is available today in all AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD.

    To learn more about S3 Intelligent-Tiering and the new Archive Instant Access tier, visit the AWS News blog post, watch the video, read the user guide, and visit the S3 Intelligent-Tiering storage class web page. To get started, visit the S3 console.

    » Announcing AWS IoT TwinMaker (Preview), a service that makes it easier to build digital twins

    Posted On: Nov 30, 2021

    Today, we are announcing AWS IoT TwinMaker, a new service that makes it faster and easier for developers to create and use digital twins of real-world systems to monitor and optimize operations. Digital twins are virtual representations of physical systems such as buildings, factories, production lines, and equipment that are regularly updated with real-world data to mimic the structure, state, and behavior of the systems they represent. Although digital twin use cases are many and diverse, most customers want to get started by easily using their existing data to get a deeper understanding of their operations.

    With AWS IoT TwinMaker, you can quickly get started with creating digital twins of equipment, processes, and facilities by connecting data from different data sources like equipment sensors, video feeds, and business applications, without having to move the data into a single repository. You can use built-in data connectors for the following AWS services: AWS IoT SiteWise for equipment and time-series sensor data; Amazon Kinesis Video Streams for video data; and Amazon Simple Storage Service (S3) for storage of visual resources (for example, CAD files) and data from business applications. AWS IoT TwinMaker also provides a framework for you to create your own data connectors to use with other data sources (such as Snowflake and Siemens MindSphere). AWS IoT TwinMaker forms a digital twin graph that combines and understands the relationships between virtual representations of your physical systems and connected data sources, so you can accurately model your real-world environment.

Once the digital twin graph is built, customers want to visualize the data in the context of the physical environment. Using AWS IoT TwinMaker, you can import existing 3D models (such as CAD files and point cloud scans) to compose and arrange 3D scenes of a physical space and its contents (e.g. a factory and its equipment) using simple 3D tools. To create a spatially aware visualization of your operations, you can then add interactive video and sensor data overlays from the connected data sources, insights from connected machine learning (ML) and simulation services, and equipment maintenance records and manuals.

To help developers create a web-based application for end users, AWS IoT TwinMaker comes with a plugin for Amazon Managed Grafana. End users, such as plant operators and maintenance engineers, use Grafana applications to observe and interact with the digital twin to help them optimize factory operations, increase production output, and improve equipment performance. Amazon Managed Grafana is a fully managed service for the open source dashboard and visualization platform from Grafana Labs.

    AWS IoT TwinMaker is available today in preview in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), with availability in additional AWS Regions to come.

To learn more and get started, visit the AWS IoT TwinMaker product page. To find an AWS Partner that can help you harness AWS IoT TwinMaker capabilities for your business, visit the partner page.

    » AWS IoT Device Management Fleet Indexing now supports two additional data sources (Preview)

    Posted On: Nov 30, 2021

AWS IoT Device Management Fleet Indexing now provides integration with two additional data sources: AWS IoT Core named shadows and AWS IoT Device Defender detect violations. With this release, the number of supported data sources for Fleet Indexing increases from three (AWS IoT Core registry, shadows, and connectivity lifecycle events) to five. These two additional data sources will help IoT customers who store IoT fleet data across different services and systems and regularly access the data for fleet monitoring, health checks, over-the-air (OTA) updates, and troubleshooting.

    Now, customers have more data source flexibility when monitoring and managing their devices from Fleet Indexing or from AWS IoT Device Management Fleet Hub. Customers can now set fleet metric alarms, perform queries, or target devices in their fleet with additional device state and behavior anomaly data. With the additional data sources for Fleet Indexing, customers can spend more time on high-value fleet monitoring, analysis, and troubleshooting and less time on building and maintaining DIY fleet management solutions.
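
A sketch of turning on the additional data sources with the AWS SDK for Python (boto3) follows; it assumes the documented indexing settings, and the named shadow name is a placeholder.

```python
import boto3

iot = boto3.client("iot")

# Enable indexing of named shadows and Device Defender violations
# alongside the registry, classic shadow, and connectivity data sources.
iot.update_indexing_configuration(
    thingIndexingConfiguration={
        "thingIndexingMode": "REGISTRY_AND_SHADOW",
        "thingConnectivityIndexingMode": "STATUS",
        "namedShadowIndexingMode": "ON",
        "deviceDefenderIndexingMode": "VIOLATIONS",
        # Named shadows to index are selected explicitly (placeholder name).
        "filter": {"namedShadowNames": ["engine-status"]},
    }
)
```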

The two additional data sources are available today in preview in all Regions where AWS IoT Device Management is available, except that AWS IoT Device Defender is not available in the South America (São Paulo) Region. For more information, see the Fleet Indexing and Fleet Hub developer guide.

    » Announcing the new Amazon S3 Glacier Instant Retrieval storage class - the lowest cost archive storage with milliseconds retrieval

    Posted On: Nov 30, 2021

Amazon S3 Glacier Instant Retrieval is a new archive storage class that delivers the lowest cost storage for long-lived data that is rarely accessed and requires milliseconds retrieval. With S3 Glacier Instant Retrieval, you can save up to 68% on storage costs compared to using the S3 Standard-Infrequent Access storage class, when your data is accessed once per quarter. S3 Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage classes. In addition, the existing S3 Glacier storage class is renamed S3 Glacier Flexible Retrieval, and now includes free bulk retrievals and a 10% storage price reduction, making it optimized for backup and disaster recovery use cases.

    The Amazon S3 Glacier storage classes are purpose-built for data archiving, and are designed to provide you with the highest performance, the most retrieval flexibility, and the lowest cost archive storage in the cloud. You can now choose from three archive storage classes optimized for different access patterns and storage duration. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval from 12-48 hours.

If you have unknown or unpredictable access patterns, such as data lakes, analytics, or user-generated content, the S3 Intelligent-Tiering storage class now automatically includes a new Archive Instant Access tier at the same price and milliseconds retrieval as the new S3 Glacier Instant Retrieval storage class. Beginning today, customers of S3 Intelligent-Tiering automatically save up to 68% for data not accessed in the last 90 days. S3 Intelligent-Tiering is designed to optimize storage costs by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead.

[Video: Introduction to the Amazon S3 Glacier Instant Retrieval Storage Class]

You can get started with S3 Glacier Instant Retrieval with a few clicks in the S3 console. You can upload data directly into S3 Glacier Instant Retrieval through the Amazon S3 API or CLI, or use S3 Lifecycle to transition data from the S3 Standard and S3 Standard-IA storage classes into S3 Glacier Instant Retrieval. Like other Amazon S3 storage classes, S3 Glacier Instant Retrieval supports all S3 features, including S3 Storage Lens to view storage usage and activity metrics, and S3 Replication to replicate data to any AWS Region. S3 Glacier Instant Retrieval is designed for 99.999999999% (11 9s) of data durability and 99.9% availability by redundantly storing data across multiple physically separated AWS Availability Zones.
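
As a short sketch with the AWS SDK for Python (boto3), you can write an object straight to the new storage class or add a lifecycle transition; bucket, key, and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into S3 Glacier Instant Retrieval.
s3.put_object(
    Bucket="amzn-example-archive",    # placeholder
    Key="scans/image-0001.dcm",
    Body=b"example payload",
    StorageClass="GLACIER_IR",
)

# Or transition existing objects after 90 days with an S3 Lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-example-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-glacier-ir",
                "Status": "Enabled",
                "Filter": {"Prefix": "scans/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER_IR"}],
            }
        ]
    },
)
```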

The new S3 Glacier Instant Retrieval storage class, and the Archive Instant Access tier in S3 Intelligent-Tiering, are available today in all AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. The S3 Glacier Flexible Retrieval (formerly S3 Glacier) storage class price reduction and free bulk retrievals, and the S3 storage class price reductions, are effective December 1, 2021; read the blog post for more information.

To learn more about the S3 Glacier Instant Retrieval storage class, visit the AWS News blog post and the storage class page, watch the video, and read the user guide. To learn more about S3 Intelligent-Tiering and the new Archive Instant Access tier, visit the What's New post. To get started, visit the S3 console.

    » Introducing Amazon EMR Serverless in preview

    Posted On: Nov 30, 2021

    We are happy to announce the preview of Amazon EMR Serverless, a new serverless option in Amazon EMR that makes it easy and cost-effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. Amazon EMR is a cloud big data platform used by customers to run large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto. With EMR Serverless, customers can run applications built using these frameworks with a few clicks, without having to configure, optimize, or secure clusters. EMR Serverless automatically provisions and scales the compute and memory resources required by the application, and customers only pay for the resources they use.

With EMR Serverless, you simply specify the open-source framework and version that you want to use for your application, and submit jobs using APIs, EMR Studio, or JDBC/ODBC clients. EMR Serverless automatically determines and provisions the compute and memory resources required to process requests, and scales the resources up and down at different stages of processing based on changing requirements. For example, a Spark job may need two executors for the first 5 minutes, ten executors for the next 10 minutes, and five executors for the last 20 minutes to process your data. EMR Serverless automatically provisions and adjusts resources as required, so you do not have to worry when data volumes change over time. And, since you only pay for the resources that are used, EMR Serverless is cost-effective for running petabyte-scale analytics. Customers can check the status of running jobs, review job history, and use familiar open-source tools to debug jobs using EMR Studio.
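
A minimal sketch of the flow with the AWS SDK for Python (boto3) follows; the release label, role ARN, and script location are assumptions, not values from this announcement.

```python
import boto3

emr = boto3.client("emr-serverless")

# Create a Spark application; the release label is an assumed example.
app = emr.create_application(
    name="nightly-etl",
    releaseLabel="emr-6.5.0",
    type="SPARK",
)

# Submit a job; the role ARN and entry point are placeholders.
job = emr.start_job_run(
    applicationId=app["applicationId"],
    executionRoleArn="arn:aws:iam::123456789012:role/EMRServerlessJobRole",
    jobDriver={
        "sparkSubmit": {"entryPoint": "s3://amzn-example-bucket/scripts/etl.py"}
    },
)
print(job["jobRunId"])
```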

Amazon EMR Serverless is available in preview in the US East (N. Virginia) Region. Click here to sign up for the preview, read the blog, and refer to the documentation for more details.

    » Amazon S3 announces a price reduction up to 31% in three storage classes

    Posted On: Nov 30, 2021

We are excited to announce that Amazon S3 has reduced storage prices by up to 31% in three S3 storage classes. Specifically, we are reducing the storage price for S3 Standard-Infrequent Access and S3 One Zone-Infrequent Access by up to 31% in 9 AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), US West (Northern California), and South America (Sao Paulo).

    In addition, S3 Glacier Flexible Retrieval (formerly S3 Glacier) is now optimized for use cases such as backup and disaster recovery, by offering free bulk retrievals, and a 10% storage price reduction in all AWS Regions. Together, these changes will help to further reduce your cost of S3 storage while benefiting from the simplicity, durability, and virtually unlimited scalability of Amazon S3.

    All of the S3 storage price reductions will be effective December 1, 2021 and will be automatically reflected in your AWS bill. For more information on the Region specific price reductions in each S3 storage class, see the blog post, and visit the S3 pricing page.

    » Amazon Connect releases unified agent application to improve agent experience and customer interactions

    Posted On: Nov 30, 2021

Amazon Connect now provides an agent application for managing contacts and resolving customer issues. In the contact center, agents need a way to easily handle multiple contacts (voice, chat, tasks) while viewing the right customer information and having knowledge articles surfaced in the context of the customer issue they are trying to solve. After launching the agent application in their browser, agents are immediately able to sign in and manage customer authentication, calls, and chats alongside viewing key customer insights and knowledge articles. For example, when an agent receives a call or chat, Amazon Connect Customer Profiles shares customer information, such as name, phone number, and email address. While the agent is talking to the customer, Amazon Connect Voice ID analyzes the caller's unique voice characteristics using machine learning to verify the caller's identity in real time, displaying a confidence score and status. Then, throughout the contact, Amazon Connect Wisdom detects customer issues and proactively provides knowledge article recommendations in real time. With the Amazon Connect agent application, you can help give your agents the right information to solve customer issues, deliver a personal experience, and improve customer satisfaction.

The Amazon Connect agent application is now available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Europe (London) AWS Regions. To learn more about the Amazon Connect agent application, visit our Amazon Connect Agent training guide.

    » AWS Snow Family launches offline tape data migration capability

    Posted On: Nov 30, 2021

Today, AWS Snow Family launches a secure, offline tape data migration capability for AWS Snowball Edge, enabling you to migrate petabytes of data stored on physical tapes to AWS without changing your existing tape-based backup workflows. Using this capability, you can migrate tape data to AWS from environments where you have network connectivity limitations, bandwidth constraints, or high network connection costs. Moving tape data to AWS helps you eliminate physical tape infrastructure expenses and gain online access to your tape data.

The Snow Family tape data migration capability uses Tape Gateway's Virtual Tape Library (VTL) capabilities on Snowball Edge Storage Optimized to deliver a simple, integrated experience for tape data migration. Everything from ordering the Snow device, setting it up, and using it at your site, to ingesting, storing, and accessing tape data in AWS becomes easier. AWS Partners can use the Snowball Edge in their facilities, or at their customers' sites, to copy data from physical tapes to a VTL without needing to restore tape data to its original form. Once your tape data is stored in AWS, you can restrict data access and enforce retention policies as you do with offsite storage of physical tapes. You can access your tape data stored in AWS through a Tape Gateway running in AWS or in your data center over the network.

You can complete your tape data migration using Snowball Edge in three simple steps. First, order Snowball Edge from the AWS Snow Family management console. After you receive and set up the Snowball Edge device, you unlock it and activate Tape Gateway. You can use Tape Gateway on Snowball Edge in place of a physical tape library without any changes to your current tape-based workflows. Second, you create virtual tapes on Tape Gateway and copy data from physical tapes to virtual tapes on Snowball Edge using your existing backup application. Third, after completing the data copy, you use the integrated logistics built into Snowball Edge to ship the device to the correct AWS location. AWS imports your data stored on Snowball Edge to Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive based on the storage destination you selected when creating virtual tapes. You can view and manage your virtual tapes stored in AWS from the AWS Storage Gateway management console.

    The Snowball Edge tape data migration capability is available in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), and Asia Pacific (Sydney) Regions. To learn more, visit the product page, documentation, and AWS News launch blog. To get started, visit the AWS Snow Family console.

    » New connectivity software, AWS IoT ExpressLink, accelerates IoT development (Preview)

    Posted On: Nov 30, 2021

AWS IoT ExpressLink (Preview) is connectivity software that powers a range of hardware modules developed and offered by AWS Partners, such as Espressif, Infineon, and u-blox. These connectivity modules include AWS-validated software, making it faster and easier for you to securely connect almost any product (including medical devices, industrial sensors, and consumer products) to the cloud, in a fraction of the usual time and cost.

Developers of all skill levels can now quickly and easily transform their products into IoT devices without having to merge large amounts of code or have a deep understanding of the underlying implementation. The connectivity modules come preprovisioned with security credentials, allowing you to off-load the complex networking and cryptography tasks to the module and develop secure IoT products in weeks rather than months. Modules that use AWS IoT ExpressLink are preprogrammed to seamlessly integrate with over 200 AWS services, including AWS IoT Core.

The price of a module with AWS IoT ExpressLink is set by the AWS Partners who manufacture the modules. You pay only for the AWS services you use after connecting your application to the cloud. Get started by evaluating which AWS IoT ExpressLink module is right for you and purchasing a developer kit from participating partners on the AWS Partner Device Catalog page.

    » Announcing preview of AWS Backup for Amazon S3

    Posted On: Nov 30, 2021

Today, we are announcing the public preview of AWS Backup for Amazon S3. You can now create a single policy in AWS Backup to automate the protection of application data stored in S3, alone or alongside 11 other AWS services for storage, compute, and database. Using AWS Backup's seamless integration with AWS Organizations, you can create independent, immutable, and encrypted backups and centrally manage backup and restore of S3 buckets and objects across your AWS accounts.

    You can get started with AWS Backup for Amazon S3 (Preview) by creating a backup policy in AWS Backup and assigning S3 buckets to it using tags or resource IDs. AWS Backup allows you to create periodic snapshots and continuous backups of your S3 buckets, and provides you the ability to restore your S3 buckets and objects to your specified point-in-time with a single click in the AWS Backup management console. Additionally, you can use AWS Backup to maintain and demonstrate compliance of your organizational data protection policies to auditors.
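
A rough sketch with the AWS SDK for Python (boto3) is shown below; the vault name, role ARN, and bucket are placeholders, and the exact parameters accepted during the preview may differ.

```python
import boto3

backup = boto3.client("backup")

# Create a daily backup plan (schedule and vault name are illustrative).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-s3-protection",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
            }
        ],
    }
)

# Assign an S3 bucket to the plan by its ARN (placeholder bucket name).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "s3-buckets",
        "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:s3:::amzn-example-bucket"],
    },
)
```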

    AWS Backup for Amazon S3 (Preview) is available in US West (Oregon) Region. For more information on this preview, visit the AWS Backup product page, documentation, and AWS News Preview launch blog. To enroll in the Preview, send your account ID to s3-backup-preview@amazon.com.

    » Introducing AWS Microservice Extractor for .NET

    Posted On: Nov 30, 2021

AWS Microservice Extractor for .NET simplifies the process of re-architecting applications into smaller code projects. Modernize and transform your .NET applications with an assistive tool that analyzes source code and runtime metrics to create a visual representation of your application and its dependencies. The tool delivers a holistic visualization of an application's source code, helps with code refactoring, and assists in extracting the codebase into separate code projects that teams can develop, build, and operate independently to improve agility, uptime, and scalability.

    Microservice Extractor for .NET assists your application modernization efforts with:

  • Faster identification of application components to refactor into a service architecture: Microservice Extractor combines data from code analysis and runtime profiling to produce a visualization showing each component's dependencies and metrics. This removes the need to manually correlate the outputs of various tools for code and runtime analysis.
  • Facilitated refactoring so that you can apply Domain-Driven Design principles: Microservice Extractor helps you adopt industry best practices such as Domain-Driven Design by enabling you to label the visualized graph to associate code with business processes. It highlights dependencies that need to be refactored before parts of the application can be extracted into separate code projects.
  • Assisted refactoring of monolith codebases into smaller code projects: After refactoring the monolithic application to prepare it for extraction, the tool can be used to partition source code into units that teams may develop, build, deploy, and operate as independent services with their choice of tools.

Learn more on our product page and in the documentation, and download the tool to start modernizing your .NET applications with AWS today.

    » Announcing AWS IoT FleetWise (Preview), a new service for transferring vehicle data to the cloud more efficiently

    Posted On: Nov 30, 2021

    Today, we are announcing AWS IoT FleetWise, a new service that makes it easier and more cost effective for automakers to collect, transform, and transfer vehicle data to the cloud in near-real time. Once the data is in the cloud, automakers can use it for tasks like remotely diagnosing issues in individual vehicles, analyzing vehicle fleet health to help prevent potential warranty claims and recalls, and collecting rich sensor data for training machine learning models that improve autonomous driving and advanced driver assistance systems (ADAS).

AWS IoT FleetWise can access the unique data format of a vehicle, then structure and standardize the data so automakers don't have to build custom data collection systems. Automakers start in the AWS Management Console by defining and modeling vehicle attributes (e.g. a two-door coupe) and the sensors associated with the car's model, trim, and options (e.g. engine temperature, front-impact warning, etc.) for individual vehicle types or multiple vehicle types across their entire fleet. After vehicle modeling, automakers install the AWS IoT FleetWise application on the vehicle gateway (an in-vehicle communications hub that monitors and collects data), so it can read, decode, and transmit information to and from AWS.

With intelligent filtering, AWS IoT FleetWise helps automakers reduce costs by limiting the amount of unnecessary data transferred to the cloud. Automakers build data collection campaigns to select only the exact data they need for their use cases by creating conditional rules to filter the data they want to collect and analyze (e.g. sensor data from hard-braking events associated with a vehicle make and model). Since data is transferred to the cloud in near-real time, automakers don't have to wait for customers to report vehicle problems before realizing there might be a fleet-wide issue. AWS IoT FleetWise can help automakers detect vehicle health issues early on so they can take corrective action quickly, like notifying the manufacturing group to help mitigate further spread.

AWS IoT FleetWise is available in preview in the US East (N. Virginia) and Europe (Frankfurt) Regions. A full list of AWS Regions is available at the AWS Region table. To get started with AWS IoT FleetWise, see our Developer Guide. To learn more, please visit the AWS IoT FleetWise website.

    » Amazon FSx for Lustre can now automatically update file system contents as data is deleted and moved in Amazon S3

    Posted On: Nov 30, 2021

    Amazon FSx for Lustre, a service that provides cost-effective, high-performance, scalable file systems for compute workloads, is making it even easier to process data residing in Amazon S3 by enabling your FSx for Lustre file system’s contents to be updated automatically as data is deleted or moved in S3.

    Integrated with Amazon S3, FSx for Lustre enables you to easily process S3 datasets with a high-performance file system. Before today, when you linked your file system to an S3 bucket, FSx for Lustre transparently presented S3 objects as files and updated its contents automatically on an ongoing basis as objects were added to or changed in your S3 bucket. With today’s launch, FSx for Lustre provides an additional option that is designed to automatically update the file system when objects in linked S3 buckets are deleted or moved. This new option allows you to keep the file system contents synchronized as you perform many kinds of updates (adds, changes, deletes, and moves) on the linked S3 buckets, enabling you to run parallel workflows that manipulate data on S3 and on the file system at the same time.
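
As a brief sketch with the AWS SDK for Python (boto3), opting an existing S3-linked file system into the new behavior might look like the following; the file system ID is a placeholder.

```python
import boto3

fsx = boto3.client("fsx")

# Import new and changed objects, and also propagate deletes and moves,
# from the linked S3 bucket into the file system.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",   # placeholder
    LustreConfiguration={"AutoImportPolicy": "NEW_CHANGED_DELETED"},
)
```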

The new functionality is available at no additional cost on all Amazon FSx for Lustre file systems created after July 23, 2020, in all Regions where FSx is available. You can configure your file system to automatically import S3 updates by using the AWS Management Console, the AWS CLI, or the AWS SDKs. Learn more about using automatic import with FSx for Lustre file systems in our AWS News blog and the Amazon FSx documentation.

    » Announcing preview of Amazon EC2 Trn1 instances

    Posted On: Nov 30, 2021

Today, we are announcing the preview of AWS Trainium-based Amazon EC2 Trn1 instances. AWS Trainium is the second machine learning chip built by AWS, optimized for high-performance deep learning training.

Trn1 instances will deliver the best price performance for training deep learning models in the cloud for use cases such as natural language processing, object detection, image recognition, recommendation engines, intelligent search, and more. They support up to 16 Trainium accelerators, up to 800 Gbps of EFA networking throughput (double the networking bandwidth available in GPU-based instances), and ultra-high-speed intra-instance connectivity for the fastest ML training in Amazon EC2.

    They are deployed in EC2 UltraClusters, which can be scaled to tens of thousands of Trainium accelerators with petabit scale, non-blocking networking. These Trn1 UltraClusters are 2.5x larger than previous generation EC2 UltraClusters and serve as a powerful supercomputer to rapidly train the most complex deep learning models. 

If you are interested in Trn1 instances, you can sign up for the preview by visiting our product detail page.

    » AWS Outposts is Now Available in Two Smaller Form Factors

    Posted On: Nov 30, 2021

    AWS Outposts 1U and 2U rack-mountable servers are now generally available.

Outposts servers provide local compute, storage, and networking services to edge locations that have limited space or smaller capacity requirements. Outposts servers are ideal for running workloads that need low latency or local data processing on-premises, while providing seamless access to the broad array of AWS services in the cloud. Outposts servers allow you to bring the same AWS services, infrastructure, and operational models to locations like branch offices, factories, healthcare clinics and hospitals, cell sites, or retail stores.

The 1U server is 19" wide and 24" deep, and is available with C6gd instances powered by Graviton2, with 64 vCPUs, 128 GiB memory, and 4 TB of local NVMe storage. The Outposts 2U server is 19" wide and 30" deep, and is available with C6id instances powered by 3rd generation Intel Xeon Scalable processors, with up to 128 vCPUs, 256 GiB memory, and 8 TB of local NVMe storage. You can run Amazon EC2 instances for virtual machines, use VPC for networking, and run Amazon ECS for containers, with support for Amazon EKS coming soon. Outposts server capacity can be managed by the same in-Region tools, like Amazon CloudWatch, AWS CloudFormation, and AWS CodeDeploy, that you use to deploy applications in the AWS Region or to Outposts racks.

    To learn more, read our blog.

    » Announcing new Amazon EC2 Im4gn and Is4gen instances powered by AWS Graviton2 processors

    Posted On: Nov 30, 2021

Today, we are announcing the next generation of storage optimized Amazon EC2 Im4gn and Is4gen instances. These instances are built on the AWS Nitro System and are powered by AWS Graviton2 processors. They feature up to 30 TB of storage with the new AWS Nitro SSDs, which are custom-designed by AWS to maximize the storage performance of I/O-intensive workloads such as SQL/NoSQL databases, search engines, distributed file systems, and data analytics, which continuously read and write from the SSDs in a sustained manner. AWS Nitro SSDs enable up to 60% lower latency and up to 75% reduced latency variability in Im4gn and Is4gen instances compared to the third generation of storage optimized instances. These instances maximize the number of transactions processed per second (TPS) for I/O-intensive workloads such as relational databases (e.g. MySQL, MariaDB, PostgreSQL) and NoSQL databases (KeyDB, ScyllaDB, Cassandra) that have medium to large data sets and can benefit from high compute performance and high network throughput. They are also an ideal fit for search engines and data analytics workloads that require very fast access to data sets on local storage.

    The Im4gn instances provide the best price performance for storage-intensive workloads in Amazon EC2. They offer up to 40% better price performance and up to 44% lower cost per TB of storage compared to I3 instances for running applications such as MySQL, NoSQL, and file systems, which require dense local SSD storage and higher compute performance. They also feature up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications that require high levels of inter-node communication.

The Is4gen instances provide the lowest cost per TB and the highest density per vCPU of SSD storage in Amazon EC2 for applications such as stream processing and monitoring, real-time databases, and log analytics that require high random I/O access to large amounts of local SSD data. These instances enable 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances.

    AWS Graviton2 processors are custom-built by AWS using 64-bit Arm Neoverse N1 cores to enable the best price performance for cloud workloads running in Amazon EC2. They deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors. Amazon EC2 instances powered by the AWS Graviton2, including Im4gn and Is4gen instances, are supported by many popular Linux operating systems including Amazon Linux 2, Red Hat Enterprise Linux, SUSE, and Ubuntu. Many popular applications and services for security, monitoring and management, containers, and CI/CD from AWS and Independent Software Vendors also support AWS Graviton2-based instances. The AWS Nitro System is a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

The Im4gn instances are now available in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and are purchasable On-Demand, as Reserved Instances, as Spot Instances, or as part of Savings Plans. They are available in 6 sizes providing up to 64 vCPUs, 30 TB SSD storage, 256 GB memory, 100 Gbps of networking bandwidth, and 38 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth. The Is4gen instances will be available soon in 6 sizes providing up to 32 vCPUs, 30 TB SSD storage, 192 GB memory, 50 Gbps of networking bandwidth, and 19 Gbps of Amazon EBS bandwidth.
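
As a minimal sketch, launching an Im4gn instance with the AWS SDK for Python (boto3) looks like the following; the AMI ID is a placeholder and must be a 64-bit Arm (arm64) image, since these are Graviton2 instances.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one im4gn.xlarge instance (placeholder arm64 AMI ID).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="im4gn.xlarge",
    MinCount=1,
    MaxCount=1,
)
```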

To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the Amazon EC2 Im4gn and Is4gen page or the AWS Graviton page.

    » Amazon S3 Glacier storage class is now Amazon S3 Glacier Flexible Retrieval; storage price reduced by 10% and bulk retrievals are now free

    Posted On: Nov 30, 2021

    The Amazon S3 Glacier storage class is now named Amazon S3 Glacier Flexible Retrieval, and now includes free bulk retrievals in addition to a 10% price reduction, making it optimized for use cases such as backup and disaster recovery. S3 Glacier Flexible Retrieval is now even more cost-effective, and the free bulk retrievals make it ideal for when you need to retrieve large data sets once or twice per year and do not want to worry about the retrieval cost.

S3 Glacier Flexible Retrieval delivers low-cost storage for archive data that is retrieved asynchronously. It offers the broadest range of retrieval options, balancing cost with access times ranging from minutes to hours, along with free bulk retrievals. It is an ideal solution for backup, disaster recovery, and offsite data storage needs, and for when some data occasionally needs to be retrieved in minutes. S3 Glacier Flexible Retrieval is designed for 99.999999999% (11 9s) of data durability by redundantly storing your objects on multiple devices across a minimum of three AWS Availability Zones in an AWS Region.

S3 Glacier Flexible Retrieval is one of three Amazon S3 Glacier storage classes that are optimized for archive data, with a variety of use cases ranging from user-generated content to media archives, analytics, genomics, and compliance archives. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, a new archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access, such as backups and disaster recovery, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with flexible data retrieval options from minutes to hours. You can choose from expedited retrievals in 1-5 minutes, standard retrievals in 3-5 hours, and free bulk retrievals in 5-12 hours. To save even more on long-lived archive storage such as regulatory and compliance data and digital media preservation, choose the S3 Glacier Deep Archive storage class, the lowest cost storage in the cloud (less than $1 per TB-month) with data retrieval from 12-48 hours.
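
A short sketch of requesting a free bulk retrieval with the AWS SDK for Python (boto3) follows; bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Initiate a bulk retrieval (5-12 hours, free for S3 Glacier Flexible
# Retrieval); the restored copy remains available for 7 days.
s3.restore_object(
    Bucket="amzn-example-archive",     # placeholder
    Key="backups/2020-01-01.tar",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)
```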

    The Amazon S3 Glacier Flexible Retrieval 10% storage price reduction and free bulk retrievals are effective December 1, 2021, in all AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD.

    To learn more, read the AWS News blog post, visit the storage classes page, the S3 pricing page, and watch the video. To get started, visit the S3 console.

In addition to this announcement, Amazon S3 has reduced storage prices by up to 31% for S3 Standard-Infrequent Access and S3 One Zone-Infrequent Access across 9 AWS Regions; read the blog post for more information.

    » Announcing Amazon Kinesis Data Streams On-Demand

    Posted On: Nov 30, 2021

Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at any scale. Kinesis Data Streams On-Demand is a new capacity mode for Kinesis Data Streams, capable of serving gigabytes of write and read throughput per minute without capacity planning. You can create a new on-demand data stream or convert an existing data stream to the on-demand mode with a single click, and never have to provision and manage servers, storage, or throughput. In the on-demand mode, you pay for throughput consumed rather than for provisioned resources, making it easy to balance costs and performance.

When you choose the on-demand capacity mode, Kinesis Data Streams instantly accommodates your workloads as they ramp up or down. If a workload's traffic level hits a new peak, Kinesis Data Streams adapts rapidly to accommodate the workload. On-demand mode provides the same high availability and durability that Kinesis Data Streams already offers. All features such as AWS PrivateLink, Amazon Virtual Private Cloud, Enhanced Fan-Out, and Extended Retention are available in the on-demand mode. When you switch your existing streams to the on-demand mode, you can continue to use your existing applications to write and read data without making any code changes or requiring downtime. In the on-demand mode, all existing Kinesis Data Streams integrations with other AWS services, such as Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and AWS Lambda, along with open-source technologies such as Apache Spark and Apache Flink, work without any changes.
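
As a quick sketch with the AWS SDK for Python (boto3), you can create an on-demand stream or switch an existing one; the stream name and ARN are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis")

# Create a new data stream in on-demand capacity mode.
kinesis.create_stream(
    StreamName="clickstream",          # placeholder
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Or switch an existing provisioned stream to on-demand mode.
kinesis.update_stream_mode(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)
```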

Kinesis Data Streams On-Demand is available in all AWS Commercial and China Regions. See the Amazon Kinesis Data Streams pricing page for on-demand pricing. See the Developer Guide to learn more.

    » Amazon WorkSpaces introduces Amazon WorkSpaces Web

    Posted On: Nov 30, 2021

Today we announced the general availability of Amazon WorkSpaces Web. WorkSpaces Web is a new capability from our End User Computing suite: a low-cost, fully managed WorkSpace built specifically to facilitate secure, web-based workloads. WorkSpaces Web makes it easy for customers to safely provide their employees with access to internal websites and SaaS web applications without the administrative burden of appliances or specialized client software. WorkSpaces Web provides simple policy tools tailored for user interactions, while offloading common tasks like capacity management, scaling, and maintaining browser images.

With WorkSpaces Web, corporate data never resides on remote devices. Websites are rendered in an isolated container in AWS and pixel-streamed to the user. The isolated browsing session provides an effective barrier against attacks packaged in web content and prevents potentially compromised end-user devices from ever connecting with internal servers. Every session launches a fresh, always up-to-date, non-persistent web browser. WorkSpaces Web supports enterprise controls that allow customers to set browser policies (e.g. enable/disable extensions, allow/deny list specific URLs, or any of Chrome's 300+ policies) and user settings (e.g. clipboard, file transfer, or local printer controls). When the session is complete, the browser instance is terminated, ensuring sensitive corporate web content is never outside enterprise control. WorkSpaces Web provisions browser sessions for users on demand, automatically. AWS manages the capacity and scaling, so customers do not need to specify instances, size their fleet, predict usage, or create and manage complex scaling logic. WorkSpaces Web automatically updates to the latest browser version, eliminating the need for customers to update and manage browser images.

    WorkSpaces Web offers low, predictable, pay-as-you-go pricing. Customers pay only a low monthly price for employees who actively use the service, eliminating the risk of over-buying. There are no up-front costs, licenses, or long-term commitments. For more information, see the Amazon WorkSpaces Web pricing page. WorkSpaces Web is now available in Northern Virginia, Oregon, and Dublin, and will be coming to additional Regions in 2022. To get started with WorkSpaces Web, log in to the Amazon WorkSpaces console, select a Region, and create a web portal.

    » Amazon S3 console now reports security warnings, errors, and suggestions from IAM Access Analyzer as you author your S3 policies

    Posted On: Nov 30, 2021

    The Amazon Simple Storage Service (S3) console now reports security warnings, errors, and suggestions from Identity and Access Management (IAM) Access Analyzer as you author your S3 policies. The console automatically runs more than 100 policy checks to validate your policies. These checks save you time, guide you to resolve errors, and help you apply security best practices. By resolving errors and security warnings reported by the S3 console, you can validate that your policies are functional before you attach them to your S3 buckets or access points.

    Before a policy is saved, policy checks flag syntax errors such as invalid actions or missing policy elements in the S3 console's policy editor. This allows you to easily correct errors as they are found. These checks also identify overly permissive combinations of policy elements. For example, the console reports security warnings for policies with elements that can grant overly permissive access.

    In addition to the S3 console, you can validate your S3 policies programmatically by using the Access Analyzer API. Programmatic validation helps you identify errors and security warnings in policies as a part of your CI/CD pipelines and allows you to run policy validation at scale.
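
    As a sketch of the programmatic path, the following Boto3 snippet runs Access Analyzer's ValidatePolicy check against a deliberately permissive bucket policy; the bucket name is a placeholder.

      import json
      import boto3

      analyzer = boto3.client("accessanalyzer")

      bucket_policy = {
          "Version": "2012-10-17",
          "Statement": [{
              "Effect": "Allow",
              "Principal": "*",
              "Action": "s3:GetObject",
              "Resource": "arn:aws:s3:::my-example-bucket/*",
          }],
      }

      # Run the same policy checks the S3 console uses; RESOURCE_POLICY covers
      # bucket and access point policies.
      response = analyzer.validate_policy(
          policyDocument=json.dumps(bucket_policy),
          policyType="RESOURCE_POLICY",
      )
      for finding in response["findings"]:
          print(finding["findingType"], "-", finding["findingDetails"])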

    Policy validation in the S3 console and through the Access Analyzer API is available at no additional cost in all AWS Regions; AWS GovCloud (US); the AWS China (Beijing) Region, operated by Sinnet; and the AWS China (Ningxia) Region, operated by NWCD. For more information, see Access Analyzer policy validation.

    » Announcing Amazon Redshift Serverless (Preview)

    Posted On: Nov 30, 2021

    Amazon Redshift now provides a serverless option (preview) to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver best-in-class performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

    With a few clicks in the AWS Management Console, you can get started querying data with Amazon Redshift Serverless. There is no need to choose node types or node counts, or to configure workload management, scaling, or other manual settings. You can take advantage of pre-loaded sample data sets along with sample queries to kick-start analytics immediately. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data via Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. Amazon Redshift Serverless also enables you to directly query data in open formats, such as Parquet, in Amazon S3 data lakes, as well as data in your operational databases, such as Amazon Aurora and Amazon RDS. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, making it easy for you to monitor and manage costs.
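
    One plausible way to query programmatically is through the Redshift Data API; this sketch assumes a serverless workgroup named default and the sample dev database (the WorkgroupName parameter reflects the current Data API, not anything specific to this preview).

      import time
      import boto3

      client = boto3.client("redshift-data")

      # Submit a query to a serverless workgroup; no cluster identifier is needed.
      resp = client.execute_statement(
          WorkgroupName="default",          # hypothetical workgroup name
          Database="dev",
          Sql="SELECT eventname, COUNT(*) FROM event GROUP BY eventname LIMIT 10;",
      )

      # The Data API is asynchronous: poll for completion, then fetch rows.
      while True:
          desc = client.describe_statement(Id=resp["Id"])
          if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
              break
          time.sleep(1)

      if desc["Status"] == "FINISHED" and desc.get("HasResultSet"):
          rows = client.get_statement_result(Id=resp["Id"])["Records"]
          print(rows)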

    Amazon Redshift Serverless preview is available in the following regions: US East (N. Virginia), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Tokyo). Refer to the feature page, blog and documentation to get started with the preview.

    » Introducing Amazon MSK Serverless in public preview

    Posted On: Nov 30, 2021

    Today we announced Amazon MSK Serverless in public preview, a new type of Amazon MSK cluster that makes it easier for developers to run Apache Kafka without having to manage its capacity. MSK Serverless automatically provisions and scales compute and storage resources and offers throughput-based pricing, so you can use Apache Kafka on demand and pay for the data you stream and retain.

    With this launch, getting started with Apache Kafka is even easier. With a few clicks in the AWS Management Console, you can set up secure and highly available clusters that automatically scale as your application I/O scales. MSK Serverless is fully compatible with Apache Kafka, so you can run existing applications without any code changes or create new applications using familiar tools and APIs. MSK Serverless supports native AWS integrations that provide capabilities such as private connectivity with AWS PrivateLink, secure client access with AWS Identity and Access Management (IAM), and schema evolution control with AWS Glue Schema Registry.
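
    As a hedged Boto3 sketch, the call below creates a serverless cluster with IAM client authentication; the cluster name, subnet IDs, and security group ID are placeholders.

      import boto3

      kafka = boto3.client("kafka")

      # Create a serverless MSK cluster; capacity is managed by the service.
      response = kafka.create_cluster_v2(
          ClusterName="my-serverless-cluster",
          Serverless={
              "VpcConfigs": [{
                  "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
                  "SecurityGroupIds": ["sg-0123456789abcdef0"],
              }],
              # MSK Serverless clusters use IAM for client authentication.
              "ClientAuthentication": {"Sasl": {"Iam": {"Enabled": True}}},
          },
      )
      print(response["ClusterArn"])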

    With pay-as-you-go pricing, there are no upfront commitments or minimum fees. You pay an hourly rate per cluster and an hourly rate for each partition that you create. Additionally, you pay per GB of data throughput and storage. To learn more about MSK Serverless, visit our webpage.

    Amazon MSK Serverless is available in public preview today in US East (Ohio).

    » Announcing new Amazon EC2 C7g instances powered by AWS Graviton3 processors

    Posted On: Nov 30, 2021

    Starting today, the new Amazon EC2 C7g instances powered by the latest generation custom-designed AWS Graviton3 processors are available in preview. Amazon EC2 C7g instances will provide the best price performance in Amazon EC2 for compute-intensive workloads such as high performance computing (HPC), gaming, video encoding, and CPU-based machine learning inference. These instances are the first in the cloud to feature cutting-edge DDR5 memory technology, which provides 50% more bandwidth compared to DDR4 memory. C7g instances provide 20% higher networking bandwidth compared to previous generation C6g instances based on AWS Graviton2 processors. They also support Elastic Fabric Adapter (EFA) for applications, such as high performance computing, that require high levels of inter-node communication.

    AWS Graviton3 is the latest in the Graviton family of processors, which are custom-designed by AWS to enable the best price performance for workloads in Amazon EC2. Graviton3 processors provide up to 25% better compute performance, up to 2x higher floating-point performance, and up to 2x faster cryptographic workload performance compared to AWS Graviton2 processors. Graviton3 processors deliver up to 3x better performance compared to Graviton2 processors for CPU-based machine learning workloads, with support for bfloat16 and fp16 instructions. Graviton3 processors also support pointer authentication for enhanced security, in addition to the always-on 256-bit memory encryption available in AWS Graviton2.

    Amazon EC2 C7g instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

    Amazon EC2 instances powered by the AWS Graviton family of processors, including C7g instances, are supported by many popular Linux operating systems, including Amazon Linux 2, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu. Many popular applications and services for security, monitoring and management, containers, and CI/CD from AWS and Independent Software Vendors also support AWS Graviton-based instances. The AWS Graviton Ready program provides customers with certified solutions from third-party vendors that can be used on Graviton-based instances.

    Learn more about AWS Graviton3 processors and Amazon EC2 C7g instances or request access to the C7g preview.

    » Amazon FSx for Lustre now supports automatically exporting file updates to Amazon S3

    Posted On: Nov 30, 2021

    Amazon FSx for Lustre, a service that provides cost-effective, high-performance, scalable file systems for compute workloads, is making it even easier to process data residing in Amazon S3 by enabling your S3 bucket’s contents to be updated automatically as data is updated in an FSx for Lustre file system.

    Integrated with Amazon S3, FSx for Lustre enables you to easily process S3 datasets with a high-performance file system. Before today, when linked to an S3 bucket, an FSx for Lustre file system transparently presented S3 objects as files and updated its contents automatically on an ongoing basis as objects were added to, changed in, or deleted from your S3 bucket. With today’s launch, FSx for Lustre can also automatically update the contents of the linked S3 bucket as files are added to, changed in, or deleted from the file system. With this feature, FSx for Lustre now provides fast file access for S3 datasets that is designed to keep data synchronized between the file system and S3 in both directions. You can now concurrently process your data using file-based and object-based workflows and share results in near real time between these workflows.
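
    A minimal Boto3 sketch of wiring up automatic export on a Persistent 2 file system might look like the following; the file system ID, path, and bucket are placeholders.

      import boto3

      fsx = boto3.client("fsx")

      # Link a Persistent 2 file system path to an S3 prefix and export file
      # creations, changes, and deletions back to the bucket automatically.
      fsx.create_data_repository_association(
          FileSystemId="fs-0123456789abcdef0",        # placeholder file system ID
          FileSystemPath="/ns1",
          DataRepositoryPath="s3://my-example-bucket/project-data",
          S3={
              "AutoImportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
              "AutoExportPolicy": {"Events": ["NEW", "CHANGED", "DELETED"]},
          },
      )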

    The automatic export feature is available at no additional cost on Amazon FSx for Lustre file systems with Persistent 2 deployment type in US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo). You can configure your file system to automatically export updates to S3 using the AWS Management Console, the AWS CLI, and AWS SDKs. Learn more about using automatic export with FSx for Lustre file systems in our AWS News blog, and the Amazon FSx documentation.

    » Announcing the next generation of Amazon FSx for Lustre file systems

    Posted On: Nov 30, 2021

    The next generation of Amazon FSx for Lustre file systems, built on AWS Graviton processors, provides three improvements to performance and price. First, the new file systems provide up to 5x higher throughput per terabyte (up to 1 GB/s per terabyte) compared to previous generation file systems. Second, with support for client instances with multiple network interfaces, you can now drive up to 400 Gbps of network bandwidth on Amazon EC2 instances such as P4d and DL1. Third, the next generation of FSx for Lustre file systems reduces your cost of throughput by up to 60% compared to previous generation file systems.

    Using the next generation of FSx for Lustre file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulations workloads while reducing your cost of storage. To help you further optimize your storage costs, the next generation of FSx for Lustre is designed to enable data compression so you can reduce storage consumption without adversely impacting file system performance. Similar to previous generation file systems, you can also link the next generation file systems to Amazon S3 buckets, allowing you to access and process data concurrently from both a high-performance file system and from the S3 API.
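
    For example, a Persistent 2 file system with one of the new throughput tiers and compression enabled could be created along these lines (Boto3; the subnet ID and sizing values are placeholders):

      import boto3

      fsx = boto3.client("fsx")

      # Create a next-generation (Persistent 2) file system.
      fsx.create_file_system(
          FileSystemType="LUSTRE",
          StorageCapacity=1200,                        # GiB
          SubnetIds=["subnet-0123456789abcdef0"],
          LustreConfiguration={
              "DeploymentType": "PERSISTENT_2",
              "PerUnitStorageThroughput": 250,         # MB/s per TiB
              "DataCompressionType": "LZ4",            # optional storage savings
          },
      )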

    The next generation of FSx for Lustre file systems is available in US East (N. Virginia & Ohio), US West (Oregon), Canada (Central), EU (Frankfurt, Ireland), and Asia Pacific (Tokyo). For more information about these new file system options, see the Amazon FSx for Lustre documentation.

    » Amazon FSx for Lustre now supports linking multiple Amazon S3 buckets to a file system

    Posted On: Nov 30, 2021

    Amazon FSx for Lustre, a service that provides cost-effective, high-performance, scalable file systems for compute workloads, is making it even easier to process data residing in Amazon S3 by enabling an FSx for Lustre file system to be linked to multiple S3 buckets.

    Integrated with Amazon S3, FSx for Lustre enables you to easily process S3 datasets with a high-performance file system. Before today, you could link a file system to a single S3 bucket or prefix, and FSx for Lustre transparently presented S3 objects as files. With today’s launch, you can link multiple S3 buckets or prefixes to a file system, and your S3 datasets appear as files and directories in a single unified file system namespace.
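
    A sketch of linking several buckets into one namespace, with hypothetical bucket names and paths (Boto3):

      import boto3

      fsx = boto3.client("fsx")

      # Map several buckets/prefixes into one namespace; each association gets
      # its own directory under the file system root.
      links = {
          "/genomics": "s3://example-genomics-bucket",
          "/imaging": "s3://example-imaging-bucket/raw",
      }
      for path, bucket in links.items():
          fsx.create_data_repository_association(
              FileSystemId="fs-0123456789abcdef0",    # placeholder file system ID
              FileSystemPath=path,
              DataRepositoryPath=bucket,
          )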

    Support for multiple S3 buckets is available at no additional cost on Amazon FSx for Lustre file systems with Persistent 2 deployment type in US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo). You can create a file system and link it to multiple S3 buckets by using the AWS Management Console, the AWS CLI, and AWS SDKs. Learn more about using multiple S3 datasets with FSx for Lustre file systems in our AWS News blog, and the Amazon FSx documentation.

    » AWS Lake Formation supports Governed Tables, storage optimization, and row-level security

    Posted On: Nov 30, 2021

    AWS Lake Formation is excited to announce the general availability of three new capabilities that simplify building, securing, and managing data lakes. First, Lake Formation Governed Tables are a new type of table on Amazon S3 that simplifies building resilient data pipelines with multi-table transaction support. As data is added or changed, Lake Formation automatically manages conflicts and errors to ensure that all users see a consistent view of the data. This eliminates the need for customers to create custom error-handling code or batch their updates. Second, Governed Tables monitor and automatically optimize how data is stored so query times are consistent and fast. Third, in addition to tables and columns, Lake Formation now supports row- and cell-level permissions, making it easier to restrict access to sensitive information by granting users access to only the portions of the data they are allowed to see. Governed Tables and row- and cell-level permissions are now supported through Amazon Athena, Amazon Redshift Spectrum, AWS Glue, and Amazon QuickSight.
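
    As an illustrative Boto3 sketch, the call below defines a row- and cell-level data filter; the catalog ID, database, table, and column names are placeholders.

      import boto3

      lf = boto3.client("lakeformation")

      # Define a cell-level filter: principals granted this filter see only US
      # rows and cannot see the credit_card column.
      lf.create_data_cells_filter(
          TableData={
              "TableCatalogId": "111122223333",
              "DatabaseName": "sales",
              "TableName": "orders",
              "Name": "us_rows_no_card",
              "RowFilter": {"FilterExpression": "country = 'US'"},
              "ColumnWildcard": {"ExcludedColumnNames": ["credit_card"]},
          },
      )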

    AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secure repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better decisions.

    AWS Lake Formation Governed Tables, storage optimization and row-level security are available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland) and Asia Pacific (Tokyo). To learn more, see the API documentation.

    » Amazon Athena now supports new Lake Formation fine-grained security and reliable table features

    Posted On: Nov 30, 2021

    Amazon Athena users can now use AWS Lake Formation to configure fine-grained access permissions and read from ACID-compliant tables. Amazon Athena makes it simple for users to analyze data in Amazon S3-based data lakes. However, ensuring that users only have access to data to which they're authorized, and that their queries remain reliable in the face of changes to the underlying data, can be a complex task.

    Using Lake Formation Data Filtering, administrators can now grant column-, row-, and cell-level permissions on their Amazon S3 data lake tables that are enforced when Athena users query this data. This means that users can be granted access to tables containing sensitive data without requiring coarse-grained masking that impedes their analyses. Furthermore, with Lake Formation Governed Tables, Athena users can query data while multiple users simultaneously add and delete the table’s Amazon S3 data objects.
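
    For example, a query against a Lake Formation-protected table is started the same way as any other Athena query; in this Boto3 sketch the database, table, and results bucket are placeholders, and the filters are enforced for the calling principal automatically.

      import boto3

      athena = boto3.client("athena")

      # Start a query; Lake Formation permissions apply without extra parameters.
      resp = athena.start_query_execution(
          QueryString="SELECT * FROM orders LIMIT 10",
          QueryExecutionContext={"Database": "sales"},
          ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
      )
      print(resp["QueryExecutionId"])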

    There are no additional charges for accessing data configured with Lake Formation security or Governed Tables from Athena but standard rates for data scanned, Lambda usage, and other services apply. For more information on Athena pricing, visit the pricing page.

    To get started with Lake Formation Governed Tables and fine-grained security, see the Lake Formation Developer Guide and the Data Protection page of the Athena User Guide.

    » AWS Backup adds support for VMware workloads

    Posted On: Nov 30, 2021

    AWS Backup now allows you to centrally protect VMware workloads, both on premises and in the cloud with VMware Cloud on AWS, helping you meet your business and regulatory compliance needs. You can now use a single policy in AWS Backup to centrally protect your hybrid VMware environments alongside the 12 AWS services (spanning compute, storage, and databases) already supported by AWS Backup. AWS Backup enables you to demonstrate the compliance status of your organizational data protection policies by monitoring backup, copy, and restore operations, and allows you to generate unified, auditor-ready reports to help satisfy your data governance and regulatory requirements.

    Using AWS Backup, you can centrally configure backup policies across your AWS applications comprised of native AWS services, on-premises VMware, and VMware Cloud on AWS, helping you to simplify data protection and automate lifecycle management. You can restore your VMware backups to your on-premises data centers and in VMware Cloud on AWS to meet your data recovery needs. AWS Backup Audit Manager provides built-in and customizable compliance controls to define your data protection policies, automatically detects violations of your defined policies, and prompts you to take corrective actions, enabling you to demonstrate compliance with regulatory requirements.

    AWS Backup for VMware is available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), AWS GovCloud (US-West), AWS GovCloud (US-East), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Osaka), Middle East (Bahrain), and Africa (Cape Town) Regions. For more information on AWS Backup for VMware availability and pricing, see the AWS Regional Services List and pricing page. To learn more about AWS Backup for VMware, visit the product page, documentation, and AWS News launch blog. To get started, visit the AWS Backup console.

    » AWS IoT SiteWise now supports hot and cold storage tiers for industrial data

    Posted On: Nov 29, 2021

    AWS IoT SiteWise is a managed service to collect, store, organize, and monitor data from industrial equipment at scale. AWS IoT SiteWise now supports two storage tiers for equipment data: a hot tier optimized for real-time applications, and a cold tier optimized for analytical applications. The hot tier stores frequently accessed data with lower write-to-read latency. You can store data in the hot tier for industrial applications that need fast access to the latest measurement values from your equipment, such as applications that visualize real-time metrics with an interactive dashboard, or applications that monitor operations and trigger alarms to identify equipment performance issues. The cold tier stores less-frequently accessed data that can tolerate higher read latency. Use data from the cold tier to create applications that need access to historical data, such as business intelligence (BI) dashboards, artificial intelligence (AI) and machine learning (ML) training, historical reports, and backups. 

    By using the AWS IoT SiteWise cold tier, customers can lower their storage cost for less-frequently accessed data. AWS IoT SiteWise uses an Amazon S3 bucket in the customer account as the destination for cold tier data. You can configure AWS IoT SiteWise storage tiers from the AWS IoT SiteWise console. All you need to do is provide the URL of an Amazon S3 bucket in your AWS account and define a hot data retention period, after which data is removed from the hot tier. Once cold tier storage is enabled, AWS IoT SiteWise exports data from measurements, metrics, transforms, and aggregates to your S3 bucket every 6 hours. In addition, AWS IoT SiteWise exports to your S3 bucket any changes to asset and asset model definitions within minutes, so you always have the most up-to-date virtual representation of your factory floor in your industrial data lake.
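
    A minimal Boto3 sketch of enabling the cold tier, assuming a pre-created bucket and an IAM role that grants AWS IoT SiteWise access to it (both ARNs are placeholders):

      import boto3

      sitewise = boto3.client("iotsitewise")

      # Turn on the cold tier with your own S3 bucket as the destination and
      # keep 30 days of data in the hot tier.
      sitewise.put_storage_configuration(
          storageType="MULTI_LAYER_STORAGE",
          multiLayerStorage={
              "customerManagedS3Storage": {
                  "s3ResourceArn": "arn:aws:s3:::my-sitewise-cold-tier/data/",
                  "roleArn": "arn:aws:iam::111122223333:role/SiteWiseS3Access",
              },
          },
          retentionPeriod={"numberOfDays": 30},
      )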

    For more information on how to configure the cold tier storage, visit the AWS IoT SiteWise developer guide. To learn more about AWS IoT SiteWise, please visit the AWS IoT SiteWise website.

    » Introducing Amazon EC2 M6a instances

    Posted On: Nov 29, 2021

    Amazon Web Services (AWS) announces the general availability of general purpose Amazon EC2 M6a instances. M6a instances are powered by 3rd generation AMD EPYC (code named Milan) processors with an all-core turbo frequency of 3.6 GHz, deliver up to 35% better price performance compared to M5a instances, and cost 10% less than comparable x86-based EC2 instances. Designed to provide a balance of compute, memory, storage, and network resources, M6a instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are SAP-Certified and are ideal for workloads such as web and application servers, back-end servers supporting enterprise applications (e.g. Microsoft Exchange Server and SharePoint Server, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), microservices, multiplayer gaming servers, and caching fleets, as well as for application development environments.

    To meet customer demands for increased scalability, M6a instances provide two more instance sizes than M5a (32xlarge and 48xlarge), with up to 192 vCPUs and 768 GiB of memory in the 48xlarge size, twice that of the largest M5a instance. M6a instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store, more than twice that of M5a instances. Customers can use Elastic Fabric Adapter on the 48xlarge size, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for M6a, see this article.

    These instances are generally available today in AWS Regions: US East (Northern Virginia), US West (Oregon), and Europe (Ireland). M6a instances are available in 10 sizes with 2, 4, 8, 16, 32, 48, 64, 96, 128, and 192 vCPUs. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M6a instances page.

    » AWS AI for data analytics (AIDA) partner solutions

    Posted On: Nov 29, 2021

    Today, we announce AI for data analytics (AIDA), a set of AWS Partner solutions that embed predictive analytics into mainstream analytics workspaces. AIDA partner solutions make it easy for business experts to use artificial intelligence (AI) and machine learning (ML) to derive better insights from data and take action. AIDA features solutions from the following AWS Partners: Amplitude, Anaplan, Causality Link, Domo, Exasol, Interworks, Pegasystems, Provectus, Qlik, Snowflake, Tableau, TIBCO, and Workato.

    AIDA partner solutions provide interfaces and integrations with AWS AI/ML services that bring predictive analytics into the normal workflow of business users who use data to run their business but have limited data science experience. For more information and the full list of AIDA partner solution descriptions, see the blog.

    » Introducing the AWS Graviton Ready Program

    Posted On: Nov 29, 2021

    We are excited to announce the new AWS Graviton Ready Program for AWS Partners with software products that support AWS Graviton-based Amazon Elastic Compute Cloud (Amazon EC2) instances.  As customers adopt AWS Graviton-based instances to realize the best price performance in Amazon EC2, they need the right software solutions to help integrate, deploy, monitor, and secure their Linux-based and containerized workloads. AWS Graviton Ready Partners offer Graviton-enabled software products, including operating systems and platform services, security, monitoring and observability, CI/CD, data and analytics, and cloud devices. 

    AWS Graviton Ready Partners validate, optimize, and support their software on AWS Graviton-based instances. AWS Graviton Ready software products are vetted by AWS Partner Solutions Architects to ensure customers have a consistent experience using the software on AWS Graviton-based instances as they do on other instances. This parity makes it seamless for customers to choose the EC2 instances that best suit their workloads, including Graviton-based instances, for optimal price performance.

    AWS Graviton Ready Partners make adopting AWS Graviton-based instances easy, providing customers with a breadth of software products that support AWS Graviton. Learn more about AWS Graviton Ready Partner Software Products.

    » Recover from accidental deletions of your snapshots using Recycle Bin

    Posted On: Nov 29, 2021

    Starting today, you can use Recycle Bin for EBS Snapshots to recover from accidental snapshot deletions to meet your business continuity needs. Previously, if you accidentally deleted a snapshot, you would have to roll back to a snapshot from an earlier point in time, increasing your recovery point objective. With Recycle Bin, you can specify a retention time period and recover a deleted snapshot before the expiration of the retention period. A recovered snapshot retains its attributes such as tags, permissions, and encryption status, which it had prior to deletion, and can be used immediately for creating volumes. Snapshots that are not recovered from the Recycle Bin are permanently deleted upon expiration of the retention time.

    You can enable Recycle Bin for all the snapshots or a subset of the snapshots in your account by creating one or more Retention Rules. You can use tags in Retention Rules to specify which subset of snapshots should move to the Recycle Bin upon deletion.
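
    As a sketch, the Boto3 call below creates a retention rule through the Recycle Bin API; the tag key and value are placeholders.

      import boto3

      rbin = boto3.client("rbin")

      # Retain deleted snapshots for 7 days; only snapshots carrying the given
      # tag are covered by this rule.
      rbin.create_rule(
          Description="Recover accidentally deleted production snapshots",
          ResourceType="EBS_SNAPSHOT",
          RetentionPeriod={
              "RetentionPeriodValue": 7,
              "RetentionPeriodUnit": "DAYS",
          },
          ResourceTags=[{"ResourceTagKey": "env", "ResourceTagValue": "prod"}],
      )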

    EBS Snapshots in Recycle Bin are billed at the same price as EBS Snapshots (see pricing pages). To learn more, see the technical documentation on Recycle Bin for EBS Snapshots. The feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS commercial regions with the exception of China.

    » Amazon Timestream now offers faster and more cost-effective time series data processing through scheduled queries, multi-measure records, and magnetic storage writes

    Posted On: Nov 29, 2021

    Amazon Timestream has added three new capabilities, namely, scheduled queries, multi-measure records, and magnetic storage writes, to make time series data processing faster, cost-effective, and therefore more accessible to many more customers. These features enable customers to write, store, and access their time series data more economically and efficiently, so they can continue to derive insights from their data and drive better data-driven business decisions.

    Starting today, customers can use Amazon Timestream’s Scheduled Queries for faster and more affordable time series data processing. With scheduled queries, customers simply define the queries for computing aggregates, rollups, and other real-time analytics on their data, along with the frequency at which each query must run. Amazon Timestream then periodically and automatically runs the scheduled queries and reliably writes the results into a configurable destination table within a few minutes. Customers can then point their dashboards and reports at the much smaller destination table instead of querying the considerably larger source tables, yielding performance and cost gains that can exceed an order of magnitude. Because destination tables contain far less data than source tables, customers can retain data in them for much longer at a fraction of the storage cost, and they can also reduce the retention period of their source tables to further optimize their spend.

    With this release, Amazon Timestream also supports multi-measure records, a new data modeling capability that enables faster data writes, efficient data storage, performant data access, and ease of use. Multi-measure records enable customers to store multiple time series measures in a single table row, instead of storing one measure per row. This optimized data layout reduces the volume of data stored in a table, which helps customers lower their data storage spend, improve query performance, and minimize the cost of analytical queries. Multi-measure records also make it easy for customers to migrate time series data and queries from existing relational databases to Amazon Timestream with minimal changes.
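
    A brief Boto3 sketch of writing a multi-measure record, with hypothetical database, table, dimension, and measure names:

      import time
      import boto3

      ts_write = boto3.client("timestream-write")

      # One row now carries several measures instead of one row per measure.
      ts_write.write_records(
          DatabaseName="factory",
          TableName="machine_metrics",
          Records=[{
              "Dimensions": [{"Name": "machine_id", "Value": "m-42"}],
              "MeasureName": "metrics",
              "MeasureValueType": "MULTI",
              "MeasureValues": [
                  {"Name": "temperature", "Value": "71.5", "Type": "DOUBLE"},
                  {"Name": "rpm", "Value": "1800", "Type": "BIGINT"},
              ],
              "Time": str(int(time.time() * 1000)),   # milliseconds since epoch
          }],
      )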

    From today, Amazon Timestream also allows customers to write their late arrival data into the magnetic store, so they can further optimize their data storage spend. Late arrival data is data with a timestamp that is in the past. Customers can now use Amazon Timestream’s existing write APIs to send late arrival data to the magnetic store by simply enabling a property on their tables. With magnetic storage writes, customers no longer have to maintain a memory store with a large data retention period for the purpose of processing late arrival data. Customers can now set their memory store data retention period to match the high throughput data ingestion and fast point-in-time query requirements of their applications. They can use the magnetic store for asynchronous processing of late arrival data, long-term data storage, and for fast analytical queries.
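
    Enabling magnetic store writes is a single table property; a minimal Boto3 sketch, reusing the hypothetical table above:

      import boto3

      ts_write = boto3.client("timestream-write")

      # Allow late-arriving data (timestamps older than the memory store
      # window) to be written directly to the magnetic store.
      ts_write.update_table(
          DatabaseName="factory",
          TableName="machine_metrics",
          MagneticStoreWriteProperties={"EnableMagneticStoreWrites": True},
      )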

    Scheduled Queries, multi-measure records, and magnetic storage writes are now available through Amazon Timestream’s APIs and the AWS Management Console for Amazon Timestream. Amazon Timestream is a fast, scalable, secure, and purpose-built time series database for application monitoring, edge, and IoT workloads that can scale to process trillions of time series events per day up to 1,000 times faster than relational databases, and at as low as 1/10th the cost. The service is also HIPAA eligible, ISO certified, PCI DSS compliant, and in scope for AWS’s SOC 1, SOC 2, and SOC 3 reports. Amazon Timestream is currently available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Europe (Frankfurt). To get started, visit our product page.

    » Customize your AWS Well-Architected Review using Custom Lenses

    Posted On: Nov 29, 2021

    The AWS Well-Architected Tool now offers the ability for customers to create their own custom lenses.

    Many customers who use the AWS Well-Architected Tool have internal best practices they follow in addition to the AWS best practices provided by the AWS Well-Architected Framework. Historically, customers have had to track these best practices in separate documents and tools, making it difficult to gather insights into their overall architectural health. With the addition of custom lenses, the AWS Well-Architected Tool will become a single place for customers to review and measure best practices while performing associated operational reviews for all technology across their organization.

    Within a custom lens, customers can create their own pillars, questions, best practices, helpful resources, and improvement plans. They also can specify rules to determine which options, when not followed, result in high or medium risk issues being flagged. Customers then provide their own guidance for resolving the risk. Custom lenses can be shared across multiple AWS accounts for more visibility.
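
    As a rough sketch, a custom lens is defined as JSON and imported through the AWS Well-Architected Tool API; the schema fields below follow the documented lens format but are trimmed for illustration, and all names and IDs are placeholders to be checked against the documentation.

      import json
      import boto3

      wa = boto3.client("wellarchitected")

      # A minimal one-pillar custom lens with a single question and risk rules.
      lens = {
          "schemaVersion": "2021-11-01",
          "name": "Internal Best Practices",
          "description": "Our organization's review questions",
          "pillars": [{
              "id": "ops",
              "name": "Internal Operations",
              "questions": [{
                  "id": "ops_q1",
                  "title": "Do you have a documented rollback plan?",
                  "choices": [
                      {"id": "ops_q1_c1", "title": "Yes, tested quarterly"},
                      {"id": "ops_q1_c2", "title": "No"},
                  ],
                  "riskRules": [
                      {"condition": "ops_q1_c1", "risk": "NO_RISK"},
                      {"condition": "default", "risk": "HIGH_RISK"},
                  ],
              }],
          }],
      }

      response = wa.import_lens(JSONString=json.dumps(lens))
      print(response["LensArn"], response["Status"])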

    To create your custom lens, sign in to the AWS Well-Architected Tool in the AWS Management Console and choose Custom lenses to get started. This feature is available to customers and AWS Partners at no additional charge and is offered in all Regions where the AWS Well-Architected Tool is available: US East (N. Virginia, Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), and Middle East (Bahrain).

    To learn more about custom lenses, visit the AWS WA Tool Documentation and Product Page. To learn more about how to engage an AWS Well-Architected Partner Program member, visit the Program Page.

    » Introducing AWS Migration Hub Refactor Spaces - Preview

    Posted On: Nov 29, 2021

    Ready to fast-track application refactoring? AWS Migration Hub Refactor Spaces is the new starting point for incremental application refactoring that makes it easy to manage the refactor process while operating in production. Using Refactor Spaces, customers focus on refactoring their applications rather than on creating and managing the underlying infrastructure that makes refactoring possible. This new Migration Hub feature reduces the business risk of evolving applications into microservices or extending existing applications with new features written in microservices. Refactor Spaces orchestrates AWS services across multiple accounts to create a refactor environment for incrementally evolving an application, helping customers realize value earlier.

    Migration Hub Refactor Spaces simplifies application refactoring by:

  • Reducing the time to set up a refactor environment.
  • Reducing the complexity of iteratively extracting capabilities as new microservices and re-routing traffic from old to new (the Strangler Fig pattern).
  • Simplifying management of existing apps and microservices as a single application with flexible routing control, isolation, and centralized management.
  • Helping dev teams achieve and accelerate tech and deployment independence by simplifying development, management, and operations while apps are changing.

    To learn more, visit the AWS Migration Hub and start refactoring your applications in AWS today.
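
    A hedged Boto3 sketch of the first step, creating the refactor environment that owns the shared networking fabric; the name, description, and fabric type shown are illustrative.

      import boto3

      refactor = boto3.client("migration-hub-refactor-spaces")

      # Create the refactor environment; applications, services, and routes
      # are then created inside it.
      env = refactor.create_environment(
          Name="unicorn-refactor-env",
          NetworkFabricType="TRANSIT_GATEWAY",
          Description="Incremental strangler-fig refactor of the monolith",
      )
      print(env["EnvironmentId"])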

    » AWS customers can now find, subscribe to, and deploy third-party applications that run in any Kubernetes environment from AWS Marketplace

    Posted On: Nov 29, 2021

    AWS customers can now find, subscribe to, and deploy third-party Kubernetes applications from AWS Marketplace on any Kubernetes cluster, in any environment. This extends the existing AWS Marketplace for Containers capabilities. Previously, customers could find and buy containerized third-party applications from AWS Marketplace and deploy them in Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). Customers can now deploy third-party Kubernetes applications to on-premises environments using Amazon EKS Anywhere, or to any self-managed Kubernetes cluster on premises or in EC2.

    In AWS Marketplace, customers can find over 500 container applications from Independent Software Vendors (ISVs). Each application is scanned for common vulnerabilities and exposures (CVEs) before being made available in the catalog. By using AWS Marketplace, customers can manage upgrades with a few clicks, and track all of their licenses and bills in AWS Marketplace. Customers can discover commercial and open-source Kubernetes-based applications from popular ISVs and deploy them in any Kubernetes environment in minutes, whether in the cloud or on premises. When using AWS Marketplace for Containers Anywhere, customers receive the same benefits as with any other product in AWS Marketplace, including consolidated billing, flexible payment options, and lower pricing for long-term contracts.

    This launch features products from Palo Alto Networks, JFrog, Veeam/Kasten, Prosimo, Nirmata, Trilio, Solodev, Isovalent/Cilium, HAProxy, and D2iQ. See all container products available in AWS Marketplace.

    » AWS Control Tower introduces Terraform account provisioning and customization

    Posted On: Nov 29, 2021

    We are excited to announce that you can now use Terraform to provision and customize accounts through AWS Control Tower with AWS Control Tower Account Factory for Terraform (AFT). Your developers can now enjoy a streamlined process that automates the provisioning of fully functional accounts, giving your users faster access to the resources they need to be successful.

    AFT provides you with a single Terraform infrastructure as code (IaC) pipeline to provision AWS Control Tower managed accounts that meet your business and security policies before the accounts are handed to end users. AFT automates account creation, monitors until account provisioning is complete, and then triggers additional Terraform modules that enhance the account with any necessary customizations as part of the same IaC pipeline. As part of the customization process, you can configure the pipeline to install your own custom Terraform modules, or you can choose one or more of the AFT Feature Options, which are AWS-provided options for common customizations.

    Customers can get started with AWS Control Tower Account Factory for Terraform by following the steps in the AWS Control Tower User Guide and downloading AFT for their Terraform distribution. AFT supports Terraform Cloud, Terraform Enterprise, and open-source Terraform.

    For a full list of regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.

    » AWS Ground Station launches expanded support for Software Defined Radios in Preview

    Posted On: Nov 29, 2021

    Amazon Web Services (AWS) announces wideband Digital Intermediate Frequency (DigIF) support for Software Defined Radios (SDRs) to help customers downlink more data in less time, saving cost. AWS Ground Station currently supports SDRs for narrowband (less than 54 MHz) but in the past did not support SDRs for wideband (greater than 54 MHz). Expanding SDR support to 400 MHz for wideband enables SDR partners to provide new modulation and encoding schemes, helping Earth imaging businesses, universities, and governments optimize their operational costs.

    With AWS Ground Station providing digital intermediate frequency (DigIF) output, Space customers can select an SDR of their choice to work with Ground Station and benefit from the speed of market innovation. These SDRs perform the modulation and encoding steps in the customer’s Virtual Private Cloud, giving the customer more control over their data and allowing more flexibility to move to different configurations, including higher data rates, as they scale their constellation. Customers can also stream DigIF from AWS Ground Station antennas to SDRs operating on AWS edge devices connected to the AWS global network. AWS Ground Station support for SDRs is available in Preview in the Middle East (Bahrain) region, with more region availability coming soon.

    AWS Ground Station is a fully managed service that lets you control satellite communications, process satellite data, and scale your satellite operations. Customers can easily integrate their space workloads with other AWS services in real-time using Amazon’s low-latency, high-bandwidth global network. Customers can stream their satellite data to Amazon EC2 for real-time processing, store data in Amazon S3 for low cost archiving, or apply AI/ML algorithms to satellite images with Amazon SageMaker. With AWS Ground Station, you pay only for the actual antenna time that you use.

    AWS Ground Station is now available in Oregon (US), Ohio (US), Middle East (Bahrain), Europe (Stockholm), Asia Pacific (Sydney), Europe (Ireland), Africa (Cape Town), Hawaii (US), and Asia Pacific (Seoul). Additional sites will become available in the coming months.

    To learn more about AWS Ground Station, visit the AWS Ground Station page. To get started with AWS Ground Station, visit the AWS Management Console.

    » Securely manage your AWS IoT Greengrass edge devices using AWS Systems Manager

    Posted On: Nov 29, 2021

    Today, AWS IoT Greengrass announced a new integration with AWS Systems Manager that helps IT and edge device administrators to securely manage their edge devices, such as industrial equipment and industrial PCs, alongside their IT assets, such as EC2 instances, AWS Outposts, and on-premises servers. 

    With this launch, edge device administrators can use AWS IoT Greengrass to manage their edge application stack while using AWS Systems Manager to execute OS upgrades, schedule maintenance tasks, and remotely access their edge device fleet. The result is a single, integrated solution that helps edge device administrators manage their full device software stack. For example, a medical device manufacturer can use AWS IoT Greengrass to deploy and manage the applications on their device fleet, including managing software versions, testing updates, and rolling out updates over the air. They can also use AWS Systems Manager to schedule and automate operating system updates and patches, or assist customers with remote troubleshooting.
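
    As a rough sketch, deploying the Systems Manager agent component to a Greengrass target might look like this in Boto3; the target ARN is a placeholder, the component version is an assumption to be replaced with a published version, and the SSM registration configuration is omitted for brevity.

      import boto3

      gg = boto3.client("greengrassv2")

      # Deploy the Systems Manager agent component to a Greengrass thing group.
      gg.create_deployment(
          targetArn="arn:aws:iot:us-east-1:111122223333:thinggroup/EdgeBoxes",
          deploymentName="add-ssm-agent",
          components={
              # Component version is a placeholder; see the component docs.
              "aws.greengrass.SystemsManagerAgent": {"componentVersion": "1.0.0"},
          },
      )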

    With this launch, IT administrators can now use AWS Systems Manager to get a single consolidated view of their IT and edge infrastructure, and manage these resources through a single set of consistent operational policies. For example, a manufacturing firm can use AWS Systems Manager to holistically manage industrial PCs, on-premises instances in their factory, and EC2 instances using an integrated experience.

    This capability is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Europe (Ireland), Asia Pacific (Sydney) Regions. To learn more about managing device software stacks, visit the AWS IoT Greengrass developer guide. To learn more about consolidated management of IT and edge infrastructure, visit the AWS Systems Manager product page.

    » New AWS Competency Program differentiates AWS Partners with Energy Industry Expertise

    Posted On: Nov 29, 2021

    At Amazon Web Services (AWS), we are committed to supporting the global energy industry in safely meeting the world’s energy demands today, while accelerating the transition to a more balanced and sustainable energy future. During re:Invent’s Global Partner Summit keynote on November 29th in Las Vegas, AWS announced the new AWS Energy Competency Program, which differentiates AWS Partners for their technical expertise and repeated customer success with energy customers worldwide.

    A rising demand for innovation is accelerating the energy industry’s adoption of technologies such as cloud computing, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML). These tools can help reduce costs while driving efficiencies in the operation of oil and gas assets, and further the development of a customer’s sustainable, renewable asset portfolio. AWS Energy Competency Partners bring both industry and technical expertise to deliver solutions that unlock value and transform technology into real business advantage for our customers.

    With the new AWS Energy Competency, customers can quickly and confidently identify highly specialized, AWS-vetted Consulting and Technology Partners that help position operators for success as energy portfolios transition to a lower-carbon world. Learn more about the new AWS Energy Competency Program and Partners on the APN Blog announcement.

    » AWS Chatbot now supports management of AWS resources in Slack (Preview)

    Posted On: Nov 29, 2021

    Today, we are announcing the public preview of a new feature that allows you to use AWS Chatbot to manage AWS resources and remediate issues in AWS workloads by running AWS CLI commands from Slack channels. Previously, you could only monitor AWS resources and retrieve diagnostic information using AWS Chatbot.

    With this feature, customers can manage AWS resources directly from their Slack channels. Customers can securely run AWS CLI commands to scale EC2 instances, run AWS Systems Manager runbooks, and change AWS Lambda concurrency limits. Customers can now monitor, operate, and troubleshoot AWS workloads from Slack channels without switching context between Slack and other AWS Management Tools. Additionally, you can configure channel permissions to match your security and compliance needs by modifying account-level settings, using predefined permission templates, and using guardrail policies.

    You can use AWS Chatbot in any commercial AWS Region. Please refer to the Regional Product and Services table for details about AWS Chatbot availability. Visit the product page to explore more about AWS resource management using AWS CLI commands in AWS Chatbot.

    » Introducing the AWS Migration and Modernization Competency

    Posted On: Nov 29, 2021

    Today, we announced the AWS Migration and Modernization Competency. These AWS Partners have deep domain expertise in offering software products that enable customers to migrate and modernize applications as they move to the cloud. AWS Migration and Modernization Competency Partners can help customers optimize cost and reduce TCO, modernize legacy applications and data, and reduce operational burden.

    Lack of legacy application knowledge, business criticality, specialized skill requirements, and unstandardized processes around application modernization can prevent customers from modernizing applications and data at scale. As a result, customers running their workloads on premises and trying to modernize their applications on AWS will sometimes take the path of least resistance and simply lift and shift these workloads. Customers need validated software products to help them reduce the time, cost, and risk of undertaking application modernization initiatives at scale.

    To make it easier for customers to find partners with AWS validated software offerings, we are excited to introduce the new AWS Migration and Modernization Competency.

    AWS Partners with Migration and Modernization Competency have demonstrated technical proficiency and proven customer success delivering software products to accelerate migration and modernization on AWS. 

    Find an AWS Migration and Modernization Competency Partner today.

    Learn more here.

    » Amazon S3 adds new S3 Event Notifications for S3 Lifecycle, S3 Intelligent-Tiering, object tags, and object access control lists

    Posted On: Nov 29, 2021

    You can now build event-driven applications using Amazon S3 Event Notifications that trigger when objects are transitioned or expired (deleted) with S3 Lifecycle, or moved within the S3 Intelligent-Tiering storage class to its Archive Access or Deep Archive Access tiers. You can also trigger S3 Event Notifications for any changes to object tags or access control lists (ACLs). You can generate these new notifications for your entire bucket, or for a subset of your objects using prefixes or suffixes, and choose to deliver them to Amazon EventBridge, Amazon SNS, Amazon SQS, or an AWS Lambda function.

    S3 Event Notifications for S3 Lifecycle and S3 Intelligent-Tiering actions can be used for a wide range of automated workflow use cases. For instance, you can automatically update your Amazon DynamoDB tables, AWS Glue Data Catalogs, or media asset managers to track whether your data, per your S3 Lifecycle configuration, has transitioned into a storage class with retrieval times of minutes or hours, or been expired. In addition, you can now use S3 Event Notifications for changes in object tags to build applications that invoke an AWS Lambda function to resize images or to run machine learning services with Amazon Rekognition.
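
    A minimal Boto3 sketch subscribing an SQS queue to two of the new event types for objects under one prefix; the bucket, queue ARN, and prefix are placeholders, and the queue's access policy is assumed to already allow S3 to send messages.

      import boto3

      s3 = boto3.client("s3")

      # Subscribe a queue to lifecycle expirations and object tag changes.
      s3.put_bucket_notification_configuration(
          Bucket="my-example-bucket",
          NotificationConfiguration={
              "QueueConfigurations": [{
                  "QueueArn": "arn:aws:sqs:us-east-1:111122223333:s3-events",
                  "Events": ["s3:LifecycleExpiration:*", "s3:ObjectTagging:*"],
                  "Filter": {
                      "Key": {"FilterRules": [{"Name": "prefix", "Value": "logs/"}]}
                  },
              }],
          },
      )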

    These new Amazon S3 Event Notifications are now available in all commercial AWS Regions, including the AWS GovCloud (US) Regions. You can configure Amazon S3 Event Notifications in the AWS Management Console or with an API request. To learn more, visit the S3 User Guide.

    Note: AWS services generate events that invoke Lambda functions, and Lambda functions can send messages to AWS services. To avoid infinite loops, take care that Lambda functions do not invoke services or APIs in a way that triggers another invocation of the same function.

    » AWS Karpenter v0.5 Now Generally Available

    Posted On: Nov 29, 2021

    Today, AWS announced that Karpenter, a new open-source Kubernetes cluster autoscaling project, is now Generally Available with version 0.5 and ready for use in production environments. Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and resource utilization. Karpenter launches right-sized EC2 instances in response to changing application load in under a minute. These EC2 instances are based on the specific needs of a cluster’s workloads, such as compute, storage, acceleration, and scheduling requirements. Today, Amazon Elastic Kubernetes Service (EKS) supports clusters using Karpenter on AWS, although Karpenter is designed to work with any conformant Kubernetes cluster.

    Kubernetes customers need to continuously adjust the compute capacity of their clusters to support workloads as they scale and to improve cost efficiency. Previously, customers needed to create dozens of EC2 Auto Scaling groups for the Kubernetes Cluster Autoscaler to work as expected and take advantage of the elasticity of the AWS Cloud. This increased operational overhead and degraded performance as their clusters grew. Moreover, customers who needed to provision hundreds of diverse EC2 instances quickly, such as when training machine learning models, experienced expensive scheduling latency, which slowed the pace of their innovation and increased costs.

    Karpenter is designed to provision new EC2 instances and schedule Kubernetes pods all in under a minute. Karpenter dynamically chooses the EC2 instance types best suited to what Kubernetes pods need with minimal configuration and no additional AWS infrastructure. As workloads scale, Karpenter automatically adds or removes the instances required, reducing the need for costly over-provisioning and preventing slow, expensive scale-downs. Customers get the capacity their clusters need right when they need it because Karpenter directly integrates with EC2. This means that customers can more easily take advantage of deep discounts from Spot and compute savings plans, further reducing costs.

    Learn more and get started with Karpenter today by reading the AWS News launch blog or by visiting karpenter.sh.

    » Amazon BugBust announces the First Annual AWS BugBust re:Invent challenge

    Posted On: Nov 29, 2021

    Today, we are excited to announce the First Annual AWS BugBust re:Invent challenge. Java and Python developers of all skill levels can compete to fix as many software bugs as possible to earn points and climb the global leaderboard. There will be an array of prizes, from hoodies and fly swatters to Amazon Echo Dots, available to participants who meet certain milestones in the challenge. There’s also the coveted title of “Ultimate AWS BugBuster”, accompanied by a cash prize of $1,500 for whoever earns the most points by squashing bugs during the event.

    As participants fix bugs, they become part of an attempt to set the record for the largest code fixing challenge with Guinness World Records. All participants who contribute towards setting the record for the world’s largest code fixing challenge, by fixing even one bug, will receive an exclusive certificate from AWS and Guinness to commemorate their participation.

    The AWS BugBust re:Invent Challenge will run from 10AM PST on November 29, 2021 to 2PM PST on December 2, 2021. Within this time frame, registered participants across the globe, whether participating virtually or on site at the BugBust Hub in the re:Invent expo, can compete to fix as many bugs as possible or improve the performance of a profiling group. For each bug that you fix, you receive points based on the complexity of the bug. The more bugs you fix, the more points you gain and the higher you climb up the leaderboard. The live leaderboard tracks each participant’s progress and showcases the number of bugs fixed and points received.

    AWS BugBust provides an easy and fun solution to transform bug bashes, foster team building, and bring friendly competition to improve code quality and application performance. To help find and exterminate bugs, AWS BugBust utilizes ML-powered developer tools - Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler - to automatically scan code to weed out gnarly bugs and gamify fixing and eliminating them.

    For more information and to participate in the AWS BugBust re:Invent Challenge, please visit the AWS BugBust website.

    » AWS announces the new Amazon Inspector for continual vulnerability management

    Posted On: Nov 29, 2021

    The new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, is now generally available globally. Amazon Inspector has been completely rearchitected to automate vulnerability management and deliver near real-time findings that minimize the time to discover new vulnerabilities.

    With the new Amazon Inspector you can now enable the service across your organization with a single click. Once enabled, Inspector automatically discovers all of your workloads and continually scans them for software vulnerabilities and unintended network exposure. Now that Inspector supports Amazon Elastic Container Registry (ECR), you gain a consolidated view of vulnerabilities across your Amazon EC2 instances and container images residing in ECR. Inspector now uses the widely adopted AWS Systems Manager (SSM) agent for EC2 vulnerability scanning. To intelligently prioritize vulnerability findings, the new Inspector introduces a highly contextualized Inspector risk score that correlates vulnerability information with environmental factors. Inspector findings are also routed to AWS Security Hub and pushed to Amazon EventBridge, enabling automation with partner solutions to reduce mean time to resolution (MTTR).
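
    As a sketch, enabling the service programmatically is a single call through the new inspector2 API; in an AWS Organizations setup, the delegated administrator could pass member account IDs as well.

      import boto3

      inspector = boto3.client("inspector2")

      # Turn on continual EC2 and ECR scanning for the current account.
      inspector.enable(resourceTypes=["EC2", "ECR"])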

    Uber
    “The new Amazon Inspector made it easy to adopt a cloud vulnerability management solution for our diverse AWS instances. By leveraging our already in-use Systems Manager agents with Inspector, we automated continuous remediation and simplified operations with one-click onboarding, centralized controls, and operational visibility,” said Oliver Szimmetat, Security Engineering Manager II, Uber. “Additionally, Inspector’s auto-trigger capability identifies recommended patches in near-real time. After patching, Inspector automatically rescans instances, verifying that no new vulnerabilities were introduced. The use of Inspector has drastically reduced the mean time to remediate for Uber.”

    Volkswagen Financial Services
    “The new Amazon Inspector made it very easy for us to adopt a vulnerability management solution to support our software patching program and to detect vulnerabilities that could lead to unauthorized AWS access,” said Stefan Klünker and Crispin Weißfuß, Global AWS Platform Owners, Volkswagen Financial Services. “Enabling the service to scan both our EC2 and ECR environments for software vulnerabilities was made seamless using CloudFormation. In addition, since Inspector is integrated with AWS Organizations, our 1300+ existing and newly added accounts are automatically onboarded to the service. Inspector discovers all our workloads, continually scans them, consolidates a prioritized list of findings in its console, and reduces our mean time to remediate with near-immediate notifications of new critical vulnerabilities. Furthermore, the Amazon EventBridge integration enables us to quickly inform development teams about the resources with critical vulnerabilities.”

    Canva
    "We have a dynamic AWS environment, with new accounts, configurations, and resources added and removed on a regular basis,” said Paul Clarke, Head of Security at Canva. “Historically, this made it a challenge to ensure we are continuously assessing all resources for vulnerabilities, requiring multiple products with a high maintenance overhead. The new Amazon Inspector helps address this problem, supporting vulnerability scanning for both EC2 instances and containers. Since Inspector integrates with AWS Organizations, all our existing and new accounts are also immediately using the service. The service discovers all our workloads, continually scans them using data from multiple vulnerability notification sources, consolidates a prioritized list of findings in its console, and allows us to focus on vulnerability remediation, rather than managing multiple discovery tools and configurations.”

    Amazon Inspector has partnered with Snyk to receive additional vulnerability intelligence for its vulnerability database. Many AWS Security ISV Partners have integrated their products to further help customers operationalize Inspector findings, including Axonius, Cavirin, FireEye, IBM Security, Palo Alto Networks, Rezilion, Sophos, SumoLogic, Vulcan Cyber, Wiz, and XM Cyber. Additionally, AWS Level 1 MSSP Partners such as Cloudhesive and Deloitte offer their customers a service to manage Inspector findings.

    Amazon Inspector is now generally available across 19 commercial Regions: Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Milan), Europe (London), Europe (Paris), Middle East (Bahrain), South America (Sao Paulo), and US West (N. California). Visit the AWS Regional Services list for details. CloudFormation support will be coming soon. All accounts can scan their environment for vulnerabilities with a free 15-day trial of the new Amazon Inspector.

    To get started:

  • Visit the Amazon Inspector product page
  • Read the AWS News Blog on Amazon Inspector
  • Watch a short overview video
  • Meet the Amazon Inspector partners
  • Scan your environment for vulnerabilities with a free 15-day trial 

    » Amazon S3 Event Notifications with Amazon EventBridge help you build advanced serverless applications faster

    Posted On: Nov 29, 2021

    You can now use Amazon S3 Event Notifications with Amazon EventBridge to build, scale, and deploy event-driven applications based on changes to the data you store in S3. This makes it easier to act on new data in S3, build multiple applications that react to object changes simultaneously, and replay past events, all without creating additional copies of objects or developing new software. With increased flexibility to process events and send them to multiple targets, you can now create new serverless applications with advanced analytics and machine learning at scale more confidently without writing single-use custom code.

    Amazon S3 Event Notifications with Amazon EventBridge allow you to make use of advanced filtering and routing capabilities and send events to 18 targets including AWS Lambda, Amazon Kinesis, AWS Step Functions, and Amazon SQS. S3 Event Notifications with EventBridge can simplify your architecture by allowing you to match any attribute, or a combination of attributes, for objects in an S3 event. This makes it possible for you to filter events by object size, time range, or other event metadata fields before invoking a target AWS Lambda function or other destinations. For example, if millions of audio files are uploaded to an S3 bucket, you can filter for specific files and send an event notification to multiple workflows. Through these multiple workflows, the same event can be used to transcribe an audio file, change its media format for streaming, and apply machine learning to generate a sentiment score. Finally, you can also archive and replay S3 events, giving you the ability to reprocess an event in case of an error or if a new application module is added.
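
    As a hedged sketch of the filtering described above, the following boto3 snippet creates an EventBridge rule that matches only "Object Created" events for objects larger than 1 MiB in one bucket (the bucket name, rule name, and size threshold are illustrative, and EventBridge delivery is assumed to already be enabled in the bucket's notification configuration):

    import json
    import boto3

    events = boto3.client("events")

    # Match S3 "Object Created" events for one bucket, but only for
    # objects larger than 1 MiB.
    pattern = {
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {
            "bucket": {"name": ["my-audio-bucket"]},
            "object": {"size": [{"numeric": [">", 1048576]}]},
        },
    }

    events.put_rule(
        Name="large-audio-uploads",
        EventPattern=json.dumps(pattern),
        State="ENABLED",
    )

    A Lambda function, Step Functions state machine, or any of the other supported targets can then be attached to this rule with a put_targets call.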

    Amazon S3 Event Notifications in Amazon EventBridge are now available in all commercial AWS Regions. You can get started sending S3 Event Notifications to Amazon EventBridge with a few clicks in the AWS Management Console or with a single API request. To learn more, visit the S3 User Guide or read the AWS News Blog. For pricing, visit the Amazon EventBridge pricing page.

    Note: AWS services generate events that invoke Lambda functions, and Lambda functions can send messages to AWS services. To avoid infinite loops, take care to ensure that your Lambda functions do not invoke services or APIs in a way that triggers another invocation of the same function.

    » AWS Compute Optimizer now offers resource efficiency metrics

    Posted On: Nov 29, 2021

    AWS Compute Optimizer now helps you quickly identify and prioritize top optimization opportunities through two new sets of dashboard-level metrics: savings opportunity and performance improvement opportunity.

    Savings opportunity metrics quantify the Amazon EC2, Amazon EBS, and AWS Lambda monthly savings you can achieve at the account level, resource type level, or resource level by adopting Compute Optimizer recommendations. You can use these metrics to evaluate and prioritize cost efficiency opportunities, as well as monitor your cost efficiency over time. Performance improvement opportunity metrics quantify the percentage and number of underprovisioned resources at the account level and resource type level. You can use these metrics to evaluate and prioritize performance improvement opportunities that address resource bottleneck risks.
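
    As a rough illustration, these summaries can also be pulled programmatically with the boto3 Compute Optimizer client; the account ID is a placeholder, and the exact response fields should be checked against the current API reference:

    import boto3

    co = boto3.client("compute-optimizer")

    # Print estimated monthly savings per resource type for one account.
    resp = co.get_recommendation_summaries(accountIds=["111122223333"])
    for summary in resp["recommendationSummaries"]:
        savings = summary.get("savingsOpportunity", {})
        print(
            summary["recommendationResourceType"],
            savings.get("estimatedMonthlySavings"),
        )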

    Savings opportunity and performance improvement opportunity metrics are available in all regions where AWS Compute Optimizer is available, except for AWS Regions in China, for free. You can start using Compute Optimizer through the AWS Management Console, AWS Command Line Interface, or AWS SDK. For more information on the resource efficiency metrics, read this blog. For more information on Compute Optimizer, see Compute Optimizer documentation and webpage.

    » New Greengrass Software Catalog with several new components makes it easier to build IoT edge applications

    Posted On: Nov 29, 2021

    Today, we are launching the Greengrass Software Catalog, a collection of AWS IoT Greengrass software components developed by the Greengrass community. Instead of developing device applications from scratch, you can now choose from a list of pre-built Greengrass components on GitHub to kick-start your IoT edge application. You can easily install, use, and modify these components to accelerate your IoT project. As part of this launch, we are also offering the Greengrass Development Kit Command Line Interface (CLI), which you can use to configure and build the catalog components in your local development environment.

    The catalog includes several new components with capabilities such as data streaming to Amazon Kinesis Video Streams (KVS), Modbus TCP protocol support, local InfluxDB time-series database, and Grafana visualization. For example, for a security monitoring solution, you can use the Amazon KVS component to ingest audio and video streams from RTSP cameras connected to a Greengrass core device. The data can then be streamed to a local monitoring platform or sent to the cloud. Alternatively, for real-time analytics and local operations monitoring, you can use the InfluxDB and Grafana components to locally process and visualize data from IoT sensors and edge devices. Since these components are a reference implementation of common patterns, please ensure that you appropriately review and test any functionality before deploying it to your production environments.

    Go to the Greengrass Software Catalog on GitHub and choose a component to get started building your IoT edge application.

    Please see the AWS Region table for all the Regions where AWS IoT Greengrass is available. To learn more, visit the Greengrass product page and view the updated developer guide.

    » Announcing Amazon Athena ACID transactions, powered by Apache Iceberg (Preview)

    Posted On: Nov 29, 2021

    We are excited to announce the public preview of Amazon Athena ACID transactions, a new capability that adds write, delete, update, and time travel operations to Athena's SQL data manipulation language (DML). Athena ACID transactions enable multiple concurrent users to make reliable, row-level modifications to their Amazon S3 data from Athena's console, API, and ODBC and JDBC drivers. Built on the Apache Iceberg table format, Athena ACID transactions are compatible with other services and engines such as Amazon EMR and Apache Spark that support the Iceberg table format.

    Using Athena ACID transactions, you can now make business- and regulatory-driven updates to your data using familiar SQL syntax and without requiring a custom record locking solution. Responding to a data erasure request is as simple as issuing a SQL DELETE operation. Making manual record corrections can be accomplished via a single UPDATE statement. And with time travel capability, you can recover data that was recently deleted using just a SELECT statement. 
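
    As a minimal sketch (the table, database, and S3 output location are placeholders), these DML statements can be issued through the standard Athena StartQueryExecution API:

    import boto3

    athena = boto3.client("athena")

    def run(sql):
        return athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "my_database"},
            ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
        )

    # Row-level erasure and correction on an Iceberg table.
    run("DELETE FROM customers WHERE customer_id = 'C-1001'")
    run("UPDATE customers SET email = 'new@example.com' WHERE customer_id = 'C-1002'")

    # Time travel: query the table as it was at an earlier point in time.
    run("SELECT * FROM customers FOR TIMESTAMP AS OF TIMESTAMP '2021-11-29 00:00:00'")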

    Athena ACID transactions is available for preview in US East (N. Virginia), US West (Oregon), and Europe (Ireland). To get started with the preview, see Using Amazon Athena Transactions.

    » Announcing availability of AWS Outposts in Costa Rica, Ecuador, Morocco, Nigeria, and Vietnam

    Posted On: Nov 29, 2021

    AWS Outposts can now be shipped and installed at your datacenter and on-premises locations in Costa Rica, Ecuador, Morocco, Nigeria, and Vietnam.

    AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, data residency, and migration of applications with local system interdependencies. 

    With the availability of Outposts in these countries, you can use AWS services to run your workloads and data in country in your on-premises facilities and connect to your nearest AWS Region for management and operations.

    To find out more about AWS Outposts, sign up for our upcoming webinars: Costa Rica and Ecuador (in Spanish) for customers and partners, and Vietnam (in English). To get started, visit the AWS Management Console. To learn more, read the product overview and user guide. For the most updated list of AWS Regions where Outposts is supported, check out the Outposts FAQs page.

    » AWS Control Tower now provides controls to meet data residency requirements

    Posted On: Nov 29, 2021

    We’re pleased to announce that AWS Control Tower now offers new guardrails to provide more control over the physical location of where customer data is stored and processed, a concept known as data residency. Control Tower data residency guardrails help ensure customer data, the personal data you upload to the AWS services under your AWS account, is not stored or processed outside a specific AWS Region or Regions.

    Many companies have workloads and applications that operate globally, and increasingly, data residency requirements mean they need to plan for the geographical location of their customer data. If you are a public sector organization, or if you operate in a regulated industry like finance, government, or healthcare, data residency is often a necessary part of your modern data strategy.

    With Control Tower’s new data residency guardrails you can specify the AWS Region or Regions your customer data is stored and processed in, and if you need even more granular control, you can choose from 17 new guardrails that are purpose-built to enable data residency controls, such as “Disallow Amazon Virtual Private Network (VPN) connections” or “Disallow internet access for an Amazon VPC instance”. You can see the compliance status of the guardrails and whether your data residency requirements are being met in the AWS Control Tower console. For a full list of available guardrails, see documentation on Control Tower guardrails.

    AWS Control Tower offers the easiest way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. Customers can automate the creation of new AWS accounts using AWS Control Tower’s account factory and enable governance features such as guardrails, centralized logging, and monitoring in supported AWS Regions. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.

    » Amazon ECR announces pull through cache repositories

    Posted On: Nov 29, 2021

    Amazon Elastic Container Registry (Amazon ECR) now supports pull through cache repositories, a new feature designed to automatically sync images from publicly accessible registries. With today’s release, customers now benefit from the download performance, security, and availability of Amazon ECR for public images.

    Organizations rely on public registries for images to build or complement their container applications. The more public images that are pulled, the more teams need to ensure they stay updated with the public image’s source registry. With pull through cache repositories, there are no additional solutions or tools to manage. Using pull through cache, customers can cache public images into their ECR registry and leverage all the benefits of Amazon ECR without the operational burden of syncing public images. It supports frequent source registry syncs, helping to keep container images sourced from public registries up to date, and lets customers use ECR features for both public and private images.
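
    For example, a pull through cache rule for Amazon ECR Public can be created with a single boto3 call (the repository prefix is your choice; ECR Public is one of the supported upstream registries):

    import boto3

    ecr = boto3.client("ecr")

    # Cache images from Amazon ECR Public under the "ecr-public/" prefix
    # in this private registry.
    ecr.create_pull_through_cache_rule(
        ecrRepositoryPrefix="ecr-public",
        upstreamRegistryUrl="public.ecr.aws",
    )

    A subsequent docker pull of <account>.dkr.ecr.<region>.amazonaws.com/ecr-public/<repository>:<tag> then fetches the image through the cache and keeps it synced with the upstream registry.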

    To learn more about Amazon ECR pull through cache repositories, read our blog post and review our technical documentation.

    » Announcing new Amazon EC2 G5g instances powered by AWS Graviton2 processors

    Posted On: Nov 29, 2021

    Today, we are announcing the new Amazon EC2 G5g instances, powered by AWS Graviton2 processors and featuring NVIDIA T4G Tensor Core GPUs. G5g instances are the first Arm-based instances in a major cloud to feature GPU acceleration, and they provide the best price performance in Amazon EC2 for Android game streaming. With G5g instances, Android game developers can run games natively on Arm-based GPU instances, encode the rendered graphics, and stream the game over the network to a mobile device. This helps simplify development effort and lowers the cost per stream per hour by up to 30%. G5g instances are also ideal for machine learning developers who are looking for cost-effective inference, have ML models that are sensitive to CPU performance, and leverage NVIDIA’s AI libraries.

    AWS Graviton2 processors are custom-designed by AWS using 64-bit Arm Neoverse N1 cores to enable the best price performance for workloads in Amazon EC2. They deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x the performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per-core encryption performance compared to the first-generation AWS Graviton processors. G5g instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

    Amazon EC2 instances powered by the AWS Graviton2, including G5g instances, are supported by popular Linux operating systems including Red Hat Enterprise Linux, SUSE, and Ubuntu. Many popular applications and services for security, monitoring and management, containers, and CI/CD from AWS and Independent Software Vendors also support AWS Graviton2-based instances.

    The G5g instances are now available in 5 Regions globally: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Asia Pacific (Singapore). They are purchasable On-Demand, as Reserved Instances, as Spot Instances, or as part of Savings Plans, and are available in 6 sizes providing up to 64 vCPUs, 2 NVIDIA T4G Tensor Core GPUs, 32 GB memory, 25 Gbps of networking bandwidth, and 19 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth.

    Machine learning developers can quickly get started with G5g instances by using AWS Deep Learning Linux AMI with NVIDIA drivers and popular machine learning frameworks such as Tensorflow and PyTorch pre-installed. Graphics customers can download the NVIDIA drivers from NVIDIA’s public driver download page. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the Amazon EC2 G5g page.
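
    As a quick sketch, launching a G5g instance is the same RunInstances call as any other instance type; the AMI ID below is a placeholder for an Arm64 AMI (such as an arm64 Deep Learning AMI) in your Region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single g5g.xlarge instance.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="g5g.xlarge",
        MinCount=1,
        MaxCount=1,
    )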

    » Introducing Amazon CloudWatch RUM for monitoring applications’ client-side performance

    Posted On: Nov 29, 2021

    Amazon CloudWatch RUM is a real-user monitoring capability that helps you identify and debug client-side issues in web applications and enhance the end user’s digital experience. CloudWatch RUM enables application developers and DevOps engineers to reduce mean time to resolve (MTTR) for client-side performance issues. Amazon CloudWatch RUM is part of CloudWatch’s Digital Experience Monitoring, along with Amazon CloudWatch Synthetics and Amazon CloudWatch Evidently.

    Using CloudWatch RUM, you can view how your applications are performing in near real-time across different geolocations, browsers, and devices, enabling you to optimize their performance. You can use CloudWatch RUM’s curated dashboards to view anomalies in your application’s performance, including page load steps, core web vitals, and JavaScript and HTTP errors. You can also see how many user sessions are impacted by an issue, helping you prioritize which issues to fix. CloudWatch RUM surfaces relevant debugging data such as error messages, stack traces, and sessions in an easy-to-use dashboard. On the CloudWatch RUM console, you can also correlate traces from the client side to backend infrastructure nodes through integration with CloudWatch ServiceLens and AWS X-Ray. With the addition of client-side performance data in CloudWatch RUM, you can now use CloudWatch for end-to-end monitoring.
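
    As a hedged sketch, an app monitor can be created programmatically with the boto3 RUM client (the name, domain, and telemetry settings are illustrative); the console then generates the JavaScript snippet you embed in your pages to start collecting data:

    import boto3

    rum = boto3.client("rum")

    rum.create_app_monitor(
        Name="my-web-app",
        Domain="www.example.com",
        AppMonitorConfiguration={
            "AllowCookies": True,
            "EnableXRay": True,
            "SessionSampleRate": 1.0,
            "Telemetries": ["errors", "performance", "http"],
        },
    )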

    Starting today, CloudWatch RUM is generally available in 10 AWS Regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Europe (London).

    To get started, see the following list of resources:

  • Blog post on getting started with CloudWatch RUM 
  • User guide on CloudWatch RUM 
  • For pricing, please refer to the Amazon CloudWatch pricing page

    » AWS Compute Optimizer now offers enhanced infrastructure metrics, a new feature for EC2 recommendations

    Posted On: Nov 29, 2021

    AWS Compute Optimizer now offers enhanced infrastructure metrics, a paid feature that, when activated, enhances your Amazon EC2 instance and Auto Scaling group recommendations by capturing monthly or quarterly utilization patterns. Compute Optimizer does this by ingesting and analyzing up to six times more Amazon CloudWatch utilization metrics history than the default Compute Optimizer option (up to 3 months of history vs. 14 days). You can activate the feature at the organization, account, or resource level via the Compute Optimizer console or API for all existing and newly created EC2 instances and Auto Scaling groups.
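
    For illustration, activating the preference at the account level might look like the following boto3 sketch (the account ID is a placeholder; check the API reference for the exact preference fields):

    import boto3

    co = boto3.client("compute-optimizer")

    # Activate 3 months of CloudWatch history for EC2 recommendations
    # across one account.
    co.put_recommendation_preferences(
        resourceType="Ec2Instance",
        scope={"name": "AccountId", "value": "111122223333"},
        enhancedInfrastructureMetrics="Active",
    )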

    Enhanced infrastructure metrics is available in all Regions that AWS Compute Optimizer is available in, except for AWS Regions in China. The feature costs $0.0003360215 per hour and is charged based on the number of hours per month the resource is running. For more information on enhanced infrastructure metrics, see this blog and Compute Optimizer documentation, pricing, and FAQs.

    » Introducing Amazon CloudWatch Metrics Insights (Preview)

    Posted On: Nov 29, 2021

    Amazon CloudWatch Metrics Insights, now in preview, is a fast, flexible, SQL-based query engine that enables developers, operators, systems engineers, and cloud solutions architects to identify trends and patterns across millions of operational metrics in real time, and helps you use these insights to reduce time to resolution. With Metrics Insights, you can gain better visibility into your infrastructure and large-scale application performance with flexible querying and on-the-fly metric aggregations. Use Metrics Insights and other CloudWatch features to monitor your AWS and hybrid environments, and to respond to operational problems promptly.

    Metrics Insights provides you with a flexible query capability, where you can aggregate and group your metrics in real-time in order to identify issues quickly. For example, you can analyze thousands of EC2 instances by CPU Utilization to troubleshoot an underperforming application. You can group your instance metrics by InstanceId to narrow down your analysis and pinpoint failing instances rapidly. Once any failing instance is isolated, you can recover the application by rebooting problematic instances. Moreover, you can use your queries to create powerful visualizations using a range of out-of-the-box chart types that will stay up to date as resources are deployed or terminated, helping you proactively monitor and pinpoint issues quickly.
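
    As a sketch of that example, a Metrics Insights query can be run through the existing GetMetricData API by passing the SQL as an expression (the query and time window are illustrative):

    from datetime import datetime, timedelta

    import boto3

    cw = boto3.client("cloudwatch")

    # Top 10 EC2 instances by average CPU utilization over the last hour.
    query = (
        'SELECT AVG(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
        "GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10"
    )
    resp = cw.get_metric_data(
        MetricDataQueries=[{"Id": "q1", "Expression": query, "Period": 300}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
    )
    for series in resp["MetricDataResults"]:
        print(series["Label"], series["Values"][:3])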

    It is easy to get started with Metrics Insights. Metrics Insights supports standard SQL, and you can also get started with the visual query builder: select your metrics, namespaces, and dimensions, and the console automatically constructs SQL queries based on your selections. You can switch to the query editor at any time to type raw SQL queries and drill down into finer-grained detail.

    Metrics Insights is now available in all commercial AWS Regions and you can start using it immediately. To get started, click the All metrics link under Metrics in the left navigation panel of the CloudWatch console and browse to the Query tab. Metrics Insights is also available in the Amazon Managed Grafana console. To learn more about Metrics Insights, please refer to our documentation.

    » New AWS GameDay Benefits for Differentiated Partners

    Posted On: Nov 29, 2021

    The AWS Partner Network (APN) introduces AWS GameDay Benefits for AWS Partners in Differentiation Programs: AWS Service Delivery, AWS Service Ready, AWS Competency, and AWS Managed Service Provider Programs. Through AWS GameDay Benefits, partners can choose AWS GameDay League, AWS GameDay Quests Developer Kit (QDK), or both, as benefits of their AWS Partner Differentiation Program achievements! Elevated levels of technical enablement, direct connections with AWS experts, and quality leads are the top benefit asks of AWS Partners who participate in Differentiation Programs. AWS GameDay Benefits for Partners delivers all three. AWS GameDay Benefits provide tangible value-added opportunities for partners in return for their work to attain technical validation through our programs.

    AWS GameDay League is made up of teams from global partner companies who compete against each other in hands-on technical challenges to build new cloud skills and capabilities. “GameDay League is a top benefit. It allows us to promote our AWS expertise to customers, and it’s a learning environment we need to retain the best and brightest cloud talent,” said Corey Brunisholz, Principal Cloud Solution Architect, Presidio. Once your company achieves an AWS Service Ready, Service Delivery, Competency, or MSP Program designation, our League scouts will email League tournament invitations to your alliance lead and technical staff. Fans can subscribe to the AWS GameDay Twitch channel and follow AWS GameDay on Twitter.

    AWS GameDay Quests Developer Kit (QDK) allows partners to integrate their products into interactive scenarios using AWS to be leveraged as repeatable, lead-generating events with customers. This unique sales tool overcomes traditional barriers like subscription trials and building test scenarios in sandbox accounts. "One of the best ways to learn a new technology is by using it. With GameDays, we have been able to provide safe and interactive experiences for learning DevSecOps in the cloud," said Ilan Rabinovitch, SVP Product and Community at Datadog. "Participants leave these events with practical knowledge and a clear understanding of our better together story with AWS."

    AWS Partners can learn more with the GameDay Partner Benefits Guide in Partner Central. Once your company achieves an AWS Service Ready, Service Delivery, Competency, or MSP Program designation, you can nominate your software offering by emailing awsgamedaybenefits@amazon.com to get started.

    If you are not yet in an AWS Differentiation Program, start an application under “View My APN Account” in Partner Central for the program that best aligns with your specializations.

    » Amazon CodeGuru Reviewer now detects hardcoded secrets in Java and Python repositories

    Posted On: Nov 29, 2021

    Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality and identify an application’s most expensive lines of code.

    Today we are announcing a new secrets detector feature that searches your codebase for hardcoded secrets. It can pinpoint locations in your code of usernames and passwords, database connection strings, tokens, and API keys from AWS and other service providers. When a secret is found in your code, CodeGuru Reviewer provides an actionable recommendation that links to AWS Secrets Manager where developers can secure the secret with a point-and-click experience.

    When you add a new repository to Amazon CodeGuru Reviewer, secrets detector automatically searches Python and Java source code, in addition to configuration and documentation files, for secrets. As your codebase evolves, CodeGuru Reviewer continues to help you keep your secrets protected by integrating into your pull request workflow or CI/CD pipeline.
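
    For illustration, associating a repository (here an AWS CodeCommit repository; the name is a placeholder) is a single boto3 call, after which full-repository and pull-request scans, including secrets detection, run automatically:

    import boto3

    reviewer = boto3.client("codeguru-reviewer")

    reviewer.associate_repository(
        Repository={"CodeCommit": {"Name": "my-java-service"}}
    )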

    To get started with Amazon CodeGuru Reviewer secrets detector, visit the blog, CodeGuru Reviewer Features or the user guide. To learn more about Amazon CodeGuru Reviewer, take a look at the Amazon CodeGuru page. To contact the team visit the Amazon CodeGuru developer forum.

    » Announcing AWS IoT RoboRunner, Now Available in Preview

    Posted On: Nov 29, 2021

    AWS IoT RoboRunner is a new robotics service that makes it easier for enterprises to build and deploy applications that help fleets of robots work together seamlessly. AWS IoT RoboRunner reduces the complex development work required to connect robots to each other and to the rest of your industrial software systems, making it easier to build applications that interoperate and orchestrate robots from a single view.

    AWS IoT RoboRunner collects and combines data from each type of robot in a fleet and standardizes data types like facility, location, and robotic task data in a central repository. Developers can use AWS IoT RoboRunner’s APIs and software libraries to build applications on top of the centralized repository for use cases such as task orchestration, space management, and robot collaboration. With AWS IoT RoboRunner, enterprises can improve the efficiency of robotics fleets and reduce the costs of running robotic operations.

    The AWS IoT RoboRunner service is available in preview in US East (N. Virginia) and Europe (Frankfurt) Regions. To learn more visit the AWS IoT RoboRunner product page.

    » Introducing intelligent user segmentation in Amazon Personalize, helping you to run more effective marketing campaigns

    Posted On: Nov 29, 2021

    Amazon Personalize now offers intelligent user segmentation which allows you to run more effective prospecting campaigns through your marketing channels. Traditionally, user segmentation has relied on demographic information and manually curated business rules to make assumptions about users’ intentions and assign them to pre-defined audience segments. Amazon Personalize uses machine learning techniques to learn about your items, users, and how your users interact with your items. Amazon Personalize segments users based on their preferences for different products, categories, brands, and more. This can help you drive higher engagement with marketing campaigns, increase retention through targeted messaging, and improve the return on investment for your marketing spend.

    Our new recipes are simple to use. Provide Amazon Personalize with data about your items and your users’ interactions, and Amazon Personalize will learn your users’ preferences. When given an item or item attribute, Amazon Personalize recommends a list of users sorted by their propensity to interact with the item or with items that share the attribute.
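
    As a hedged sketch, a user segment can be generated with a batch segment job against a trained solution version (all ARNs and S3 paths are placeholders, and the recipe choice is an assumption):

    import boto3

    personalize = boto3.client("personalize")

    # Produce the 1,000 users most likely to engage with the items
    # listed in the S3 input file.
    personalize.create_batch_segment_job(
        jobName="holiday-campaign-segment",
        solutionVersionArn="arn:aws:personalize:us-east-1:111122223333:solution/item-affinity/v1",
        numResults=1000,
        jobInput={"s3DataSource": {"path": "s3://my-bucket/segment-input.json"}},
        jobOutput={"s3DataDestination": {"path": "s3://my-bucket/segment-output/"}},
        roleArn="arn:aws:iam::111122223333:role/PersonalizeS3Access",
    )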

    Amazon Personalize enables you to personalize your website, app, ads, emails, and more, using the same machine learning technology as used by Amazon, without requiring any prior machine learning experience. To get started with Amazon Personalize, visit our documentation.

    » Announcing AWS Data Exchange for APIs

    Posted On: Nov 29, 2021

    We are announcing the launch of AWS Data Exchange for APIs, a new feature that enables customers to find, subscribe to, and use third-party API products from providers on AWS Data Exchange. With AWS Data Exchange for APIs, customers can leverage AWS-native authentication and governance, explore consistent API documentation, and utilize supported AWS SDKs to make API calls. Data providers can now reach millions of AWS customers that consume API-based data by adding their APIs to the AWS Data Exchange catalog, and more easily manage subscriber authentication, entitlement, and billing.

    With AWS Data Exchange for APIs, customers can leverage AWS-native authentication when setting access permissions for team members, to help simplify governance and compliance. The customer’s API charges can be viewed in a consolidated AWS bill, which helps simplify billing and payments. AWS SDKs (available in 6 programming languages) help reduce developer effort and facilitate integration with applications deployed to AWS Lambda, Amazon Elastic Compute Cloud, and other AWS services.
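
    As a rough sketch, once you are entitled to an API product, a call goes through the AWS Data Exchange SendApiAsset operation with SigV4 signing handled by the SDK (the IDs below are placeholders taken from your subscribed product):

    import boto3

    dx = boto3.client("dataexchange")

    resp = dx.send_api_asset(
        DataSetId="data-set-id",
        RevisionId="revision-id",
        AssetId="asset-id",
        Method="GET",
        Path="/",
    )
    print(resp["Body"])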

    AWS Data Exchange for APIs is available in all regions where AWS Data Exchange is available, including the US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London) Regions.

    To explore available AWS Data Exchange for APIs products, visit the AWS Data Exchange data catalog. If you’re a registered data provider you can learn more about licensing data in AWS Data Exchange for APIs. If you’re not already a registered data provider, see our documentation on how to become a data provider.

    » Amazon EBS Snapshots introduces a new tier, Amazon EBS Snapshots Archive, to reduce the cost of long-term retention of EBS Snapshots by up to 75%

    Posted On: Nov 29, 2021

    Starting today, you can use Amazon EBS Snapshots Archive, a new tier for EBS Snapshots, to save up to 75% on storage costs for EBS Snapshots that you intend to retain for more than 90 days and rarely access. EBS Snapshots are incremental, storing only the changes since the last snapshot and making them cost effective for daily and weekly backups that need to be accessed frequently. You might also have snapshots that you access every few months or years and do not need fast access to data, such as snapshots created at the end of a project or snapshots that need to be retained long-term for regulatory reasons. For such use cases, you can now use EBS Snapshots Archive to store full, point-in-time snapshots at a storage cost of $0.0125/GB-month*. Snapshots in the archive tier have a minimum retention period of 90 days. Retrievals from the archive tier will incur a charge of $0.03/GB* of data transferred.

    *us-east-1 pricing

    You can archive snapshots with a single API call. When you archive a snapshot, a full snapshot archive is created that contains all the data needed to create your EBS Volume. To create a volume from the snapshot archive, you can restore the snapshot archive to the standard tier, and then create an EBS volume from the snapshot in the same way you do today.
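
    For illustration, archiving and later restoring a snapshot are single boto3 calls (the snapshot ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Move a snapshot to the archive tier.
    ec2.modify_snapshot_tier(
        SnapshotId="snap-0123456789abcdef0",
        StorageTier="archive",
    )

    # Later: restore it to the standard tier before creating a volume.
    ec2.restore_snapshot_tier(
        SnapshotId="snap-0123456789abcdef0",
        PermanentRestore=True,
    )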

    To learn more about EBS Snapshots Archive, please use the technical documentation and pricing pages. The feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS commercial Regions with the exception of China, Asia Pacific (Seoul), Asia Pacific (Osaka), Canada (Central), and South America (São Paulo).

    » Introducing recommenders optimized to deliver personalized experiences for Media & Entertainment and Retail with Amazon Personalize

    Posted On: Nov 29, 2021

    Today, Amazon Personalize is excited to announce recommenders, which are optimized to deliver personalized experiences for common use cases in Media & Entertainment and Retail. It is now faster and easier to deliver high-performing personalized user experiences in your applications, with no ML expertise required. Recommenders reduce the time needed to build and deliver personalized experiences and fully manage the lifecycle of the experience to help ensure you recommend what is most relevant to your users.

    Tailoring experiences to users requires different types of recommendations at different points in a user’s journey. Media & Entertainment applications drive greater engagement and retention with personalized recommendations like “Top Picks” for users on the welcome screen and “More Like X” on video detail pages where the context of what a user has watched is critical to discover what to watch next. Retail businesses need recommendations to highlight “Best Sellers” and the items “Frequently Bought Together” to enable customers to more easily build their baskets at check-out. Amazon Personalize’s recommenders simplify the creation and maintenance of these personalized user experiences. Personalize considers the business-specific context and selects the optimal settings for our underlying machine learning models used to serve the recommendations. By fully managing the lifecycle of maintaining and hosting these models, Amazon Personalize makes it easier and faster to deliver these experiences in your application.

    Media & Entertainment customers can choose from use cases such as:

  • “Most Popular”
  • “Because You Watched X”
  • “More Like X”
  • “Top Picks For You”

    Retail customers can choose from use cases such as:

  • “Best Sellers”
  • “Most Viewed”
  • “Frequently Bought Together”
  • “Customers Who Viewed This Also View”
  • “Recommended For You”

    Recommenders in Amazon Personalize enable you to personalize your website, app, ads, emails, and more, using the same machine learning technology as used by Amazon.com, without requiring any prior machine learning experience. To get started with Amazon Personalize, visit our documentation.
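
    As a hedged sketch of the workflow, a recommender is created once against a domain dataset group and then queried at runtime (the ARNs are placeholders, and the recipe name is an assumption based on the use cases above):

    import boto3

    personalize = boto3.client("personalize")
    runtime = boto3.client("personalize-runtime")

    # Create a "Top Picks For You" recommender in a Video On Demand
    # domain dataset group.
    resp = personalize.create_recommender(
        name="top-picks",
        datasetGroupArn="arn:aws:personalize:us-east-1:111122223333:dataset-group/vod",
        recipeArn="arn:aws:personalize:::recipe/aws-vod-top-picks",
    )

    # Once the recommender is active, fetch recommendations for a user.
    recs = runtime.get_recommendations(
        recommenderArn=resp["recommenderArn"],
        userId="user-42",
    )
    print([item["itemId"] for item in recs["itemList"]])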

    » Introducing Amazon CloudWatch Evidently for feature experimentation and safer launches

    Posted On: Nov 29, 2021

    Amazon CloudWatch Evidently is a new capability that helps application developers safely validate new features across the full application stack. Developers can use Evidently to conduct experiments on new application features and identify unintended consequences, thereby reducing risk. When launching new features, developers can expose them to a small subset of users, monitor key metrics such as page load times and conversions, and then safely dial up traffic for general use. Amazon CloudWatch Evidently is part of CloudWatch’s Digital Experience Monitoring capabilities, along with Amazon CloudWatch Synthetics and Amazon CloudWatch RUM.

    Evidently helps you remove the guesswork from deciding which features are best for your business, whether it’s a new user experience, a machine learning recommendation model, or a server-side implementation. Experiment results are presented clearly, so you don’t need advanced statistical knowledge to interpret them. While an experiment is running, anytime p-values and confidence intervals let you see when results reach statistical significance, so you can end the experiment early. To launch with confidence, Evidently provides a granular scheduling capability to dial up traffic in a controlled manner while monitoring key business and performance metrics for the new feature. You can define alarms to roll back to a safe state if there are issues with the launch. Evidently also integrates with CloudWatch RUM, a new feature for client-side application performance monitoring, so RUM metrics can be used directly in Evidently.
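
    For illustration, serving a feature variation from application code is a single runtime call (the project, feature, and entity names are placeholders):

    import boto3

    evidently = boto3.client("evidently")

    # Ask Evidently which variation of the feature this user should see.
    decision = evidently.evaluate_feature(
        project="my-web-app",
        feature="new-checkout-flow",
        entityId="user-42",
    )
    if decision["variation"] == "newFlow":
        pass  # render the new experience here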

    Starting today, Evidently is generally available in 9 AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore).

    To get started, see the following list of resources:

  • Blog post on getting started with Evidently
  • User guide on Evidently
  • For pricing, please refer to the Amazon CloudWatch pricing page

    » Announcing Amazon Braket Hybrid Jobs for running hybrid quantum-classical workloads on Amazon Braket

    Posted On: Nov 29, 2021

    Amazon Braket Hybrid Jobs enables you to easily run hybrid quantum-classical algorithms, such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), which combine classical compute resources with quantum computing devices to optimize the performance of today’s quantum systems. With this new feature, you only have to provide your algorithm script and choose a target device, either a quantum processing unit (QPU) or a quantum circuit simulator. Amazon Braket Hybrid Jobs is designed to spin up the requested classical resources when your target quantum device is available, run your algorithm, and release the instances after completion so you only pay for what you use. Braket Hybrid Jobs can provide live insights into algorithm metrics to monitor your algorithm as it progresses, enabling you to make adjustments more quickly. Most importantly, your jobs have priority access to the selected QPU for the duration of your experiment, putting you in control and helping to provide faster and more predictable execution.

    To run a job with Braket Hybrid Jobs, you need to first define your algorithm using either the Amazon Braket SDK or PennyLane. You can also use TensorFlow and PyTorch or create a custom Docker container image. Next, you create a job via the Amazon Braket API or console, where you provide your algorithm script (or custom container), select your target quantum device, and choose from a variety of optional settings including the choice of classical resources, hyper-parameter values, and data locations. If your target device is a simulator, Braket Hybrid Jobs is designed to start executing right away. If your target device is a QPU, your job will run when the device is available and your job is first in the queue. You can define custom metrics as part of your algorithm, which can be automatically reported to Amazon CloudWatch and displayed in real time in the Amazon Braket console. Upon completion, Braket Hybrid Jobs writes your results to Amazon S3 and releases your resources.
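
    As a minimal sketch using the Amazon Braket SDK (the script path and hyperparameters are placeholders), submitting a hybrid job against the SV1 simulator can look like this:

    from braket.aws import AwsQuantumJob

    # Submit algorithm_script.py as a hybrid job targeting the SV1
    # on-demand simulator; results land in Amazon S3 when it completes.
    job = AwsQuantumJob.create(
        device="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
        source_module="algorithm_script.py",
        hyperparameters={"n_qubits": "4", "iterations": "10"},
        wait_until_complete=False,
    )
    print(job.arn)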

    The Hybrid Jobs feature is available in all Regions where Amazon Braket is available and can be used with all devices on Amazon Braket. To learn more, you can read the AWS News Blog post and the documentation. To get started, you can follow one of the example notebooks on GitHub, which also come preinstalled in Amazon Braket notebooks.

    » AWS price reduction for data transfers out to the internet

    Posted On: Nov 26, 2021

    Effective December 1, 2021, AWS is making two pricing changes for data transfer out to the internet. Each month, the first terabyte of data transfer out of Amazon CloudFront, the first 10 million HTTP/S requests, and the first 2 million CloudFront Functions invocations will be free. Free data transfer out of CloudFront is no longer limited to the first 12 months. In addition, the first 100 gigabytes per month of data transfer out from all AWS Regions (except China and GovCloud) will be free. Free data transfer out from AWS Regions is also no longer limited to the first 12 months. These changes will replace the existing data transfer and CloudFront AWS Free Tier offerings, and AWS customers will see these changes automatically reflected in their AWS bills going forward. All AWS customers will benefit from these pricing changes, and millions of customers will see no data transfer charges as a result.

    Today, the majority of data transfer out to the internet is from customers that are hosting live video, websites, mobile applications, and APIs on AWS. Most of this traffic is served using Amazon CloudFront, AWS’s content delivery network, which securely delivers content with low latency and high availability. CloudFront is built on the AWS global network, which is the largest network in the world, with hundreds of thousands of miles of network backbone interconnecting 300+ Points of Presence, 81 Availability Zones, and 25 Regions. Every data center, Availability Zone, and AWS Region is interconnected via a purpose-built, highly available, and low-latency private global network infrastructure. Designed for the most demanding workloads, the AWS network is built with a fully redundant 100 GbE fiber network backbone and hundreds of terabits of capacity. This enables AWS customers to benefit from a high-performance, reliable, flexible, scalable, and secure network—while reducing costs.

    The CloudFront free tier is a full-service free tier, meaning customers can use all CloudFront features, such as support of website images, media workloads, and APIs, without service restrictions or data type limitations. Customers will also continue to benefit from free origin fetches from their AWS origins that are hosted on AWS Regional services to CloudFront. Compared to the current CloudFront free tier, which provides 50GB per month of free data transfer out for the first 12 months, this new free tier offers 20 times the amount of free data transfer out per month and does not expire after 12 months.

    Customers with use cases that transfer data directly out of AWS Regions instead of using CloudFront, such as game streaming or live text and audio communication, will benefit from an expanded offering for free data transfer out to the internet. With this price reduction, the first 100GB per month of data transferred out of an AWS Region will be free. Now, for millions of AWS customers, the portion of their AWS bill for data transfer out to the internet will be $0. The new perpetual free tier replaces the current free tier, which offers up to 1GB per month of data transfer to the internet.

    In addition to the millions of customers who will benefit from free CloudFront and AWS data transfer, customers with greater than 1TB of CloudFront data transfer or greater than 100GB of AWS data transfer out to the internet will also see a meaningful reduction in their AWS bills. For instance, with the new expanded free tier, a CloudFront customer with data transfer usage of 2TB and 20 million HTTP requests monthly will see a 50% reduction in their monthly bill for Amazon CloudFront, while a customer who doesn’t use CloudFront but sends 400 GB per month directly out from AWS services such as Amazon EC2, Amazon S3, and Elastic Load Balancing will see a 25% cost reduction. These price reductions cannot be combined with any other discount.

    To learn more about the new price reductions going into effect, refer to the AWS News Blog, or get started now at https://aws.amazon.com/free/.

    » AWS Lambda now supports event filtering for Amazon SQS, Amazon DynamoDB, and Amazon Kinesis as event sources

    Posted On: Nov 26, 2021

    AWS Lambda now provides content filtering options for SQS, DynamoDB, and Kinesis as event sources. With event pattern content filtering, customers can write complex rules so that their Lambda function is only triggered by SQS, DynamoDB, or Kinesis under the filtering criteria they specify. This helps reduce traffic to customers’ Lambda functions, simplifies code, and reduces overall cost.

    Customers can specify up to 5 filter criteria when creating or updating the event source mappings for their Lambda functions triggered by SQS, DynamoDB or Kinesis. The filters are combined using OR logic by default. In other words, an event/payload meeting any of the filtering criteria defined will be passed on to trigger a Lambda function while an event/payload not matching any of the filtering criteria will be dropped. This feature helps reduce function invocations for microservices that only use a subset of events available, removing the need for the target Lambda function or downstream applications to perform filtering. 
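
    As a sketch (the queue ARN, function name, and pattern are illustrative), a filter is attached when creating the event source mapping; for SQS, the pattern is matched against the message body:

    import json
    import boto3

    lam = boto3.client("lambda")

    # Invoke the function only for SQS messages whose JSON body has
    # status == "error".
    lam.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:111122223333:orders-queue",
        FunctionName="process-failed-orders",
        FilterCriteria={
            "Filters": [
                {"Pattern": json.dumps({"body": {"status": ["error"]}})}
            ]
        },
    )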

    Content filtering is available in all commercial Regions where AWS Lambda is available. There is no additional cost for using this feature beyond the standard price for AWS Lambda.

    To learn more about the feature, visit the AWS Lambda documentation for event filtering.

    » AWS App2Container now supports Jenkins for setting up a CI/CD pipeline

    Posted On: Nov 26, 2021

    AWS App2Container (A2C) now supports Jenkins for setting up a CI/CD pipeline to automate building and deploying applications in containers on AWS. With this new integration, customers can configure their existing Jenkins pipeline in their current Jenkins environment to manage automated build and deployment of containerized applications.

    AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on-premises or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions.

    Jenkins is an open source automation server which supports building, deploying, and automating your application with the help of Jenkins Pipeline. Jenkins Pipeline is a suite of plugins that supports implementing and integrating continuous delivery pipelines into Jenkins. These plugins can be used to integrate with AWS App2Container to automate deployments for your applications. App2Container can help configure a Jenkins pipeline in your existing Jenkins environment. This is in addition to AWS CodePipeline support already included in App2Container. 

    To learn more, refer to App2Container technical documentation for setting up CI/CD pipeline with Jenkins. 

    » AWS Single Sign-On is now in scope for AWS SOC reporting

    Posted On: Nov 24, 2021

    AWS Single Sign-On (AWS SSO) is now in scope for AWS SOC 1, SOC 2, and SOC 3 reports. You can now use AWS SSO in applications requiring audited evidence of the controls in our System and Organization Controls (SOC) reporting. For example, if you use AWS to manage access to accounts and applications, you can use the SOC reports to help meet your compliance requirements for those use cases. AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives.

    AWS SSO is where you create, or connect, your workforce identities in AWS once and manage access centrally across your AWS organization. You can choose to manage access just to your AWS accounts or cloud applications. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a user portal to access all of their assigned AWS accounts or cloud applications. AWS SSO can be flexibly configured to run alongside or replace AWS account access management via AWS IAM. To get started with AWS SSO, please see the AWS SSO product page and service documentation.

    You can download the AWS SOC reports in AWS Artifact. To learn more, you can go to the AWS Services in Scope by Compliance Program webpage to see a full list of services covered by each compliance program and read our blog post about the Fall 2021 SOC reports.

    » Amazon EC2 Auto Scaling Now Supports Predictive Scaling with Custom Metrics

    Posted On: Nov 24, 2021

    With Amazon EC2 Auto Scaling’s new predictive scaling policy, you can now use custom metrics to predict the EC2 instance capacity needed by an Auto Scaling group. Predictive scaling proactively increases the capacity of an Auto Scaling group to meet predicted demand. For workloads that experience recurring, steep demand changes, predictive scaling can help improve your application’s responsiveness without having to overprovision capacity, resulting in lower EC2 costs. Custom metrics are useful when the predefined metrics (CPU Utilization, Network I/O, and ALB Request Count) are not sufficient to capture the load on your application. Previously, you could only use custom metrics with step scaling and target tracking, but you can now use them with predictive scaling as well.

    For example, predictive scaling can now be configured to scale based on an Amazon CloudWatch metric from another AWS service that represents your application’s load—like the number of messages in an Amazon Simple Queue Service (SQS) queue—or based on a custom CloudWatch metric specific to your application—like the number of user sessions served. Predictive scaling now also supports CloudWatch Metric Math Expressions, enabling you to easily create custom metrics from existing ones. For example, if the Auto Scaling group processes tasks from multiple SQS queues, you can create a custom metric that represents the total messages across queues by using a simple SUM expression, saving you the effort and cost of creating another CloudWatch metric. You can also use Metric Math expressions to aggregate metrics across Auto Scaling groups, for example in Blue-Green deployment scenarios.
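
    As a hedged sketch of the SQS example (the names, target value, and scaling metric are illustrative; consult the documentation for the full set of required metric specifications), a predictive scaling policy with a metric math load metric might look like:

    import boto3

    autoscaling = boto3.client("autoscaling")

    def queue_query(query_id, queue_name):
        # Visible-message count for one SQS queue.
        return {
            "Id": query_id,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/SQS",
                    "MetricName": "ApproximateNumberOfMessagesVisible",
                    "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
                },
                "Stat": "Sum",
            },
            "ReturnData": False,
        }

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="worker-asg",
        PolicyName="predictive-queue-load",
        PolicyType="PredictiveScaling",
        PredictiveScalingConfiguration={
            "MetricSpecifications": [
                {
                    "TargetValue": 100.0,
                    # Load metric: total messages across two queues,
                    # combined with a metric math expression.
                    "CustomizedLoadMetricSpecification": {
                        "MetricDataQueries": [
                            queue_query("q1", "queue-a"),
                            queue_query("q2", "queue-b"),
                            {"Id": "load", "Expression": "q1 + q2"},
                        ]
                    },
                    # Scaling metric: how hard the group is working.
                    "CustomizedScalingMetricSpecification": {
                        "MetricDataQueries": [
                            {
                                "Id": "scaling",
                                "MetricStat": {
                                    "Metric": {
                                        "Namespace": "AWS/EC2",
                                        "MetricName": "CPUUtilization",
                                        "Dimensions": [
                                            {
                                                "Name": "AutoScalingGroupName",
                                                "Value": "worker-asg",
                                            }
                                        ],
                                    },
                                    "Stat": "Average",
                                },
                            }
                        ]
                    },
                }
            ],
            # Start in forecast-only mode to validate forecasts before
            # letting the policy actually scale the group.
            "Mode": "ForecastOnly",
        },
    )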

    Predictive scaling is available through AWS Command Line Interface (CLI), EC2 Auto Scaling Management Console, and AWS SDKs in all public AWS Regions. To learn more, visit the predictive scaling page in the EC2 Auto Scaling documentation.

    » Amazon Managed Grafana adds support for Amazon Athena and Amazon Redshift data sources and Geomap visualization

    Posted On: Nov 24, 2021

    Amazon Managed Grafana announces new data source plugins for Amazon Athena and Amazon Redshift, enabling customers to query, visualize, and alert on their Athena and Redshift data from Amazon Managed Grafana workspaces. Amazon Managed Grafana now also supports CloudFlare, Zabbix, and Splunk Infrastructure Monitoring data sources as well as the Geomap panel visualization and open source Grafana version 8.2.

    With the new Amazon Athena data source, customers can now connect to, query, and analyze their Amazon Simple Storage Service (Amazon S3) data using standard SQL directly from Amazon Managed Grafana workspaces. Customers can also leverage the default dashboard that comes with the Amazon Athena plugin to query their AWS Cost and Usage Reports and visualize their AWS spend. Using the new Amazon Redshift data source, customers can now create dashboards and alerts in their Amazon Managed Grafana workspaces to analyze their structured and semi-structured data across data warehouses, operational databases, and data lakes. The Amazon Redshift plugin also comes with a default dashboard out-of-the-box that makes it easy for customers to get started with monitoring the health and performance of their Redshift clusters. 

    Customers can now also use the Geomap panel visualization to visualize their geospatial data in a map view. Customers can configure multiple overlay styles to visually represent important location-based characteristics of their data, such as the heatmap overlay to cluster data points for visualizing hotspot locations with high data densities. The Zabbix and CloudFlare data sources are now available on Amazon Managed Grafana, enabling customers to visualize Zabbix metrics and connect to their CloudFlare account to monitor their DNS traffic by geography, latency, response code, query type, and hostname. With the Splunk Infrastructure Monitoring data source, available with a Grafana Enterprise License, customers can now also visualize Splunk Infrastructure Monitoring data directly from their Managed Grafana workspaces. Existing and new Amazon Managed Grafana workspaces are now automatically upgraded to Grafana version 8.2, with no action required from customers.

    Amazon Managed Grafana is a fully managed service that takes care of the provisioning, setup, scaling, and maintenance of Grafana servers and is generally available in the following Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Seoul). To get started with creating a workspace, visit the AWS Console or check out the Amazon Managed Grafana user guide for detailed documentation. To learn more, visit the Amazon Managed Grafana product page, pricing page, and AWS Observability Recipes page for getting-started templates.

    » Amazon QuickSight launches versioning in datasets

    Posted On: Nov 24, 2021

    Amazon QuickSight now supports dataset versioning, which allows dataset owners to understand historical changes within a dataset, preview a specific version, or revert to a previous version if needed. Dataset versions can be viewed, tracked, and switched between via the UI. Dataset versioning gives dataset authors the confidence to experiment with their content, knowing that older versions remain available and can easily be restored when required.

    Dataset versioning is available in Amazon QuickSight Standard and Enterprise Editions in all QuickSight regions - US East (N. Virginia and Ohio), US West (Oregon), Canada, Sao Paulo, Europe (Frankfurt, Ireland and London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), and AWS GovCloud (US-West). For further details, visit here.

    » AWS Database Migration Service now supports Azure SQL Managed Instance as a source

    Posted On: Nov 24, 2021

    AWS Database Migration Service (AWS DMS) expands functionality by adding support for Azure SQL Managed Instance as a source. Using AWS DMS, you can now migrate data live from Azure SQL Managed Instance to any valid supported target with minimal downtime.

    To learn more about the new source endpoint, see Using a Microsoft SQL Server database as a source for AWS DMS.

    For regional availability, please refer to the AWS Region Table.

    » Amazon Connect Customer Profiles now offers Identity Resolution to consolidate similar profiles

    Posted On: Nov 24, 2021

    Amazon Connect Customer Profiles now offers Identity Resolution, which is designed to automatically detect similar customer profiles by comparing name, email address, phone number, date of birth, and address. For example, two or more profiles with spelling mistakes, such as "John Doe" and "Jhn Doe," can be detected as belonging to the same customer "John Doe" using clustering and matching machine learning (ML) algorithms. Once a group of profiles is detected to be similar, admins can configure how the profiles should be merged together by setting up consolidation rules through the AWS Management Console or APIs.

    At the moment of contact, the unified profile is presented to the IVR (interactive voice response) through the Customer Profiles Flow Block or to the contact center agent through the Customer Profiles agent application. A unified profile saves an agent the time and effort of scanning through multiple similar records to identify and service a customer.

    Amazon Connect Customer Profiles Identity Resolution is available in Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Canada (Central), US West (Oregon), and US East (N. Virginia). To get started with Identity Resolution, read our launch blog. To learn more about Amazon Connect Customer Profiles, please visit the Customer Profiles website, API reference guide, admin guide, or Amazon Connect website.

    » AWS IoT SiteWise announces three new enhancements that make it easier to ingest equipment data to the cloud

    Posted On: Nov 24, 2021

    Today, we are announcing three new enhancements for AWS IoT SiteWise that make it easier for customers to collect data from industrial equipment at scale. The new enhancements reduce the number of steps required to ingest equipment data to the cloud, and add flexibility for customers modeling their physical operations using AWS IoT SiteWise asset models and assets.

    Customers can now ingest equipment data without needing to create asset models and assets in advance. Previously, customers needed to create asset models and assets, as well as assign data streams to an asset before ingesting data to the cloud. Now, customers can ingest equipment data into AWS IoT SiteWise without defining asset models and creating assets for industrial equipment. This allows users to visualize data streams in the AWS IoT SiteWise console and use that information to decide how to best map data streams to assets.
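
    Concretely, a value can now be ingested against a property alias alone, with no asset model or asset created beforehand; the alias below is illustrative:

        import time

        import boto3

        sitewise = boto3.client("iotsitewise")

        # Ingest a measurement into a data stream identified only by its alias.
        sitewise.batch_put_asset_property_value(
            entries=[{
                "entryId": "turbine7-temp-1",
                "propertyAlias": "/windfarm/3/turbine/7/temperature",
                "propertyValues": [{
                    "value": {"doubleValue": 38.2},
                    "timestamp": {"timeInSeconds": int(time.time()), "offsetInNanos": 0},
                    "quality": "GOOD",
                }],
            }]
        )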

    AWS IoT SiteWise now also offers the flexibility to re-assign data streams from one asset to another. Customers can use this flexibility to evolve the modeling of their production site as it changes over time. To learn more, visit the Data Streams page in our user guide.

    AWS IoT SiteWise has also released new enhancements to the access control for your equipment data. You can now grant a user access to data streams associated with one production site, and restrict access to data streams associated with another production site within the same AWS account. To learn more, visit How AWS IoT SiteWise works with IAM in our User Guide.

    These enhancements are enabled by default for all new AWS IoT SiteWise customers. Existing customers can opt into the feature through the AWS IoT SiteWise console after reviewing the prerequisites section on Managing data streams in our User Guide.

    AWS IoT SiteWise is a managed service to collect, model, analyze and visualize data from industrial equipment at scale. To learn more, please visit the AWS IoT SiteWise website or the developer guide.

    » Amazon QuickSight adds new Exasol data connector

    Posted On: Nov 24, 2021

    Amazon QuickSight now supports connectivity to Exasol, a high-performance, in-memory, MPP database designed for analytics. QuickSight’s new data connector allows business users to directly connect, analyze, and report on data in Exasol using a live connection, or to import data from Exasol into QuickSight’s SPICE in-memory engine for scaling access to thousands of users.
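
    For API users, a hedged sketch of registering an Exasol data source with boto3 (the account ID, host, and credentials are placeholders, and the ExasolParameters shape is an assumption based on this launch):

        import boto3

        qs = boto3.client("quicksight")

        # Register an Exasol cluster as a QuickSight data source.
        qs.create_data_source(
            AwsAccountId="111122223333",
            DataSourceId="exasol-demo",
            Name="Exasol demo",
            Type="EXASOL",
            DataSourceParameters={"ExasolParameters": {"Host": "exasol.example.com", "Port": 8563}},
            Credentials={"CredentialPair": {"Username": "qs_user", "Password": "REPLACE_ME"}},
        )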

    Exasol Connector is now available in Amazon QuickSight Standard and Enterprise Editions in all QuickSight regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (Sao Paulo), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and AWS GovCloud (US-West) Regions. For further details, visit here.

    » Amazon Redshift announces native support for spatial GEOGRAPHY datatype

    Posted On: Nov 24, 2021

    Amazon Redshift support for the GEOGRAPHY data type is now available for spatial analytics. The GEOGRAPHY data type is used in queries requiring higher-precision results for spatial data with geographic features that can be represented with a spheroid model of the Earth and referenced using latitude and longitude as the spatial coordinate system.

    With this announcement, Amazon Redshift now supports the two major spatial data types, GEOMETRY and GEOGRAPHY, to perform the vast majority of spatial analytics, opening up support for many more third-party spatial and GIS applications. In addition to the GEOGRAPHY data type, Amazon Redshift also released support for new spatial functions such as ST_Intersection, ST_Centroid, ST_Transform, and ST_IsRing.
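
    As a sketch of what GEOGRAPHY usage looks like from Python (connection details are placeholders, and the PostGIS-style function names are assumptions that should be verified against the Redshift spatial documentation):

        import redshift_connector  # Amazon Redshift Python driver

        conn = redshift_connector.connect(
            host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
            database="dev", user="awsuser", password="REPLACE_ME",
        )
        cur = conn.cursor()

        # Store a point on the spheroid and measure a distance; for GEOGRAPHY
        # inputs the distance is returned in meters.
        cur.execute("CREATE TABLE IF NOT EXISTS ports (name VARCHAR(64), loc GEOGRAPHY)")
        cur.execute("INSERT INTO ports VALUES ('Seattle', ST_GeogFromText('POINT(-122.33 47.61)'))")
        conn.commit()
        cur.execute("SELECT name, ST_Distance(loc, ST_GeogFromText('POINT(-74.00 40.71)')) FROM ports")
        print(cur.fetchall())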

    Geography support is now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. For more information or to get started with Amazon Redshift spatial analytics, see the documentation or the tutorial.

    » Announcing General Availability of Enterprise On-Ramp

    Posted On: Nov 24, 2021

    Amazon Web Services (AWS) has announced the general availability of Enterprise On-Ramp, a new Support tier between the existing Business and Enterprise Support tiers, to help customers that are starting their cloud journey and need expert guidance to grow and optimize on the cloud. With Enterprise On-Ramp, customers can solve cloud-related challenges with access to AWS experts by phone or live chat, and can share their screen to speed issue resolution and eliminate the frustration of back-and-forth emails.

    Enterprise On-Ramp includes 24/7 access to Support engineers with a 30-minute case response time for high-severity issues. Customers also get access to consultative architectural guidance, operations reviews, cost optimization recommendations, and event management support delivered by a pool of experts. Enterprise On-Ramp provides access to Support APIs, Support automated workflows, and full access to AWS Trusted Advisor best practice checks, along with concierge support to help customers understand their bill and cost allocations. Enterprise On-Ramp is priced at the greater of $5,500 or 10% of monthly AWS usage.

    Enterprise On-Ramp is available immediately in all public regions, with customer engagement in English and additional language support in Mandarin and Korean. Additional languages will be added in the future.

    To learn more about Enterprise On-Ramp, visit its product page. To learn more about pricing, visit the pricing page.

    » Announcing new performance enhancements for Amazon Redshift data sharing

    Posted On: Nov 24, 2021

    Amazon Redshift data sharing allows you to share live, transactionally consistent data across different Redshift clusters without the complexity and delays associated with data copies and data movement. Data sharing now adds several performance enhancements, including result caching and concurrency scaling, allowing you to support a broader set of analytics applications and meet critical performance SLAs when querying shared data.

    Data sharing allows you to rapidly onboard new analytics workloads and provision them with flexible compute resources to meet individual workload-specific performance SLAs. With the new performance enhancements, data sharing makes it easier to support analytics that require low latency and high concurrency, such as dashboarding applications, using optimizations that minimize the amount of data that needs to be accessed by the consumer clusters. Result caching helps reduce query execution time and improve system performance by caching the results of certain types of queries in memory. When a user submits a query, Amazon Redshift checks the results cache for a valid, cached copy of the query results, making it possible to offer sub-second response times. With the concurrency scaling feature, you can support virtually unlimited concurrent users and concurrent queries on shared data, with consistently fast query performance.
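
    To make the flow concrete, a hedged sketch of the documented data sharing SQL, issued here through the redshift_connector driver (endpoints and the namespace GUIDs are placeholders):

        import redshift_connector

        # On the producer cluster: publish objects and grant a consumer namespace access.
        producer = redshift_connector.connect(
            host="producer.abc123.us-east-1.redshift.amazonaws.com",
            database="dev", user="awsuser", password="REPLACE_ME",
        )
        cur = producer.cursor()
        cur.execute("CREATE DATASHARE salesshare")
        cur.execute("ALTER DATASHARE salesshare ADD SCHEMA public")
        cur.execute("ALTER DATASHARE salesshare ADD TABLE public.sales")
        cur.execute("GRANT USAGE ON DATASHARE salesshare TO NAMESPACE '<consumer-namespace-guid>'")
        producer.commit()

        # On the consumer cluster, the share is then surfaced as a database, e.g.:
        #   CREATE DATABASE sales_db FROM DATASHARE salesshare OF NAMESPACE '<producer-namespace-guid>';
        # Queries against sales_db now benefit from result caching and concurrency scaling.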

    The new performance enhancements are available in all regions where data sharing is available. Learn more about the data sharing capability on the feature page and refer to the documentation. Refer to enabling workload isolation and supporting multi-tenancy and data as a service to learn more about use cases.

    » AWS launches NAT64 and DNS64 capabilities to enable communication between IPv6 and IPv4 services

    Posted On: Nov 24, 2021

    Starting today, your IPv6 AWS resources in Amazon Virtual Private Cloud (VPC) can use NAT64 (on AWS NAT Gateway) and DNS64 (on Amazon Route 53 Resolver) to communicate with IPv4 services. As you transition your workloads to IPv6 networks, they will continue to need access to IPv4 networks and services. With NAT64 and DNS64, your IPv6 resources can communicate with IPv4 services within the same VPC or connected VPCs, your on-premises networks, or the Internet.

    A NAT Gateway enables instances in a private subnet to connect to services outside that subnet using the NAT Gateway’s IP address, and Route 53 Resolver is a DNS server that is available by default in all Amazon VPCs. To enable your IPv6 workloads to communicate with IPv4 networks, you can enable DNS64 on the subnet containing your IPv6 services and route the subnet’s traffic destined for IPv4 services through a NAT Gateway; no separate configuration is required on the NAT Gateway. The DNS64 service synthesizes and returns AAAA records for IPv4 destinations, and the NAT Gateway performs the translation on the traffic to allow IPv6 services in your subnet to access IPv4 services outside that subnet. This way, by using both DNS64 and NAT64, your IPv6 resources in the subnet can communicate with IPv4 services anywhere outside this subnet.
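
    One plausible boto3 sequence for this setup (subnet, route table, and NAT Gateway IDs are placeholders; 64:ff9b::/96 is the well-known prefix from which DNS64 synthesizes AAAA records):

        import boto3

        ec2 = boto3.client("ec2")

        # 1) Enable DNS64 on the subnet hosting the IPv6 services.
        ec2.modify_subnet_attribute(
            SubnetId="subnet-0123456789abcdef0",
            EnableDns64={"Value": True},
        )

        # 2) Route traffic for the NAT64 prefix through the NAT Gateway so
        #    translated IPv4-destined packets reach IPv4 services.
        ec2.create_route(
            RouteTableId="rtb-0123456789abcdef0",
            DestinationIpv6CidrBlock="64:ff9b::/96",
            NatGatewayId="nat-0123456789abcdef0",
        )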

    NAT64 on NAT Gateway and DNS64 on Route 53 Resolver are available in the following AWS Regions today: US East (N. Virginia), US West (Oregon), and US West (N. California). To learn more about VPC NAT Gateway and DNS64 on Route 53 Resolver, please visit our documentation.

    » Now execute python files and notebooks from another notebook in EMR Studio

    Posted On: Nov 24, 2021

    EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in R, Python, Scala, and PySpark. Today, we are excited to announce two new capabilities in EMR Studio. First, you can now more easily execute Python scripts directly from EMR Studio Notebooks. Second, you can execute other dependent Jupyter notebooks directly from a notebook in EMR Studio. Previously, both of these capabilities required manually copying files from EMR Studio to the EMR cluster.

    An EMR Studio Workspace provides a fully managed serverless Jupyter instance in the cloud, which comes with a local file system where you can author, store, and organize your notebooks and files. Data scientists often have Python scripts and notebooks that need to be invoked from other notebooks. For example, a Python script doing generic data quality checks may be used across multiple notebooks. Previously, you needed to manually copy these files from the EMR Studio Workspace’s local storage to the cluster in order to execute them. You can now use the %mount_workspace_dir Jupyter magic command to mount your EMR Studio Workspace directory to an EMR cluster. This allows notebooks running on EMR clusters to execute Python files or invoke other notebooks in your local Workspace without manually copying these files or logging into the cluster. In addition, we have also added a command, %generate_s3_download_url, to download files from Amazon S3. You can use this capability to download a data file from a notebook to analyze it locally, for example, to further analyze it in Excel. Without this capability, you had to navigate to the Amazon S3 console to download files from your S3 bucket. Both of the above Jupyter magic commands are available in the EMR Notebooks IPython Magics package.
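
    In a Studio notebook, the flow looks roughly like the following (paths and the bucket are illustrative; see the IPython Magics package documentation for the exact argument conventions):

        # Cell 1: mount the Workspace's local directory onto the attached EMR cluster.
        %mount_workspace_dir .

        # Cell 2: run a dependent Python script (or another notebook) from the mounted Workspace.
        %run ./scripts/data_quality_checks.py

        # Cell 3: generate a presigned URL to download a result file from Amazon S3.
        %generate_s3_download_url s3://my-bucket/reports/output.xlsx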

    EMR Studio is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (Sao Paulo) Regions.

    To learn more about this feature, see our documentation here. To learn more about using this feature, see our sample notebook here.

    » The Amazon Chime SDK now offers enhanced echo reduction

    Posted On: Nov 24, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, screen-sharing, and messaging capabilities to their web or mobile applications. The Amazon Chime SDK now offers machine learning (ML) based echo reduction to help improve audio experiences. Acoustic echoes disrupt meetings or conference calls when the sound played by the loudspeaker is picked up by the microphone and circulates back into the call. The new ML-based echo reduction capability is designed to reduce acoustic echoes and preserve voice quality during double-talk conditions, when two or more people speak at the same time.

    The Amazon Chime SDK echo reduction also includes Amazon Voice Focus, the technology developed to provide noise reduction in the Amazon Chime SDK. Amazon Voice Focus uses machine learning and models of speech and hearing to reduce background noises such as fans, lawnmowers, and barking dogs as well as foreground noises like typing and shuffling papers — so that noise doesn’t detract from conversations and engagements. Developers can configure their meetings with echo reduction capabilities via the CreateMeeting API from the Amazon Chime SDK. After configuration, developers additionally have to enable the feature at the client level by applying the appropriate ML model for echo reduction when attendees join the meeting.
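
    On the server side, that configuration is a single flag on meeting creation; a minimal boto3 sketch (the external meeting ID is a placeholder, and clients must still apply the echo reduction model when attendees join):

        import uuid

        import boto3

        meetings = boto3.client("chime-sdk-meetings")

        # Request echo reduction availability for this meeting's attendees.
        meeting = meetings.create_meeting(
            ClientRequestToken=str(uuid.uuid4()),
            MediaRegion="us-east-1",
            ExternalMeetingId="demo-meeting",
            MeetingFeatures={"Audio": {"EchoReduction": "AVAILABLE"}},
        )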

    The processing for Amazon Chime SDK echo reduction is performed in real time using WebAssembly (WASM) and single instruction multiple data (SIMD) for efficient operation on most modern computers and browsers. This offering is currently available in the Amazon Chime SDK for JavaScript.

    To learn more about the Amazon Chime SDK, please see the following resources:

  • Amazon Chime SDK website
  • Amazon Chime SDK for JavaScript
  • Amazon Chime SDK Developer Guide
  • Amazon Chime SDK Voice Focus API

    » Announcing AWS PrivateLink Support for Amazon Translate

    Posted On: Nov 24, 2021

    Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink so you can securely initiate API calls to Amazon Translate from within your VPC without using public IPs. AWS PrivateLink provides private connectivity between VPCs and AWS services without ever leaving the Amazon network, significantly simplifying your internal network architecture. You no longer need to use an Internet Gateway, Network Address Translation (NAT) devices, or firewall proxies to connect to Amazon Translate.

    Using AWS PrivateLink enables you to privately access Amazon Translate APIs from your VPC in a scalable manner by using interface VPC endpoints. A VPC endpoint is an elastic network interface in your subnet with a private IP address that serves as the entry point for all Amazon Translate API calls.
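
    As a sketch, creating the interface endpoint is standard VPC endpoint setup (the IDs are placeholders):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Create an interface endpoint for Amazon Translate inside the VPC.
        ec2.create_vpc_endpoint(
            VpcEndpointType="Interface",
            VpcId="vpc-0123456789abcdef0",
            ServiceName="com.amazonaws.us-east-1.translate",
            SubnetIds=["subnet-0123456789abcdef0"],
            SecurityGroupIds=["sg-0123456789abcdef0"],
            PrivateDnsEnabled=True,  # keeps SDK calls to the regional endpoint private
        )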

    AWS PrivateLink support for Amazon Translate is now available in all commercial AWS regions. To learn more, please see our Amazon Translate documentation.

    » AWS Proton introduces Git management of infrastructure as code templates

    Posted On: Nov 24, 2021

    AWS Proton now allows customers to sync their Proton templates from a git repository. Platform teams can create AWS Proton templates based on AWS CloudFormation and Terraform templates uploaded to a git repository. AWS Proton is designed to automatically sync and create a new version when changes are made and committed to the git repository. With this new feature, platform and development teams can eliminate manual steps and reduce the chance of human error.

    AWS Proton is the first fully managed application deployment service for containers and serverless. Platform teams can use AWS Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates in a curated self-service interface for developers. The self-service interface provides developers access to approved infrastructure to build and deploy their applications.

    To enable template syncing, create a new AWS Proton template and indicate from which repository and folder it will be synced. Whenever you want to create a new version, simply commit the updates to the repository. By design, AWS Proton automatically picks up the changes and generates a new minor version. Generate a new major version by creating a new folder for the same template in your repository.
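
    A hedged boto3 sketch of wiring a template to its repository folder (the names, repository, and branch are placeholders, and the parameter names follow our reading of the Proton CreateTemplateSyncConfig API):

        import boto3

        proton = boto3.client("proton")

        # Sync an environment template from a folder in a linked GitHub repository.
        proton.create_template_sync_config(
            templateName="my-env-template",
            templateType="ENVIRONMENT",
            repositoryProvider="GITHUB",
            repositoryName="my-org/proton-templates",
            branch="main",
            subdirectory="environment-templates/my-env-template",
        )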

    To start syncing your AWS Proton templates from git, go here.

    » AWS Proton now supports Terraform Open Source for infrastructure provisioning

    Posted On: Nov 24, 2021

    AWS Proton now supports the definition of infrastructure in HashiCorp Configuration Language (HCL) and the provisioning of infrastructure using Terraform Open Source through a git-based workflow. Platform teams define AWS Proton templates using Terraform modules, and AWS Proton leverages the customer-managed Terraform automation to provision or update the infrastructure. Customers can use Terraform as their infrastructure definition and provisioning tool, and AWS Proton will ensure that modules are used consistently and kept up to date.

    AWS Proton is the first fully managed application deployment service for containers and serverless. Platform teams can use AWS Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates in a curated self-service interface for developers. The self-service interface provides developers access to approved infrastructure to build and deploy their applications.

    To use AWS Proton with Terraform Open Source, start by creating AWS Proton templates for environments and services using Terraform modules. Next, select a configuration repository and choose to provision infrastructure based on newly committed modules. When development teams create or update a service that uses Terraform, AWS Proton will render the modules that make up the service and make a PR to the corresponding repository. Once your workflow triggers infrastructure to be provisioned, it reports the status back to AWS Proton upon completion. Developers can get consistent infrastructure provisioned for their services without having to assemble and configure their Terraform modules. Platform teams can oversee and update infrastructure across multiple environments without having to review the code in several different repositories and folders.

    To learn more about how to use AWS Proton with Terraform, read here.

    » Amazon DynamoDB now helps you meet regulatory compliance and business continuity requirements through enhanced backup features in AWS Backup

    Posted On: Nov 24, 2021

    Amazon DynamoDB now helps you meet regulatory compliance and business continuity requirements through enhanced backup features, including copying on-demand backups cross-account and cross-Region, cost allocation tagging for backups, and transitioning backups to cold storage. In addition, backups managed through AWS Backup are now stored in the AWS Backup vault, which allows you to encrypt and secure your backups by using an AWS Key Management Service (KMS) key that is independent of your DynamoDB table encryption key.

    You can use on-demand backups to create full backups of your DynamoDB tables with a single click, and with zero impact on performance or availability. Previously, creating backup copies in different AWS accounts and Regions required building and managing custom scripts. You can now use AWS Backup to easily define cross-account and cross-Region backup copy preferences, enabling you to support your data protection requirements. You can also tag backups to simplify cost allocation and define cold storage lifecycle preferences to reduce your backup costs. To use these enhanced backup features, you simply need to opt in to have AWS Backup manage your DynamoDB backups via the AWS Management Console or AWS Backup APIs.
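
    The opt-in itself is a small API call; a hedged sketch (the ResourceTypeManagementPreference flag is our reading of how AWS Backup takes over DynamoDB backup management; verify against the AWS Backup documentation):

        import boto3

        backup = boto3.client("backup")

        # Opt in so AWS Backup manages DynamoDB backups with the enhanced features.
        backup.update_region_settings(
            ResourceTypeOptInPreference={"DynamoDB": True},
            ResourceTypeManagementPreference={"DynamoDB": True},
        )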

    To learn more about this feature, see On-Demand Backup and Restore for DynamoDB. For more information about region availability and pricing, see AWS Backup pricing.

    » Elastic Fabric Adapter now supports new instance sizes within supported Amazon EC2 instance types

    Posted On: Nov 24, 2021

    Elastic Fabric Adapter (EFA) now supports new instance sizes within the Amazon EC2 compute-optimized, GPU, and dense SSD storage instance types that support EFA. Until now, EFA could be enabled only for select bare-metal instances or for the largest instance size that supports EFA. Starting today, you can associate EFA with additional sizes within Amazon C5, G4, and I3 instance types. By enabling EFA for smaller instance sizes that match the performance requirements of your application, you can lower costs.
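
    For illustration, attaching EFA is the same launch-time option used on the largest sizes; the AMI, subnet, and security group IDs below are placeholders:

        import boto3

        ec2 = boto3.client("ec2")

        # Launch an EFA-capable size with an EFA network interface attached.
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",
            InstanceType="g4dn.8xlarge",
            MinCount=1, MaxCount=1,
            NetworkInterfaces=[{
                "DeviceIndex": 0,
                "InterfaceType": "efa",
                "SubnetId": "subnet-0123456789abcdef0",
                "Groups": ["sg-0123456789abcdef0"],  # must allow all traffic to and from itself
            }],
        )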

    EFA is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication at scale on AWS. To learn more about EFA, please visit the EFA documentation. To learn more about how EFA enables uncompressed live video production on AWS with the g4dn.8xlarge instance and AWS Cloud Digital Interface (CDI), click here.

    » EC2 Image Builder enables sharing Amazon Machine Images (AMIs) with AWS Organizations and Organization Units

    Posted On: Nov 24, 2021

    Now on EC2 Image Builder, customers can share their Amazon Machine Images (AMIs) with AWS Organizations and Organizational Units (OUs) in the image distribution phase of their build process. As their organization structure changes, customers no longer have to manually update AMI permissions for individual AWS accounts in their organization. Customers can create OUs within AWS Organizations and manage AMI permissions for AWS accounts within those OUs.

    Customers can automate AMI sharing by adding their AWS Organization details in the distribution settings of the image build pipeline. When an AWS Organization is added to the pipeline distribution settings, EC2 Image Builder will share the new AMIs from the build pipeline to the specified AWS Organization.
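
    For example, a hedged boto3 sketch of distribution settings that share output AMIs with an entire OU (the region and ARN are placeholders):

        import boto3

        ib = boto3.client("imagebuilder")

        # Distribution configuration granting AMI launch permission to an OU.
        ib.create_distribution_configuration(
            name="share-with-org",
            distributions=[{
                "region": "us-east-1",
                "amiDistributionConfiguration": {
                    "launchPermission": {
                        "organizationalUnitArns": [
                            "arn:aws:organizations::111122223333:ou/o-example/ou-example",
                        ],
                    },
                },
            }],
        )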

    Get started from the EC2 Image Builder console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation. You can find information about AMI sharing on EC2 Image Builder with AWS Organizations on the service documentation page. Learn more about AMI sharing in EC2 here.

    You can also learn about upcoming EC2 Image Builder features on the public roadmap.

    » New AWS Managed Templates for IoT Jobs enable customers to deploy remote operations to IoT fleets with no code

    Posted On: Nov 24, 2021

    AWS Managed Templates for IoT Jobs, a new feature of AWS IoT Device Management, now gives you the ability to deploy common remote operations to fleets of IoT devices directly from the AWS IoT Console, with no incremental code, and in a standardized manner. Instead of having to manually define your remote operations in a JSON job document, you can select from a range of pre-built remote actions, provide relevant inputs, and quickly deploy them to your IoT fleets.

    You can use AWS Managed Templates for IoT Jobs to deploy seven frequently used remote operations: reboot-device, download-files, install-applications, remove-applications, start-application, stop-application, and restart-application. Jobs created with AWS Managed Templates currently work by default on all hardware/software platforms running the AWS IoT Device Client, namely microprocessor-based IoT devices (x86_64, ARM) and common Linux software environments (Debian, Ubuntu, RHEL). Alternatively, if you choose to write your own device-side code, or customize device code for your hardware platforms, you can use our library of handlers on GitHub.
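
    For example, a job can be created directly from a managed template; the template ARN below is illustrative (available templates can be enumerated with list_managed_job_templates()):

        import boto3

        iot = boto3.client("iot")

        # Deploy the reboot-device operation to a thing group.
        iot.create_job(
            jobId="reboot-fleet-001",
            targets=["arn:aws:iot:us-east-1:111122223333:thinggroup/my-fleet"],
            jobTemplateArn="arn:aws:iot:us-east-1::jobtemplate/AWS-Reboot:1.0",  # illustrative ARN
        )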

    You will only be charged for job executions deployed to your IoT devices using AWS Managed Templates (see pricing). This feature is available in all AWS Regions where AWS IoT Device Management Jobs is available. Read our documentation to learn how to create a job with an AWS Managed Template, and how this capability works together with the AWS IoT Device Client. Get started with this feature on the AWS IoT Console.

    » AWS WAF adds support for Captcha

    Posted On: Nov 24, 2021

    AWS today announced AWS WAF Captcha, which helps block unwanted bot traffic by requiring users to successfully complete challenges before their web requests are allowed to reach AWS WAF protected resources. Captcha is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart and is commonly used to distinguish between robotic and human visitors to prevent activity like web scraping, credential stuffing, and spam. You can configure AWS WAF rules to require WAF Captcha challenges to be solved for specific resources that are frequently targeted by bots, such as login, search, and form submissions. You can also require WAF Captcha challenges for suspicious requests based on the rate, attributes, or labels generated from AWS Managed Rules, such as AWS WAF Bot Control or the Amazon IP Reputation list. WAF Captcha challenges are simple for humans while remaining effective against bots. WAF Captcha includes an audio version and is designed to meet WCAG accessibility requirements.

    You can start using Captcha in AWS WAF by creating or navigating to a rule statement and selecting Captcha as the action type. When a request matches a rule statement and has WAF Captcha as the action type, users will be presented with a page delivered by AWS WAF, instructing them to complete a Captcha challenge before they can proceed. Once a user successfully completes a Captcha challenge, the originally requested resource is requested again automatically. Users that complete challenges will not be required to complete additional challenges for a period of time that you can customize. For detailed information, see the AWS WAF developer guide.
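
    Sketched in wafv2 API terms, a rule that challenges requests to a login path might look like the following (the rest of the web ACL is omitted, and the path match is illustrative):

        # Rule fragment for wafv2 create_web_acl / update_web_acl.
        captcha_rule = {
            "Name": "captcha-on-login",
            "Priority": 0,
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": b"/login",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                },
            },
            "Action": {"Captcha": {}},  # present a Captcha challenge on match
            "CaptchaConfig": {"ImmunityTimeProperty": {"ImmunityTime": 300}},  # seconds before re-challenge
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CaptchaOnLogin",
            },
        }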

    AWS WAF Captcha is now available in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), South America (Sao Paulo), and Asia Pacific (Singapore) AWS Regions and supports Application Load Balancer, Amazon API Gateway, and AWS AppSync resources. We expect to launch AWS WAF Captcha in other commercial AWS Regions and AWS GovCloud (US) Regions and to add support for Amazon CloudFront resources over the next few days. WAF Captcha usage is billed based on the number of WAF Captcha challenges attempted, in addition to standard AWS WAF service charges. See the AWS WAF Pricing page for more details.

    Modified 12/9/2021 – In an effort to ensure a great experience, expired links in this post have been updated or removed from the original post.

    » AWS Database Migration Service now supports Google Cloud SQL for MySQL as a source

    Posted On: Nov 24, 2021

    AWS Database Migration Service (AWS DMS) has expanded functionality by adding support for Google Cloud SQL for MySQL as a source. Using AWS DMS, you can now perform live migrations from Google Cloud SQL for MySQL to any AWS DMS supported targets.

    To learn more, see Using a MySQL-compatible database as a source.
    For AWS DMS regional availability, please refer to the AWS Region Table.

    » AWS App Runner supports GitHub Actions to build and deploy applications

    Posted On: Nov 24, 2021

    AWS App Runner now supports GitHub Actions to build and deploy applications. GitHub Actions provide a way to implement complex orchestration and CI/CD functionality directly in GitHub by initiating a workflow on any GitHub event. If you have your source code in a GitHub repository, you can use GitHub Actions to enable App Runner to build a Docker image based on the language runtime and to deploy your application based on the generated image. For supported runtimes on App Runner, refer to the documentation. If you already have a container image of your application in a GitHub repository, you can use GitHub Actions to directly use the image to deploy your application on App Runner.

    You can construct a custom workflow using GitHub Actions to easily deploy your applications to App Runner directly from GitHub. To learn more about using AWS App Runner through GitHub Actions, refer to the blog. To learn more about AWS App Runner, refer to the documentation.

    » Amazon Redshift delivers better cold query performance to Amazon Web Services China regions

    Posted On: Nov 24, 2021

    Improved cold query performance is now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD.

    With this improvement, Amazon Redshift can process queries up to 2x faster if and when queries need to be compiled. This improvement gives you better query performance when creating a new Redshift cluster, onboarding a new workload on an existing cluster, or running a query after a software update to an existing cluster. These query improvements are available in both China regions at no additional cost, and no additional action is needed to enable them on supported Redshift instance types (DS2, DC2, or RA3).

    With this update, query compilations are scaled to a serverless compilation service beyond the compute resources of the leader node of your cluster. When your mission-critical queries are submitted to Amazon Redshift, the cold query performance enhancement feature uses an elastic pan-China cache that is shared by both Beijing and Ningxia regions to store compiled objects to increase cache hits.

    Cold query performance improvement is now available in the China (Beijing), China (Ningxia), US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (Frankfurt), Europe (Ireland), Europe (London), and South America (São Paulo) regions.

    » Elastic Beanstalk supports AWS Graviton-based Amazon EC2 instance types

    Posted On: Nov 24, 2021

    Elastic Beanstalk now supports AWS Graviton-based Amazon Elastic Compute Cloud (Amazon EC2) instance types. AWS Graviton is an arm64-based processor built by Amazon that provides up to 40% better price-performance over a comparable x86-based processor. AWS Graviton on Elastic Beanstalk enables customers to benefit from the superior price-performance of arm64-based processors along with the ease-of-use of Elastic Beanstalk.

    Elastic Beanstalk provides safe deployment options helping customers efficiently test their workloads on Graviton processors and avoid downtime. Elastic Beanstalk also provides rich out-of-the-box logs to troubleshoot compatibility issues and enhanced health metrics such as request latency, CPU utilization, and target response time to monitor price-performance improvements.

    Use the Elastic Beanstalk console to deploy a new application to an Elastic Beanstalk environment configured with EC2 instances running arm64-based processors. To learn more about deploying your applications using the Elastic Beanstalk console, see the Elastic Beanstalk Developer Guide.

    You can also deploy a new application or update an existing application to run on arm64-based processors on Elastic Beanstalk using other interfaces: the Elastic Beanstalk Command Line Interface (EB CLI), AWS CLI, CloudFormation (CFN), Terraform, and the AWS Cloud Development Kit (AWS CDK). To learn more about deploying and managing your application using these interfaces, see Amazon EC2 instances for your Elastic Beanstalk environment in the Elastic Beanstalk Developer Guide. To learn more about Elastic Beanstalk, see the AWS Elastic Beanstalk Documentation.
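
    For instance, with boto3 an environment can be pinned to a Graviton instance type through the aws:ec2:instances namespace (the application, environment, and platform names are placeholders):

        import boto3

        eb = boto3.client("elasticbeanstalk")

        # Create an environment that runs on an arm64 (Graviton) instance type.
        eb.create_environment(
            ApplicationName="my-app",
            EnvironmentName="my-app-arm64",
            SolutionStackName="64bit Amazon Linux 2 v3.4.9 running Python 3.8",  # placeholder stack
            OptionSettings=[{
                "Namespace": "aws:ec2:instances",
                "OptionName": "InstanceTypes",
                "Value": "t4g.small",
            }],
        )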

    » Amazon Transcribe now supports automatic language identification for streaming transcriptions

    Posted On: Nov 23, 2021

    Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. Today, we are excited to announce automatic language identification for streaming transcriptions. Until now, you were required to manually identify the dominant language in order to use Transcribe streaming APIs. You can now simply start streaming and Transcribe will detect the dominant language from the speech signal and generate transcriptions in the identified language. 

    Live streaming transcription is used across industries in contact center applications, broadcast events, capturing meeting interactions, and e-learning. If you operate in a country with multiple official languages or across multiple regions, your audio streams can contain different languages. With a minimum of 3 seconds of audio, Transcribe can automatically detect the dominant language and generate a transcript without needing humans to specify the spoken language.

    You can use Amazon Transcribe automatic language identification for streaming transcriptions in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London) and South America (São Paulo). You can learn more by checking out the Amazon Transcribe documentation page or visit the AWS console to try it out.

    » Now prepare data and build models using TensorFlow 2.6 and PyTorch 1.8 in Amazon SageMaker Studio Notebooks

    Posted On: Nov 23, 2021

    Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). With a single click, data scientists and developers can quickly spin up SageMaker Studio Notebooks to interactively explore datasets and build ML models. The notebooks come pre-configured with deep learning environments for AWS-optimized TensorFlow and PyTorch to quickly get started with building models. Starting today you can access two new environments for TensorFlow 2.6 and PyTorch 1.8.

    Data preparation is a foundational step of any data science and ML workflow. Therefore, the new TensorFlow 2.6 and PyTorch 1.8 environments come built-in with the recently introduced capability to visually browse and connect to Amazon EMR clusters right from the SageMaker Studio Notebook. Thus, you can interactively explore, visualize and prepare petabyte-scale data using Spark, Hive and Presto on Amazon EMR and build ML models using the latest deep learning frameworks without leaving the notebook.

    These features are generally available in all AWS Regions where SageMaker Studio is available and there are no additional charges to use this capability. For complete information on pricing and regional availability, please refer to the SageMaker Studio pricing page. To learn more, see “Prepare Data at Scale with Studio Notebooks” in the SageMaker Studio Notebooks user guide.

    » Amazon Chime SDK meetings live transcription now supports content identification and custom language models

    Posted On: Nov 23, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, and screen share to their web and mobile applications. With live transcription, developers can include subtitles in meetings and create transcripts using Amazon Transcribe or Amazon Transcribe Medical. Using the service-side integration between the Amazon Chime SDK and your Amazon Transcribe account, application builders can now help identify and redact personally identifiable information (PII) and personal health information (PHI) from transcripts. Builders can also utilize custom language models to help improve transcription accuracy for their use cases.

    The transcript content and metadata are shared in real time with meeting participants for display during the meeting. They are also stored by media capture in the Amazon Simple Storage Service (Amazon S3) bucket of the developer’s choice for post-meeting processing. Developers can access all the streaming languages supported by Amazon Transcribe, as well as features such as custom vocabularies and vocabulary filters. Standard Amazon Transcribe and Amazon Transcribe Medical costs apply.
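
    A hedged sketch of starting transcription with PII identification and a custom language model (the meeting ID and model name are placeholders):

        import boto3

        chime = boto3.client("chime")

        # Start live transcription with PII identification enabled.
        chime.start_meeting_transcription(
            MeetingId="00000000-0000-0000-0000-000000000000",
            TranscriptionConfiguration={
                "EngineTranscribeSettings": {
                    "LanguageCode": "en-US",
                    "ContentIdentificationType": "PII",
                    "LanguageModelName": "my-domain-clm",  # optional custom language model
                },
            },
        )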

    To learn more about the Amazon Chime SDK and live transcription with Amazon Transcribe or Amazon Transcribe Medical, review the following resources:

    * Amazon Chime SDK
    * Amazon Transcribe 
    * Using live transcription in the Amazon Chime SDK Developer Guide

    » AQUA for Amazon Redshift launches in two additional AWS regions

    Posted On: Nov 23, 2021

    AQUA (Advanced Query Accelerator) for Amazon Redshift is now generally available in two additional AWS regions: Asia Pacific (Mumbai) and Europe (London).

    AQUA is a new distributed and hardware-accelerated cache that enables Amazon Redshift to run up to 10x faster than other enterprise cloud data warehouses by automatically boosting certain types of queries. AQUA uses AWS-designed processors with AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors, implemented in FPGAs, to accelerate operations such as scans, filtering, and aggregation.

    AQUA is available with the RA3.16xlarge, RA3.4xlarge, or RA3.xlplus nodes at no additional charge and with no code changes. You can enable AQUA for your existing Redshift RA3 clusters or launch a new AQUA-enabled RA3 cluster via the AWS Management Console, API, or AWS CLI. To learn more about AQUA, visit the documentation.
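
    For example, with boto3 (the cluster identifier is a placeholder; a status of 'auto' lets Redshift decide when to use AQUA):

        import boto3

        redshift = boto3.client("redshift")

        # Enable AQUA on an existing RA3 cluster.
        redshift.modify_aqua_configuration(
            ClusterIdentifier="my-ra3-cluster",
            AquaConfigurationStatus="enabled",
        )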

    AQUA is now generally available in US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (London), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) regions, and will expand to additional regions in the coming months.

    » Amazon Redshift launches RA3 Reserved Instance migration feature

    Posted On: Nov 23, 2021

    The Amazon Redshift RA3 Reserved Instance (RI) migration feature is now available in the Amazon Redshift Console, CLI, and API to help migrate your DS2 RI clusters to RA3 RI clusters.

    The Amazon Redshift RA3 RI migration feature is available to Amazon Redshift customers that have eligible DS2.xlarge and DS2.8xlarge clusters with RIs purchased for those clusters in the same account, and where the cluster size meets the requirements. You can migrate the DS2 RI clusters to equivalent RA3 RI clusters as part of a cross-instance resize or cross-instance snapshot restore operation. The RA3 RI covering the new cluster will have the same cost and the same calendar terms as the original DS2 RI for supported configurations.

    This feature is now available in the following public AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney).

    It will be available in the remaining public AWS regions: South America (Sao Paulo), Europe (Milan), Europe (Stockholm), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Mumbai), China (Beijing), China (Ningxia), AWS GovCloud (US-West), AWS GovCloud (US-East), US West (N. California), Africa (Cape Town), Middle East (Bahrain), Canada (Central), Europe (London), Europe (Paris) and Asia Pacific (Osaka) on December 20, 2021. Refer to the AWS Region Table for Amazon Redshift availability. To learn more about RA3 nodes, see the Amazon Redshift RA3 feature page. To learn more about DS2 to RA3 upgrades, see the Upgrading to RA3 node types section of the Amazon Redshift documentation. You can find more information on pricing by visiting the Amazon Redshift pricing page.

    » AWS Single Sign-On now provides one-click login to Amazon EC2 instances running Microsoft Windows

    Posted On: Nov 23, 2021

    You can now enable one-click single sign-on to your Amazon Elastic Compute Cloud instances running Microsoft Windows (Amazon EC2 Windows Instances) with AWS Single Sign-On (AWS SSO). You can connect your instances with users from AWS SSO or any AWS SSO supported identity provider, such as Okta, Ping, and OneLogin. This makes it easy for you to access your instance desktops from anywhere without having to enter your credentials multiple times or having to configure remote access client software. Now, you can use your existing corporate usernames, passwords, and multi-factor authentication devices to securely access your Amazon EC2 Windows Instances, eliminating the use of shared administrator credentials. In addition, you have visibility into individual user actions which can be viewed in the Amazon EC2 Windows event log, making it easier to meet audit and compliance requirements.

    With AWS SSO, you can centrally grant and revoke access to your Amazon EC2 Windows Instances at scale across multiple AWS accounts. For example, if you remove an employee from your AWS SSO integrated identity system, their access to all AWS resources (including Amazon EC2 Windows Instances) is automatically revoked.

    You can enable the one-click single sign-on experience in the AWS Systems Manager Fleet Manager console with a few simple configuration steps. This new feature is available in all AWS Regions where AWS SSO and AWS Systems Manager are offered (excluding AWS China Regions and AWS GovCloud [US]).

    To learn more please see our blog post.

    » Amazon SQS Announces Server-Side Encryption with Amazon SQS-managed encryption keys (SSE-SQS)

    Posted On: Nov 23, 2021

    Amazon Simple Queue Service (SQS) now provides managed server-side encryption using SQS owned encryption keys (SSE-SQS) to protect sensitive data. SSE-SQS helps you build security-sensitive applications to support your encryption compliance and regulatory requirements. 

    Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using Amazon SQS, you can send, store, and receive messages between software components at any volume without losing messages or requiring other services to be available. Customers are increasingly decomposing their monolithic applications into microservices and moving sensitive workloads to Amazon SQS, such as financial and healthcare applications with encryption requirements. Now SSE-SQS helps you transmit data securely and improve your security posture.

    Amazon SQS already supports server-side encryption with customer-managed keys in the AWS Key Management Service (SSE-KMS). When creating a new queue, you can now use either SSE-SQS or SSE-KMS. With SSE-SQS, you do not need to create or manage any encryption keys. Both encryption options help to reduce the operational burden and complexity involved in protecting data. They encrypt data using industry-standard AES-256 algorithms, so that only authorized roles and services can access data.

    With SSE-SQS, you do not have to make any code or application modifications to encrypt your data. Encryption at rest using SSE-SQS is provided at no additional charge. SQS handles the encryption and decryption of your data transparently and continues to deliver the same performance that you have come to expect.
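
    For example, a queue with SSE-SQS enabled can be created with a single attribute (the queue name is a placeholder):

        import boto3

        sqs = boto3.client("sqs")

        # Create a queue encrypted at rest with SQS-owned keys (SSE-SQS).
        sqs.create_queue(
            QueueName="orders-queue",
            Attributes={"SqsManagedSseEnabled": "true"},
        )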

    Support for SSE-SQS is available in all AWS Commercial and GovCloud Regions except the China Regions. To learn more about SSE-SQS on Amazon SQS, please visit the Amazon SQS documentation.

    » Announcing usability improvements in the navigation bar of the AWS Management Console

    Posted On: Nov 23, 2021

    Today, we launched usability improvements for the navigation bar in the AWS Management Console. The improvements include a customizable favorites bar, updates to the services menu, and visual updates for consistency and accessibility. The new favorites bar appears when you have selected at least one service as a favorite in the services menu. It also supports an unlimited number of favorites that can be organized with drag and drop. The updated services menu groups services by category and provides an A to Z listing of all services. 

    The usability improvements are available in all public AWS Regions. To experience these improvements, visit the AWS Management Console.

    » Amazon ECS announces a new integration with AWS Distro for OpenTelemetry

    Posted On: Nov 23, 2021

    Amazon Elastic Container Service (Amazon ECS) now enables customers to quickly get started monitoring and debugging their applications with traces and custom metrics using AWS Distro for OpenTelemetry (ADOT). This feature allows Amazon ECS customers to use the console to enable metrics and trace collection, and then export to Amazon CloudWatch, Amazon Managed Service for Prometheus, and AWS X-Ray with just a few clicks. This experience simplifies a multi-step manual process of configuring ADOT in task definitions, and enables customers to solve application availability and performance issues.

    This new task definition creation experience gives you an option to enable metrics and trace collection, and allows users to export to AWS X-Ray, Amazon CloudWatch, and Amazon Managed Service for Prometheus. If you enable the metrics collection feature and export to Amazon Managed Service for Prometheus, ADOT is designed to automatically collect task-level metrics (CPU/memory/network/storage) with Amazon ECS metadata and export them to the selected destination. You can also instrument your application code using ADOT SDKs, which are available in multiple languages, to collect and export traces to AWS X-Ray. ADOT is configured as a sidecar collector in task definitions, giving users full control to manually edit ADOT configurations to add additional processors, filters, or exports to other AWS or partner destinations.
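
    As a sketch of what the console generates, the task definition gains an ADOT collector sidecar alongside the application container (the images, names, and IAM role wiring are illustrative):

        import boto3

        ecs = boto3.client("ecs")

        # Task definition with an ADOT collector sidecar using the ECS default config.
        ecs.register_task_definition(
            family="app-with-adot",
            requiresCompatibilities=["FARGATE"],
            networkMode="awsvpc",
            cpu="512",
            memory="1024",
            executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # placeholder
            containerDefinitions=[
                {
                    "name": "app",
                    "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
                    "essential": True,
                },
                {
                    "name": "aws-otel-collector",
                    "image": "public.ecr.aws/aws-observability/aws-otel-collector:latest",
                    "essential": True,
                    "command": ["--config=/etc/ecs/ecs-default-config.yaml"],
                },
            ],
        )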

    This new experience on the Amazon ECS console is available in all commercial AWS regions. To get started, review these resources:

    * Getting started blog
    * AWS Distro for OpenTelemetry
    * New Amazon ECS console
    * Create a new task definition using the new console steps

    » Announcing data tiering for Amazon ElastiCache for Redis

    Posted On: Nov 23, 2021

    You can now use data tiering for Amazon ElastiCache for Redis as a lower cost way to scale your clusters to up to hundreds of terabytes of capacity. Data tiering provides a new price-performance option for Redis workloads by utilizing lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20% of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD.

    When using clusters with data tiering, ElastiCache is designed to automatically and transparently move the least recently used items from memory to locally attached NVMe SSDs when available memory capacity is completely consumed. When an item that moves to SSD is subsequently accessed, ElastiCache moves it back to memory asynchronously before serving the request. Assuming 500-byte String values, you can expect an additional 300µs latency on average for requests to data stored on SSD compared to requests to data in memory.

    ElastiCache data tiering is available when using Redis version 6.2 and above on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% savings when running at maximum utilization compared to R6g nodes (memory only).

    To get started using ElastiCache data tiering, create a new cluster using one of the R6gd node types using the AWS Management Console for ElastiCache, the AWS CLI, or one of the SDKs. Data tiering on R6gd nodes is available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon) Regions. For pricing, see Amazon ElastiCache pricing and for more information, see the ElastiCache data tiering documentation.
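
    For example, a hedged boto3 sketch of a tiered Redis 6.2 replication group (the group ID is a placeholder, and the DataTieringEnabled flag is our reading of the API for R6gd node types):

        import boto3

        elasticache = boto3.client("elasticache")

        # Redis 6.2 replication group on R6gd nodes with data tiering.
        elasticache.create_replication_group(
            ReplicationGroupId="tiered-cache",
            ReplicationGroupDescription="Redis with data tiering",
            Engine="redis",
            EngineVersion="6.2",
            CacheNodeType="cache.r6gd.xlarge",
            NumCacheClusters=2,
            DataTieringEnabled=True,
        )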

    » Amazon Connect Customer Profiles now stores contact history at no charge to help personalize customer service

    Posted On: Nov 23, 2021

    Amazon Connect Customer Profiles now provides contact history and customer information together in unified customer profiles at no charge, helping contact center managers personalize the contact center experience. Previously, contact center managers needed to work with software development teams to build profiles of end customers and their contact history. Now, they can use Customer Profiles at no charge to automatically store Amazon Connect contact history in a customer-centric view along with customer information such as name, phone number, account number, and address. Agents can access Customer Profiles through either the out-of-the-box Amazon Connect agent application or their company’s custom agent applications, enabling them to provide more personalized customer service. Contact center managers can also use the Customer Profiles contact block when designing contact flows to personalize and automate the contact center experience.

    Starting today, every new Amazon Connect instance includes Customer Profiles. Customers can also enable Customer Profiles for their existing Amazon Connect instances through the AWS console.

    Customers can also use the paid features of Customer Profiles to build a more complete customer view. You can enrich profiles with data from external applications using pre-built connectors for Salesforce, Zendesk, ServiceNow, S3, and more. With the built-in Identity Resolution feature, you can also identify and unify similar profiles into a single unique customer view. To use these paid features, you pay monthly for the number of profiles that store data from external applications or use Identity Resolution.

    To learn more about Customer Profiles please visit our webpage.

    » New features for AWS IoT Core Device Advisor

    Posted On: Nov 23, 2021

    AWS IoT Core Device Advisor now supports the capability to run multiple test suites at the same time. Device developers can use this capability to complete testing faster by testing multiple devices simultaneously. Developers can also test their devices more comprehensively by using new MQTT test cases, such as a test to validate the device behavior when the device is disconnected from the server side. The Device Advisor console also provides a new and simpler way for developers to quickly review and create an IAM role in a few clicks, enabling developers to grant permissions to Device Advisor for connecting with AWS IoT Core on behalf of their test devices.

    Device Advisor is a fully managed cloud-based test capability to validate IoT devices for reliable and secure connectivity with AWS IoT Core. Any device that has been built to connect to AWS IoT Core can take advantage of Device Advisor. Device Advisor also provides a signed qualification report which can be used by hardware partners to qualify their devices for inclusion in the AWS Partner Device Catalog. To learn more and get started with the new features of Device Advisor, see Device Advisor’s overview page and technical documentation.

    » Amazon S3 Lifecycle further optimizes storage cost savings with new actions and filters

    Posted On: Nov 23, 2021

    You can now set Amazon S3 Lifecycle rules to limit the number of versions of an object that you retain, achieving greater storage savings, and to choose objects to move to other storage classes based on size to optimize your lifecycle transitions. S3 Lifecycle helps you optimize your storage costs by transitioning or expiring your objects as they get older or are replaced by newer versions. You can use these Lifecycle configurations for your whole bucket, or for a subset of your objects by filtering by prefixes, object tags, or object size.

    You can now use finer-grained controls to manage your S3 Lifecycle rules with actions based on the number of noncurrent versions and filters based on object size. For example, you can optimize your costs by transitioning only large media files to storage classes such as S3 Glacier or S3 Glacier Deep Archive. Additionally, you can save costs by deleting old (noncurrent) versions of an object after 5 days and when there are at least 2 newer versions of the object. This allows you to have additional versions of your objects as you need, but saves you cost by transitioning or removing them after a period of time.
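
    The example above maps to a single Lifecycle rule; a boto3 sketch (the bucket, prefix, and thresholds are placeholders):

        import boto3

        s3 = boto3.client("s3")

        # Archive only larger objects, keep 2 newer noncurrent versions, and
        # expire older noncurrent versions after 5 days.
        s3.put_bucket_lifecycle_configuration(
            Bucket="my-bucket",
            LifecycleConfiguration={"Rules": [{
                "ID": "tiering-and-version-trim",
                "Status": "Enabled",
                "Filter": {"And": {"Prefix": "media/", "ObjectSizeGreaterThan": 131072}},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 5,
                    "NewerNoncurrentVersions": 2,
                },
            }]},
        )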

    New S3 Lifecycle rules with actions based on the number of noncurrent versions and filters based on object size are available in all AWS Regions, including the AWS GovCloud (US) Regions. You can get started with new actions and filters in S3 Lifecycle at no additional cost through the S3 console, AWS Command Line Interface (CLI), the Application Programming Interface (API), and the AWS Software Development Kit (SDK) client. For transition and storage pricing, please visit the Amazon S3 pricing page.

    To learn more about S3 Lifecycle, visit the S3 User Guide.

    » Application Load Balancer and Network Load Balancer end-to-end IPv6 support

    Posted On: Nov 23, 2021

    Application Load Balancers and Network Load Balancers now support end-to-end connectivity with Internet Protocol version 6 (IPv6). Clients can now connect to application and network load balancers and access backend applications over IPv6.

    With this launch, you can set your load balancer to “dual stack” mode, allowing it to accept both IPv4 and IPv6 client connections. While dual stack mode on internet-facing load balancers has been available, this launch extends support to internal load balancers by adding protections to help prevent unintended internet access via IPv6 through an internet gateway.

    Additionally, application and network load balancers now support load balancing to targets addressed by IPv6. With this launch, you can now create IPv6 target groups, allowing IPv4 and IPv6 clients to connect to targets in IPv6/dual-stack subnets. Similar to IPv4, Network Load Balancers can preserve the source IP addresses of IPv6 clients connecting to IPv6 target groups and Application Load Balancers deliver the client IP in an HTTP header.

    IPv6 support on Application Load Balancers and Network Load Balancers is available in all AWS commercial and GovCloud (US) Regions. To get started, enable IPv6 for your VPC, register IPv6 targets into a new IPv6 type target group, and select “dualstack” as the IP address type on the load balancers.
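
    In boto3 terms, the two steps look roughly like this (the ARNs and IDs are placeholders):

        import boto3

        elbv2 = boto3.client("elbv2")

        # Accept IPv6 clients on an existing load balancer.
        elbv2.set_ip_address_type(
            LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                            "loadbalancer/app/my-alb/0123456789abcdef",
            IpAddressType="dualstack",
        )

        # Register targets by IPv6 address via an IPv6 target group.
        elbv2.create_target_group(
            Name="ipv6-targets",
            Protocol="HTTP",
            Port=80,
            VpcId="vpc-0123456789abcdef0",
            TargetType="ip",
            IpAddressType="ipv6",
        )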

    To learn more, please refer to the ALB documentation and the NLB documentation.

    » Amazon Lex launches support for Amazon Polly Neural Text-To-Speech (NTTS) voices for speech interactions

    Posted On: Nov 23, 2021

    Amazon Lex now supports Amazon Polly Neural Text-to-Speech (NTTS) voices for your bots, allowing your bots to respond to your users with richer, more expressive, and natural-sounding voices than standard Polly Text-to-Speech (TTS) voices. Polly NTTS voices deliver advanced improvements in speech quality through a new machine learning approach. Amazon Lex is natively integrated with Amazon Polly for voice interactions. Until today, Lex developers could only configure bots to use Polly’s standard Text-to-Speech (TTS) voices. Starting today, you can configure bots built through Lex V2 APIs and console to use Polly NTTS voices for any language that supports an NTTS option to improve user experience and boost customer engagement.

    Support for Amazon Polly NTTS voices for Amazon Lex bots is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London).

    The Amazon Polly voice for bot interactions can be configured using the Amazon Lex console, the AWS Command Line Interface (CLI), or APIs. For more information, please visit the Amazon Lex documentation. For information on Amazon Polly voices, please visit Voices in Amazon Polly.
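
    For example, a hedged sketch of switching a Lex V2 bot locale to a neural voice (the bot ID and confidence threshold are placeholders):

        import boto3

        lex = boto3.client("lexv2-models")

        # Use a neural Polly voice for the en_US locale of a draft bot.
        lex.update_bot_locale(
            botId="ABCDEFGHIJ",
            botVersion="DRAFT",
            localeId="en_US",
            nluIntentConfidenceThreshold=0.40,
            voiceSettings={"voiceId": "Joanna", "engine": "neural"},
        )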

    » Announcing AWS Fargate for Amazon ECS Powered by AWS Graviton2 Processors

    Posted On: Nov 23, 2021

    AWS Fargate for Amazon Elastic Container Service (Amazon ECS) powered by AWS Graviton2 processors is now generally available. AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores, and Graviton2-powered Fargate delivers up to 40% improved price/performance at 20% lower cost over comparable Intel x86-based Fargate for a variety of workloads such as application servers, web services, high-performance computing, and media processing. This adds even more choice to help customers optimize performance and cost for running containerized workloads on Fargate’s serverless compute.

    Most applications built on Linux utilizing open-source software can run on multiple processor architectures and are well suited for Graviton2-powered Fargate. Developers can build Arm-compatible applications or leverage multi-architecture container images in Amazon ECR to run on Graviton2-powered Fargate. Fargate takes care of scaling, patching, securing, and managing servers so customers can focus on building applications. Customers simply specify ARM64 as the CPU architecture in their Amazon ECS task definition to run their applications on Graviton2-powered Fargate, as sketched below.
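
    A minimal boto3 sketch of registering such a task definition (the family name and image URI are placeholders):

        import boto3

        ecs = boto3.client("ecs")

        # Register a Fargate task definition targeting Graviton2.
        ecs.register_task_definition(
            family="my-arm64-service",
            requiresCompatibilities=["FARGATE"],
            networkMode="awsvpc",
            cpu="1024",
            memory="2048",
            runtimePlatform={
                "cpuArchitecture": "ARM64",
                "operatingSystemFamily": "LINUX",
            },
            containerDefinitions=[
                {
                    "name": "app",
                    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:arm64",
                    "essential": True,
                }
            ],
        )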

    AWS Graviton2 support on AWS Fargate is available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), South America (São Paulo). You can find the regional pricing information on the AWS Fargate pricing page. This feature is supported on Fargate platform version 1.4.0 or later. Visit our documentation page or read more in the blog post about using Graviton2-powered Fargate compute via the API, AWS Command Line Interface (CLI), AWS SDKs, Amazon ECS Console, or the AWS Copilot CLI.

    » New data management APIs for Amazon FinSpace

    Posted On: Nov 23, 2021

    Amazon FinSpace now provides data management APIs that allow customers to work with data in their Amazon FinSpace environment using the AWS SDK and CLI. With these new APIs, customers can add steps to their automated workflows that manage their data resources in Amazon FinSpace. Using the new APIs, customers can create Amazon FinSpace datasets, load data using change sets, and create point-in-time views for analysis. 

    For example, customers can automate loading and updating stock market trades and quotes so they are continuously available for analysis in Amazon FinSpace. From there, customers can also automate the generation of historical point-in-time views of this data for backtesting.

    The data management APIs are available in all AWS regions where Amazon FinSpace is offered. There is no charge to use these APIs beyond standard Amazon FinSpace usage charges. To learn more about the new APIs, see the Amazon FinSpace Data API reference. To learn more about the Amazon FinSpace Data API, see the Amazon FinSpace API page.

    » AWS Systems Manager Fleet Manager now provides console based access to Windows instances with enhanced security protocols

    Posted On: Nov 23, 2021

    Fleet Manager, a feature in AWS Systems Manager (SSM) that helps IT Admins streamline and scale their remote server management processes, now enables a console-based management experience for Windows instances. This new feature provides customers a full graphical interface to set up secure connections to and manage Windows instances. You no longer need to install additional software, set up additional servers, or open direct inbound access to ports on the instance.

    Fleet Manager now provides a simple browser-based means to access Windows servers using the Remote Desktop Protocol (RDP) with enhanced security protocols. RDP connections to Windows servers are established through a few simple steps in the console, providing access to your server or server-based application. With this feature, you can open connections to multiple servers at once and access them from the same console, removing the need to switch back and forth between tabs. In addition to standard credential-based access, you can use AWS Single Sign-On and third-party identity providers such as Ping and Okta for a seamless one-click log-in experience.

    Fleet Manager is a console based experience in Systems Manager that provides you with visual tools to manage your Windows, Linux, and macOS servers. With it, you can easily perform common administrative tasks such as file system exploration, log management, Windows Registry operations, performance counters, and user management from a single console. Fleet Manager manages instances running both on AWS and on-premises, without needing to remotely connect to the servers.

    This new feature in Fleet Manager is available in all AWS Regions where Systems Manager is offered (excluding AWS China Regions and AWS GovCloud [US]). To learn more about Fleet Manager, visit our webpage, read our blog post, or see our documentation and AWS Systems Manager FAQs. To get started, choose Fleet Manager from the Systems Manager left navigation pane.

    » Amazon Connect now supports contact flow modules to simplify repeatable logic

    Posted On: Nov 23, 2021

    Amazon Connect now supports modules to simplify the creation and management of repeatedly used contact flow logic. Contact flow modules are a set of user-defined blocks centrally managed in an Amazon Connect instance that can be referenced in multiple contact flows. For example, a customer may want to perform the same steps of identifying intent, authenticating the account number, and updating contact attributes across multiple different contact flows. With contact flow modules, the customer only has to build the contact flow logic once and then reference the module in the applicable contact flows. Any time updates to a module are published, the changes are reflected directly in all the contact flows that reference the updated module. Module access, editing, and publishing are managed through the Amazon Connect console.

    Contact flow modules are available in all AWS regions where Amazon Connect is available. To learn more about contact flow modules, see the help documentation or read this blog post. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

    » Amazon Virtual Private Cloud (VPC) customers can now create IPv6-only subnets and EC2 instances

    Posted On: Nov 23, 2021

    Starting today, Amazon Virtual Private Cloud (VPC) allows you to create IPv6-only subnets in your dual-stack VPCs and launch EC2 instances built on the AWS Nitro System in these subnets. IPv6-only subnets allow customers to scale their deployments on AWS without requiring any IPv4 addressing in the subnet. A /64 IPv6 CIDR assigned to the subnet accommodates approximately 18 quintillion IP addresses for applications.

    Customers can create an IPv6-only subnet in an existing dual-stack VPC using the AWS Console or EC2 APIs. EC2 instances launched in an IPv6-only subnet, and specifically the elastic network interfaces (ENIs) attached to them, no longer require private IPv4 addresses. Instead, every ENI created and attached to an instance launched in an IPv6-only subnet is assigned an IPv6 address from the subnet’s configured IPv6 CIDR range. Instances in an IPv6-only subnet can also reach on-instance services over IPv6 link-local addresses.
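
    A minimal boto3 sketch of creating such a subnet (the VPC ID, CIDR, and Availability Zone are placeholders; the VPC is assumed to already have an IPv6 CIDR):

        import boto3

        ec2 = boto3.client("ec2")

        # Create an IPv6-only subnet in an existing dual-stack VPC.
        ec2.create_subnet(
            VpcId="vpc-0123456789abcdef0",
            Ipv6Native=True,
            Ipv6CidrBlock="2600:1f13:abc:de00::/64",
            AvailabilityZone="us-east-1a",
        )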

    IPv6-only subnets and EC2 instances are available in all AWS commercial and AWS GovCloud (US) regions at no additional charge. To get started with IPv6-only subnets and instances, use the AWS Console or API. For more information on this enhancement, please read about IPv6-only subnets in our documentation and the blog post.

    » AWS Amplify expands its Notifications category to include in-app messaging (Developer Preview)

    Posted On: Nov 23, 2021

    AWS Amplify is launching a developer preview of its expanded Notifications category for JavaScript. Powered by Amazon Pinpoint, this expansion allows developers to instrument in-app messaging to drive engagement and monetization.

    In-app messaging gives businesses the ability to personalize app experiences for their users by enabling contextual in-app messages and in-app content, such as sending promotional alerts when the user opens an app. These messages nudge users to complete key in-app actions, like promoting an item, getting payment reminders, or discovering new app features. Developers can integrate the Amplify JavaScript library into their app and enable in-app messaging capabilities with just a few lines of code.

    The Amplify JavaScript library makes it easy for developers to instrument the app and rely on marketers to create campaigns. Once a marketer has configured a marketing event in the AWS Pinpoint console, Amplify automatically maps the UI designs built by marketers in the Pinpoint console to UI in the app, saving developers the time of building out the UI. Amplify also handles all the logic for validating whether an in-app message should be displayed on the device.

    Amplify also automatically hooks into Pinpoint analytics events to trigger in-app message displays on events that occur in the app. For example, when a customer goes to view their shopping cart, Amplify can both record the event and display a relevant in-app message if applicable.

    Get started with Amplify In-App Messaging or learn about Pinpoint.

    » Amazon Connect launches APIs to archive and delete contact flows

    Posted On: Nov 23, 2021

    Amazon Connect now provides two new APIs to archive/unarchive and delete contact flows. The new APIs provide a programmatic and flexible way to manage your library of contact flows at scale. For example, contact flows used only during certain times of the year can be archived when not in use and then unarchived when needed. You can now also delete a contact flow so it is no longer available for use. To learn more about the new APIs, see the API documentation.
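
    A hedged boto3 sketch of using the new APIs (the instance and contact flow IDs are placeholders):

        import boto3

        connect = boto3.client("connect")

        INSTANCE_ID = "11111111-2222-3333-4444-555555555555"  # placeholder
        FLOW_ID = "66666666-7777-8888-9999-000000000000"      # placeholder

        # Archive a seasonal contact flow so it cannot be used until unarchived.
        connect.update_contact_flow_metadata(
            InstanceId=INSTANCE_ID,
            ContactFlowId=FLOW_ID,
            ContactFlowState="ARCHIVED",
        )

        # Permanently delete a contact flow that is no longer needed.
        connect.delete_contact_flow(
            InstanceId=INSTANCE_ID,
            ContactFlowId=FLOW_ID,
        )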

    The new archive and delete flows APIs are available in all AWS regions where Amazon Connect is available. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports checking for blue/green deployment when making configuration changes

    Posted On: Nov 23, 2021

    You can now check whether a configuration change will require a blue/green deployment from the Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) console or using the Amazon OpenSearch Service APIs. With this new option, you can plan and make configuration changes that require a blue/green deployment when your cluster is not at its peak traffic.

    Previously, you had to depend on documentation to identify whether a change would require a blue/green deployment, and even then, in certain scenarios, it was not completely deterministic. Now, when you make a configuration change using the Amazon OpenSearch Service console, you can use the ‘Run Analysis’ option to check whether the change will require a blue/green deployment before you actually make it. Depending on the result, you can plan to make the change at a time when the cluster is not experiencing peak traffic, helping you avoid disruption or latency for your users during a blue/green deployment. You can also check whether a change will require a blue/green deployment through the configuration APIs by using the new ‘DryRun’ parameter when calling the UpdateDomainConfig API, as sketched below.
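
    A minimal boto3 sketch of the dry-run check (the domain name and sizing are placeholders):

        import boto3

        opensearch = boto3.client("opensearch")

        # Check whether an instance-type change would trigger a blue/green
        # deployment, without actually applying the change.
        response = opensearch.update_domain_config(
            DomainName="my-domain",
            ClusterConfig={"InstanceType": "r6g.large.search", "InstanceCount": 3},
            DryRun=True,
        )
        print(response["DryRunResults"]["DeploymentType"])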

    The new blue/green check is available across 23 regions globally. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    For more information about this feature, please see the documentation. To learn more about Amazon OpenSearch Service, please visit the product page.

    » Announcing Amazon Redshift cross-region data sharing (preview)

    Posted On: Nov 23, 2021

    Amazon Redshift data sharing allows you to share live, transactionally consistent data across different Redshift clusters without the complexity and delays associated with data copies and data movement. The ability to share data across clusters in the same AWS account, and across accounts, is already available. Now, sharing data across Redshift clusters in different AWS regions is available in preview. The cross-region data sharing preview is supported on all Redshift RA3 node types.

    With data sharing, you can securely share data at many levels, including schemas, tables, views, and user-defined functions, and use fine-grained controls to specify access for each data consumer. With cross-region data sharing, you can now share live data across clusters that are in the same or different accounts and in different AWS regions. No manual data copies or data replication is required. Users with access to shared data can continue to discover and query the data with high performance using standard SQL and analytics tools. Queries accessing shared data use the compute resources of the consumer Redshift cluster and do not impact the performance of the producer cluster. With cross-region data sharing, data transfer fees apply to the data accessed to serve data sharing queries across regions.

    Cross-region data sharing preview is available for all Amazon Redshift RA3 node types in regions where RA3 is available. Learn more about data sharing capability in the feature page and refer to the documentation on how to get started. Cross-region data sharing transfer pricing is available on the pricing page.

    » AWS Lambda now supports partial batch response for SQS as an event source

    Posted On: Nov 23, 2021

    AWS Lambda now supports partial batch response for SQS as an event source. With this feature, when messages on an SQS queue fail to process, Lambda marks the batch as partially successful and allows reprocessing of only the failed records. By handling failures at the record level instead of the batch level, Lambda removes the need for repetitive data transfer, increasing throughput and making Amazon SQS message queue processing more efficient.

    Until now, a batch being processed through SQS polling would either be completely successful, in which case the records would be deleted from the SQS queue, or completely fail, in which case the records would be kept on the queue to be reprocessed after a ‘visibility timeout’ period. With the partial batch response feature, the SQS queue retains only those records that could not be successfully processed, improving processing performance.
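
    A minimal sketch of a handler that returns a partial batch response (the process function stands in for hypothetical business logic):

        # Report only failed records back to Lambda, so successfully
        # processed messages are deleted from the queue.
        def handler(event, context):
            failures = []
            for record in event["Records"]:
                try:
                    process(record["body"])  # hypothetical business logic
                except Exception:
                    failures.append({"itemIdentifier": record["messageId"]})
            return {"batchItemFailures": failures}

    For the response to be honored, the event source mapping must have FunctionResponseTypes set to ReportBatchItemFailures (for example, via update_event_source_mapping in boto3).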

    This feature is available for both standard and FIFO SQS queues, in all commercial regions where AWS Lambda is available. There are no additional charges for using this feature beyond the standard price for AWS Lambda.

    To learn more about Partial Batch Responses, refer to the documentation on using AWS Lambda with SQS.

    » New Multi-AZ deployment option for Amazon RDS for PostgreSQL and for MySQL; increased read capacity, lower and more consistent write transaction latency, and shorter failover time (Preview)

    Posted On: Nov 23, 2021

    Amazon Relational Database Service (Amazon RDS) for MySQL and for PostgreSQL now supports a new Multi-AZ deployment option with one primary and two readable standby database instances. This deployment option optimizes write transactions and is ideal when your workloads require additional read capacity, lower write transaction latency, more resilience from network jitter (which impacts the consistency of write transaction latency), and high availability and durability.

    Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon RDS database (DB) instances, making them a natural fit for production database workloads. In a Multi-AZ deployment of a DB instance, Amazon RDS automatically creates a primary DB instance and replicates the data to a standby DB instance in a different Availability Zone (AZ). In the case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby, so that database operations resume as soon as the failover is complete. Since the endpoint for the DB Instance remains the same after a failover, the application can resume database operations without the need for manual administrative intervention.

    Now, Amazon RDS offers a second Multi-AZ deployment option—a Multi-AZ deployment with readable standby DB instances. When you select this new option, Amazon RDS provisions one primary and two standby DB instances across three AZs and then automatically configures data replication. The standby DB instances act as automatic failover targets and can also serve read traffic to increase throughput without needing to attach additional read replica DB instances. You can connect to your readable standby DB instances by using a managed read-only endpoint or the individual endpoints of each.
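
    A rough boto3 sketch, assuming the readable-standby deployment is provisioned as a Multi-AZ DB cluster via create_db_cluster (the preview’s exact parameters may differ; all identifiers, credentials, and sizing below are placeholders):

        import boto3

        rds = boto3.client("rds")

        # Create a Multi-AZ deployment with one writer and two readable standbys.
        rds.create_db_cluster(
            DBClusterIdentifier="my-multiaz-cluster",
            Engine="mysql",
            EngineVersion="8.0.26",
            DBClusterInstanceClass="db.m6gd.large",
            StorageType="io1",
            AllocatedStorage=100,
            Iops=1000,
            MasterUsername="admin",
            MasterUserPassword="choose-a-strong-password",
        )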

    The readable standby option for Amazon RDS Multi-AZ deployments works with AWS Graviton2 R6gd and M6gd DB instances (with NVMe-based SSD instance storage) and Provisioned IOPS Database Storage. The Preview is available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions. Amazon RDS for MySQL supports the Multi-AZ readable standby option for MySQL version 8.0.26. Amazon RDS for PostgreSQL supports the Multi-AZ readable standby option for PostgreSQL version 13.4.

    Learn more about Multi-AZ deployments in the Amazon RDS User Guide and in the AWS Database Blog. See Amazon RDS Pricing for pricing details. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

    » Amazon ElastiCache for Redis adds support for Redis 6.2

    Posted On: Nov 23, 2021

    Amazon ElastiCache for Redis now supports Redis 6.2. ElastiCache for Redis 6.2 includes performance improvements for TLS-enabled clusters using x86 node types with 8 vCPUs or more or Graviton2 node types with 4 vCPUs or more. These enhancements are designed to improve throughput and reduce client connection establishment time by offloading encryption to other CPUs. With Amazon ElastiCache for Redis 6.2, you can also manage access to Pub/Sub channels with Access Control List (ACL) rules. For the full list of improvements in Amazon ElastiCache for Redis 6.2 (enhanced), refer to Supported ElastiCache for Redis versions.

    You can upgrade the engine version of your cluster or replication group by modifying it and specifying 6.2 as the engine version. To learn more about upgrading engine versions, refer to Version Management.
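
    A minimal boto3 sketch of that upgrade (the replication group ID is a placeholder):

        import boto3

        elasticache = boto3.client("elasticache")

        # Upgrade an existing replication group to Redis 6.2 in place.
        elasticache.modify_replication_group(
            ReplicationGroupId="my-redis-cluster",
            EngineVersion="6.2",
            ApplyImmediately=True,
        )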

    Amazon ElastiCache for Redis 6.2 support is available in all AWS commercial regions and AWS GovCloud (US) Regions. To get started, log on to the AWS Management Console.

    » AWS Amplify announces a redesigned, more extensible GraphQL Transformer for creating app backends quickly

    Posted On: Nov 23, 2021

    AWS Amplify announces GraphQL Transformer version 2, enabling developers to develop more feature-rich, flexible, and extensible GraphQL-based app backends even with minimal cloud expertise. The AWS Amplify CLI is a command line toolchain that helps frontend developers create app backends in the cloud. With the GraphQL Transformer, developers can model their backend data using the GraphQL Schema Definition Language, and the Amplify CLI automatically transforms the schema into a fully functioning GraphQL API with its underlying cloud infrastructure.

    With the GraphQL Transformer version 2, developers get a new, simpler data modeling experience for data model relationships. The new @hasOne, @hasMany, @manyToMany GraphQL directives help developers model relationships between tables without having to configure underlying implementation details such as foreign keys or indexes. Also new in version 2, developers can secure their data model using an updated @auth directive that provides deny-by-default authorization, as well as the ability to configure global, model-level, and field-level authorization rules. Developers can then audit the effective permissions using a new feature for printing out the access control matrix. Lastly, developers now gain the ability to replace Amplify-generated resolver functions or extend the Amplify-generated resolvers with their own custom business logic. The new GraphQL Transformer is redesigned from the ground up to generate extensible pipeline resolvers to route a GraphQL API request, apply business logic, such as authorization, and communicate with the underlying data source (such as DynamoDB or OpenSearch).

    Learn more about how to set up the Amplify CLI’s new GraphQL Transformer in our blog post or in the Amplify documentation.

    » Amazon Voice Focus as an Amazon Machine Image

    Posted On: Nov 23, 2021

    Amazon Voice Focus, an industry-leading speech enhancement technology currently used for noise reduction in Amazon Chime SDK meetings, is now available packaged as an Amazon Machine Image (AMI) based on Amazon Linux 2 (AL2). The Amazon Voice Focus AMI helps developers, media producers, and content creators reduce noise in real-time speech capture or archived speech recordings. It is a cloud component that application builders can insert into their streaming media and content production pipelines to help reduce unwanted sounds and deliver the speech that users want to be heard.

    Amazon Voice Focus AMI helps reduce background noises such as fans, lawnmowers, and barking dogs as well as foreground noises like typing and shuffling papers. Media content processed with the Amazon Voice Focus AMI does not leave a customer’s AWS account or Virtual Private Cloud (VPC). This allows the customer to maintain security and control over their content while denoising their media with an award-winning machine learning algorithm.

    To learn more about the Amazon Voice Focus AMI, please see the following resources:

  • Getting Started with Amazon Voice Focus AMI
  • Amazon Chime Science Blog
  • Amazon Chime SDK website

    » Amazon RDS Proxy now supports PostgreSQL major version 12

    Posted On: Nov 22, 2021

    Amazon Relational Database Service (RDS) Proxy now supports RDS for PostgreSQL and Amazon Aurora PostgreSQL-Compatible Edition major version 12. PostgreSQL 12 includes better management of indexing, improved partitioning capabilities, JSON path queries per SQL/JSON specifications, and many other additional features.

    RDS Proxy is a fully managed and a highly available database proxy for Aurora and RDS databases. RDS Proxy helps improve application scalability, resiliency, and security.

    Related Resources:

  • RDS Proxy details page, tutorial, and documentation
  • RDS for PostgreSQL details page and documentation
  • Aurora PostgreSQL details page and documentation

    » Amazon EC2 Mac Instances now support hot attach and detach of EBS volumes

    Posted On: Nov 22, 2021

    Starting today, customers can dynamically attach and detach Amazon Elastic Block Store (EBS) volumes on their running Amazon EC2 Mac instances. Previously, customers attaching or detaching EBS volumes on EC2 Mac instances needed to reboot their instances for the revised EBS configuration to be reflected within their macOS guest environments. With this capability, customers no longer need to trigger an instance reboot and wait for it to complete when attaching or detaching EBS volumes on EC2 Mac instances.
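
    A minimal boto3 sketch (the device name and IDs are placeholders):

        import boto3

        ec2 = boto3.client("ec2")

        # Attach an EBS volume to a running EC2 Mac instance; no reboot is
        # required for the volume to appear in the macOS guest.
        ec2.attach_volume(
            Device="/dev/sdf",
            InstanceId="i-0123456789abcdef0",
            VolumeId="vol-0123456789abcdef0",
        )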

    This capability is supported in all AWS regions where EC2 Mac instances are available today. Learn more about EC2 Mac instances here or start a machine today in the AWS Console.

    » You can now import your AWS CloudFormation stacks into a CloudFormation stack set

    Posted On: Nov 22, 2021

    Today, AWS CloudFormation StackSets announces the capability to import existing CloudFormation stacks into a stack set. StackSets extend the functionality of stacks by letting you create, update, or delete stacks across multiple AWS accounts and regions with a single operation. You can now bring your existing CloudFormation stacks under the management purview of a new or existing stack set. This lets you create resources, applications, or environments across your AWS Organization and AWS Regions efficiently, and avoid the process of manually replicating and managing the infrastructure in each account and region individually.

    With the Import functionality, you can now efficiently replicate your cloud infrastructure described in a CloudFormation template, and manage that infrastructure in a centralized manner across your entire AWS Organization and Regions. You can avoid manually maintaining stacks individually in each Organizational Unit or Region, or having to delete and recreate the existing infrastructure in order to bring a particular stack into the purview of a stack set. For example, you can import security resources such as AWS IAM roles described in CloudFormation into a stack set and then centrally manage and deploy those IAM roles across the entire AWS Organization to achieve a consistent organization-wide security compliance in a scalable manner.

    To get started, use the CloudFormation console, AWS CLI, or AWS SDKs to begin the import process. You can specify the ID of the CloudFormation stack you intend to import to create a new stack set, or to add a stack to an existing stack set. Previously, you could import Stacks into StackSets created using the Self-managed permission model. This launch will allow you to import Stacks into StackSets created using the Service-managed permission model. You can use the StackSets import functionality in all AWS regions where AWS CloudFormation StackSets is currently available. For more information, please refer to the documentation.
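
    A minimal boto3 sketch of the import call (the stack set name and stack ARN are placeholders):

        import boto3

        cfn = boto3.client("cloudformation")

        # Import an existing stack into a stack set.
        cfn.import_stacks_to_stack_set(
            StackSetName="org-wide-iam-roles",
            StackIds=[
                "arn:aws:cloudformation:us-east-1:123456789012:stack/iam-roles/abc-123"
            ],
        )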

    » Announcing preview of Amazon Linux 2022

    Posted On: Nov 22, 2021

    Today, we are announcing the public preview of Amazon Linux 2022 (AL2022), Amazon's new general purpose Linux for AWS that is designed to provide a secure, stable, and high-performance execution environment to develop and run your cloud applications. Starting with AL2022, a new Amazon Linux major version will be available every two years and each version will be supported for five years. Customers will also be able to take advantage of quarterly updates via minor releases and use the latest software for their applications. Finally, AL2022 provides the ability to lock to a specific version of the Amazon Linux package repository, giving customers control over how and when they absorb updates.

    Customers use a variety of Linux-based distributions on AWS, including Amazon Linux 1 (AL1) and Amazon Linux 2 (AL2). These have become the preferred Linux choice for AWS customers because of no license costs, tight integration with AWS-specific tools and capabilities, immediate access to new AWS innovations, and a single-vendor support experience. AL2022 combines the benefits of our current Amazon Linux products with a predictable, two-year release cycle, so customers can plan for operating system upgrades as part of their product lifecycles. The two-year major release cycle provides customers the opportunity to keep their software current, while the five-year support commitment for each major release gives customers the stability they need to manage long project lifecycles.

    AL2022 uses the Fedora project as its upstream to provide customers with a wide variety of the latest software, such as updated language runtimes, as part of quarterly releases. In addition, AL2022 has SELinux enabled and enforced by default. This enables customers to improve their security posture, reduce the operational overhead that comes with managing custom security policies, and comply with common industry standards.

    AL2022 is now available in preview in all commercial regions and is provided at no additional charge. Standard Amazon EC2 and AWS charges apply for running Amazon EC2 instances and other services. You can launch AL2022 from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, RunInstances, or via an AWS CloudFormation template. To learn more about Amazon Linux 2022, please refer to the documentation. Feedback on AL2022 can be provided through your designated AWS representative, the Amazon Linux 2022 GitHub page, or the Amazon Linux Discussion Forums.

    » Amazon EventBridge cross-Region support now expands to more Regions

    Posted On: Nov 22, 2021

    Amazon EventBridge now supports all Regions, except AWS GovCloud (US) and China, as destinations for the cross-Region event-bus-as-a-target functionality launched in April 2021 (initially launched with three destination Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland)). This allows customers to consolidate events from any Region into one central Region, making it easier to centralize events for auditing and monitoring purposes or to replicate events from source to destination Regions to help synchronize data across Regions.

    EventBridge is a serverless event bus that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up routing rules to determine where to send your data, allowing for application architectures to react to changes in your data and systems as they occur. Amazon EventBridge can make it easier to build event-driven applications by facilitating event ingestion, delivery, security, authorization, and error handling. 

    With cross-Region event bus target support, customers can now have their event information in the destination Region. This makes it easier for developers to find those events, write code that reacts to them, and generate insights from events produced across the organization.

    To enable cross-Region event bus targets, specify the Amazon Resource Name (ARN) of the event bus in the destination Region, as in the sketch below. You can use this to send events within the same account or to a different account across Regions. You can get started using the AWS Management Console or the APIs. Cross-Region invocations are billed at the custom events rate; to learn about pricing, please visit the EventBridge pricing page.
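
    A minimal boto3 sketch of adding a cross-Region event bus target (the rule name, ARNs, and role are placeholders; the role must allow events:PutEvents on the destination bus):

        import boto3

        events = boto3.client("events")

        # Route matching events to an event bus in another Region.
        events.put_targets(
            Rule="forward-to-central-bus",
            Targets=[
                {
                    "Id": "central-bus",
                    "Arn": "arn:aws:events:us-east-1:123456789012:event-bus/central-audit-bus",
                    "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeCrossRegionRole",
                }
            ],
        )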

    You can now send events between any of the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Canada (Central), Europe (Stockholm), Europe (Paris), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Milan), Middle East (Bahrain), Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Sydney), and South America (São Paulo).

    To learn more:

  • Read Stephen Liedig’s blog post
  • Visit the Amazon EventBridge page
  • Read about cross-Region support in the Amazon EventBridge Developer Guide

    » Announcing AWS Graviton2-based instances for Amazon Neptune

    Posted On: Nov 22, 2021

    Today, Amazon Neptune announces the general availability of general-purpose T4g and memory-optimized R6g database instances powered by the AWS Graviton2 processor. AWS Graviton2-based instances deliver up to 40% better price performance over comparable current generation x86-based instances for a variety of workloads. Customers running graph workloads using Apache TinkerPop Gremlin, openCypher, or W3C SPARQL 1.1 query languages can expect to see significant improvements in query latency at a lower cost in comparison to x86-based instances of equivalent instance size.

    AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver several performance optimizations over first-generation AWS Graviton processors. This includes 7x the performance, 4x the number of compute cores, and 5x faster memory. Additionally, the AWS Graviton2 processors feature always-on fully encrypted DDR4 memory and 50% faster per core encryption performance. Customers can provision low-cost burstable performance workloads using the new T4g instance, ideal for development and testing use cases. The memory-optimized R6g instance provides up to 64 vCPUs, 25 Gbps of enhanced networking, and 19 Gbps of EBS bandwidth, ideal for production database workloads. Both instance types are part of the AWS Nitro System, a collection of AWS-designed hardware and software innovations that streamline the delivery of isolated multi-tenancy, private networking, and fast local storage.

    You can launch Graviton2 T4g and R6g Neptune DB clusters using the AWS Management Console or using the AWS CLI. Upgrading a Neptune DB cluster to Graviton2 requires a simple instance type modification for Neptune engine version 1.1.0.0 and higher, using the same steps as any other instance modification. Your applications will continue to work as normal and you will not have to port application code. For more details on how to modify the instance type, see Amazon Neptune User Guide. For more information on pricing and regional availability, refer to the Amazon Neptune pricing page.
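
    A minimal boto3 sketch of that modification (the instance identifier is a placeholder):

        import boto3

        neptune = boto3.client("neptune")

        # Move an existing Neptune instance to a Graviton2 instance class;
        # requires Neptune engine version 1.1.0.0 or later.
        neptune.modify_db_instance(
            DBInstanceIdentifier="my-neptune-instance",
            DBInstanceClass="db.r6g.2xlarge",
            ApplyImmediately=True,
        )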

    » Amazon ElastiCache now supports T4g Graviton2-based instances

    Posted On: Nov 22, 2021

    Amazon ElastiCache now supports the AWS Graviton2-based T4g instance family in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (Northern California), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Frankfurt), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Canada (Central), and mainland China (Ningxia, Beijing). Customers choose ElastiCache for workloads that require accelerated performance with microsecond latency and high throughput. T4g instances are ideal for running applications with moderate CPU usage that experience temporary spikes in usage.

    AWS Graviton2 processors are custom built by Amazon using 64-bit Arm Neoverse cores to deliver the best price performance for your cloud workloads. T4g instances leverage the Amazon Nitro System and ENA (Elastic Network Adapter). They are a burstable general-purpose instance type that provides a baseline level of CPU performance with the ability to burst CPU usage.

    AWS Graviton2-based T4g instances are available for Amazon ElastiCache for Redis and for Memcached, enabling a seamless upgrade from previous-generation instances. For more information on pricing and regional availability, please refer to the Amazon ElastiCache pricing page. To get started and upgrade your existing instances, review the Amazon ElastiCache documentation.

    » You can now securely connect to your Amazon MSK clusters over the internet

    Posted On: Nov 22, 2021

    Amazon Managed Streaming for Apache Kafka (Amazon MSK) now offers an option to securely connect to Amazon MSK clusters over the internet. By enabling public access, authorized clients external to a private Amazon Virtual Private Cloud (VPC) can stream encrypted data in and out of specific Amazon MSK clusters. You can enable public access for MSK clusters at no additional cost, but standard AWS data transfer costs for cluster ingress and egress apply.

    You can enable public accessibility in a few clicks after a cluster has been created, and the feature is supported in all AWS regions where Amazon MSK is available. Public accessibility requires clients to encrypt traffic using TLS and to authenticate with MSK clusters using IAM Access Control, SASL/SCRAM, or mutual TLS authentication. To learn how to get started with Amazon MSK and public access, visit the Amazon MSK developer guide.
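
    One plausible boto3 sketch of enabling public access on an existing cluster via the UpdateConnectivity API (the cluster ARN is a placeholder; the cluster’s current version is looked up first):

        import boto3

        kafka = boto3.client("kafka")

        arn = "arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc-123"  # placeholder
        current = kafka.describe_cluster(ClusterArn=arn)["ClusterInfo"]["CurrentVersion"]

        # Turn on public access for the cluster.
        kafka.update_connectivity(
            ClusterArn=arn,
            CurrentVersion=current,
            ConnectivityInfo={"PublicAccess": {"Type": "SERVICE_PROVIDED_EIPS"}},
        )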

    Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easy for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative applications and less time managing clusters.

    » AWS Lambda launches the metric OffsetLag for Amazon MSK and Self-managed Kafka

    Posted On: Nov 22, 2021

    AWS Lambda has launched a new metric, OffsetLag, to monitor the performance of Amazon MSK and self-managed Kafka event sources. Until now, Lambda users did not have visibility into how polling was progressing and increasingly had to rely on the Lambda support team to resolve delays in processing, leading to inefficiencies in data streaming. The OffsetLag metric measures the total number of messages waiting in the message queue to be sent to the target Lambda function, providing transparency into the amount of data congestion in a message queue. Developers can monitor the performance of events, set alarms and thresholds to check for undesirable congestion, and quickly diagnose and resolve inefficiencies in their data stream.
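
    For instance, a hedged boto3 sketch of an alarm on the new metric (the function name, threshold, and SNS topic are placeholders):

        import boto3

        cloudwatch = boto3.client("cloudwatch")

        # Alarm when the consumer falls more than 10,000 messages behind.
        cloudwatch.put_metric_alarm(
            AlarmName="kafka-consumer-falling-behind",
            Namespace="AWS/Lambda",
            MetricName="OffsetLag",
            Dimensions=[{"Name": "FunctionName", "Value": "my-kafka-consumer"}],
            Statistic="Maximum",
            Period=300,
            EvaluationPeriods=3,
            Threshold=10000,
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
        )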

    To learn more about OffsetLag and how it is calculated, check the Lambda Developer Guide. There is no additional cost for using this feature beyond the standard price for AWS Lambda.

    » Amazon RDS on AWS Outposts now supports backups on AWS Outposts

    Posted On: Nov 22, 2021

    Amazon Relational Database Service (Amazon RDS) on AWS Outposts now supports creating backups locally on AWS Outposts with Amazon S3 support. You can create backups of your Amazon RDS databases running on AWS Outposts to the same Outpost or to the AWS Region of your Outpost, allowing you to maintain your data residency requirements while giving you flexibility for maintaining your data recovery solutions. CloudFormation support will be coming soon.
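
    A rough boto3 sketch of opting a new instance into local backups, assuming the BackupTarget parameter announced with this feature (all identifiers, credentials, and sizing are placeholders):

        import boto3

        rds = boto3.client("rds")

        # Keep automated backups on the Outpost itself rather than in the
        # parent Region.
        rds.create_db_instance(
            DBInstanceIdentifier="my-outpost-db",
            Engine="mysql",
            DBInstanceClass="db.m5.large",
            AllocatedStorage=100,
            MasterUsername="admin",
            MasterUserPassword="choose-a-strong-password",
            DBSubnetGroupName="my-outpost-subnet-group",  # subnet group on the Outpost
            BackupRetentionPeriod=7,
            BackupTarget="outposts",
        )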

    In your monthly bill, Amazon RDS backups to an Outpost are billed the same as backup storage in an AWS region. This means that there is no additional charge for backup storage, up to 100% of your total database storage, within a region, including all of the Outposts associated with that region.

    For more information, see the Amazon RDS on Outposts Pricing page and our RDS Outposts documentation. Get started backing up your Amazon RDS database instances on Outposts on the Amazon RDS Management Console.

    » Announcing preview for write queries with Amazon Redshift Concurrency Scaling

    Posted On: Nov 22, 2021

    Amazon Redshift now scales write queries with Concurrency Scaling. Concurrency Scaling supports virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. Now your write queries such as COPY, INSERT, UPDATE, and DELETE can run on transient Concurrency Scaling clusters when there is queueing.

    If you are currently using Concurrency Scaling, this new capability is automatically enabled in your cluster. You can monitor your Concurrency Scaling usage using Amazon Redshift Console and get alerts on usage exceeding your defined limits. You can also create, modify, and delete usage limits programmatically by using the AWS CLI and API.
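
    A minimal boto3 sketch of creating such a usage limit (the cluster identifier and the cap are placeholders):

        import boto3

        redshift = boto3.client("redshift")

        # Cap Concurrency Scaling usage at 60 minutes per day and emit a
        # CloudWatch metric when the limit is breached.
        redshift.create_usage_limit(
            ClusterIdentifier="my-redshift-cluster",
            FeatureType="concurrency-scaling",
            LimitType="time",
            Amount=60,
            Period="daily",
            BreachAction="emit-metric",
        )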

    This preview of Amazon Redshift Concurrency Scaling is available in limited regions. To learn more about regional availability, please see this documentation.

    » Amazon ECS-optimized AMI is now available as an open-source project

    Posted On: Nov 22, 2021

    Amazon Elastic Container Service (Amazon ECS) today open-sourced the build scripts that Amazon ECS uses to build the Amazon ECS-optimized Amazon Machine Image (AMI). These build scripts are now available on GitHub as an open-source project under the Apache License 2.0. Customers can use these build scripts to build custom AMIs with security, monitoring, and compliance controls based on their organization’s requirements while using the same components as the Amazon ECS-optimized AMI.

    The Amazon ECS-optimized AMI repository includes templates and scripts to generate an AMI. These scripts are the source of truth for Amazon ECS-optimized AMI builds, so customers can follow the GitHub repository to monitor changes to the Amazon ECS-optimized AMIs. The Amazon ECS Container Agent, which is responsible for managing containers on behalf of Amazon ECS, also continues to be developed as an open-source project on GitHub.

    The Amazon ECS-optimized Amazon Linux AMI is built on top of Amazon Linux 2. Customers can view the source image name by querying the AWS Systems Manager Parameter Store API. Customers can get started using the build scripts that we are releasing today to build custom AMIs in all public AWS regions by following the steps in the blog or by reviewing our documentation.
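
    For example, a minimal boto3 sketch of resolving the current Amazon ECS-optimized Amazon Linux 2 AMI through Parameter Store:

        import boto3

        ssm = boto3.client("ssm")

        # Look up the AMI ID behind the current Amazon ECS-optimized
        # Amazon Linux 2 AMI.
        param = ssm.get_parameter(
            Name="/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
        )
        print(param["Parameter"]["Value"])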

    » Amazon Athena adds console support for visualizing AWS Step Functions workflows

    Posted On: Nov 22, 2021

    You can now manage AWS Step Functions workflows from the Amazon Athena console, making it easier to build scalable data processing pipelines, execute queries based on custom business logic, automate administrative and alerting tasks, and more.

    Amazon Athena is an interactive query service that enables you to analyze data in Amazon S3 using SQL. AWS Step Functions is a low-code visual workflow service used to orchestrate AWS services, automate business processes, and build serverless applications.

    You can use Athena and Step Functions to build distributed data processing pipelines where Athena processes the data and Step Functions orchestrates the workflow across multiple AWS services such as AWS Glue, Amazon S3, Amazon Kinesis Data Firehose, and AWS Lambda. And with the recent addition of AWS SDK integrations in Step Functions, which expanded the number of supported AWS services from 17 to over 200 and AWS API actions from 46 to over 9,000, you can now develop even more sophisticated workflows.

    Step Functions is now integrated with Athena’s upgraded console, and you can use it to view an interactive workflow diagram of your State Machines that invoke Athena. To get started, select Workflows from the left navigation panel. If you have existing state machines with Athena queries, select a state machine to view an interactive diagram of the workflow. If you are new to Step Functions, you can get started by launching a sample project from the Athena console and customizing it later to suit your production use cases.
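
    To make the orchestration concrete, here is a minimal boto3 sketch of a one-state workflow that runs an Athena query through Step Functions’ optimized Athena integration (the role ARN, output location, and query are placeholders):

        import json

        import boto3

        sfn = boto3.client("stepfunctions")

        # A one-state workflow that starts an Athena query and waits for it
        # to finish before the execution completes.
        definition = {
            "StartAt": "RunAthenaQuery",
            "States": {
                "RunAthenaQuery": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::athena:startQueryExecution.sync",
                    "Parameters": {
                        "QueryString": "SELECT COUNT(*) FROM sales",
                        "WorkGroup": "primary",
                        "ResultConfiguration": {
                            "OutputLocation": "s3://my-athena-results/"
                        },
                    },
                    "End": True,
                }
            },
        }

        sfn.create_state_machine(
            name="athena-etl-example",
            roleArn="arn:aws:iam::123456789012:role/StepFunctionsAthenaRole",
            definition=json.dumps(definition),
        )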

    This feature is available in all regions where both AWS Step Functions and Amazon Athena’s redesigned console are available. View the AWS Regions table for details.

    To see Step Functions and Athena in action, see Build and orchestrate ETL pipelines using Amazon Athena and AWS Step Functions, or consult the Step Functions documentation.

    » Introducing two new Amazon EC2 bare metal instances

    Posted On: Nov 22, 2021

    Starting today, Amazon EC2 M6i and C6i bare metal instances are available. M6i and C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better compute price performance over M5 and C5 instances respectively, and feature always-on memory encryption using Intel Total Memory Encryption (TME). M6i instances are well suited for workloads such as web and application servers, back-end servers supporting enterprise applications, gaming servers, caching fleets, as well as for application development environments. C6i instances are well suited for compute-intensive applications like batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

    Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted business critical applications. They provide your applications with direct access to the 3rd generation Intel Xeon Scalable processor and memory resources of the underlying server. Workloads on bare metal instances continue to take advantage of all the comprehensive services and features of the AWS Cloud, such as Amazon Elastic Block Store (EBS), Elastic Load Balancing (ELB), and Amazon Virtual Private Cloud (VPC). Bare metal instances also make it possible for customers to run secured containers such as Clear Linux Containers.

    M6i metal and C6i metal instances come with 128 vCPUs, and 512 GiB and 256 GiB of memory respectively. They also support 50 Gbps of networking speed and 40 Gbps of bandwidth for Amazon EBS. Customers can use Elastic Fabric Adapter on these instances, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver, see this article.

    M6i metal instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), and South America (São Paulo). C6i metal instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland).

    These new bare metal instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. For more information visit the M6i instances and C6i instances pages.

    » AWS Database Migration Service now supports Kafka multi-topic

    Posted On: Nov 22, 2021

    AWS Database Migration Service (AWS DMS) has expanded functionality by adding support for Kafka multi-topic with a single task. Using AWS DMS, you can now replicate multiple schemas from a single database to different Kafka topics using the same task. This eliminates the need to create multiple separate tasks in situations where many tables from the same source database need to be migrated to different Kafka topics.

    For AWS DMS regional availability, please refer to the AWS Region Table.
    To learn more, see Using Apache Kafka as a target for AWS Database Migration Service.

    » Amazon EC2 Mac Instances now support macOS Monterey

    Posted On: Nov 22, 2021

    Starting today, customers can run macOS Monterey (12.0.1) as Amazon Machine Images (AMIs) on Amazon EC2 Mac instances. macOS Monterey is the current major macOS release from Apple and introduces multiple new capabilities and performance improvements over prior macOS versions. macOS Monterey supports running Xcode versions 13.0 and later, which include the latest SDKs for iOS, iPadOS, macOS, tvOS, and watchOS.

    macOS Monterey AMIs are AWS supported images that are backed by Amazon Elastic Block Store (EBS). These AMIs include the AWS Command Line Interface, Command Line Tools for Xcode, Amazon SSM Agent, and Homebrew. The AWS Homebrew Tap includes the latest versions of AWS packages included in the AMIs.

    EC2 Mac instances enable customers to run on-demand macOS workloads in the AWS cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. With EC2 Mac instances, developers creating apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari can provision and access macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing.

    macOS Monterey AMIs are available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Sydney) regions. Customers can get started with macOS Monterey AMIs via the AWS Console, Command Line Interface (CLI), or API.

    Learn more about EC2 Mac instances here or start a machine today in the AWS Console. You can also subscribe to EC2 macOS AMI release notifications here.

    » Amazon S3 Storage Lens metrics now available in Amazon CloudWatch

    Posted On: Nov 22, 2021

    Amazon S3 Storage Lens, a cloud storage analytics feature for organization-wide visibility into object storage usage and activity, now includes support for Amazon CloudWatch. You can now create a unified view of your operational health to monitor any of your S3 Storage Lens metrics alongside other application metrics using CloudWatch dashboards.

    CloudWatch support in S3 Storage Lens makes it easier to access and take action on S3 Storage Lens metrics. Once enabled, you can add S3 Storage Lens metrics to your customized CloudWatch dashboard to visualize storage trends alongside other operational metrics. You can receive notifications using CloudWatch alarms and triggered actions based on changes in storage usage, such as an increase in incomplete multipart upload bytes in your account. In addition, you can now use the CloudWatch API to develop applications that access S3 Storage Lens metrics, or to enable access via integrated AWS partners.

    Amazon S3 Storage Lens is available in all AWS Regions, as well as the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. S3 Storage Lens advanced metrics and recommendations includes support for CloudWatch at no additional charge. For S3 Storage Lens advanced metrics and recommendations pricing, visit the Amazon S3 pricing page.

    Please refer to the documentation to learn how to turn on CloudWatch support in S3 Storage Lens with just a few clicks in the S3 console, or using the Amazon S3 API or AWS Command Line Interface (CLI). To learn more about creating dashboards and alarms using Amazon CloudWatch, visit the product page.

    » Introducing Amazon EC2 R6i instances

    Posted On: Nov 22, 2021

    Amazon Web Services (AWS) announces the general availability of Amazon EC2 R6i instances. Designed for memory-intensive workloads, R6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. R6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better compute price performance over R5 instances, and feature always-on memory encryption using Intel Total Memory Encryption (TME). These instances are SAP-certified and are ideal for workloads such as SQL and NoSQL databases, distributed web-scale in-memory caches like Memcached and Redis, in-memory databases like SAP HANA, and real-time big data analytics like Hadoop and Spark clusters.

    To meet customer demand for increased scalability, R6i instances provide two new sizes (32xlarge and metal) with 128 vCPUs and 1,024 GiB of memory, 33% more than the largest R5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to R5 instances. R6i instances give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store, 2x that of R5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge and metal sizes, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for R6i, see this article.

    R6i instances are generally available today in AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland). R6i instances are available in 10 sizes with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs in addition to the bare metal option. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the R6i instances page.

    » Amazon ECS for Windows now supports ECS Exec

    Posted On: Nov 22, 2021

    Amazon Elastic Container Service (Amazon ECS) now supports Amazon ECS Exec for workloads running on Windows operating systems. Amazon ECS Exec, launched in March 2021, makes it easier for customers to troubleshoot errors, collect diagnostic information, interact with processes in containers during development, or get “break-glass” access to containers to debug critical issues encountered in production.

    AWS customers running Windows-based containerized applications often need to run commands on a subset of containers. Today, they do this by logging in to the Amazon EC2 instance the container is running on, which can raise concerns related to audit, access control, and batch processing, among others. Amazon ECS Exec gives customers interactive shell or single-command access to a running container, making it easier to debug issues, diagnose errors, collect one-off dumps and statistics, and interact with processes in the container.
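
    A minimal boto3 sketch of requesting such a session (cluster, task, and container names are placeholders; in practice the AWS CLI wraps this call and the Session Manager plugin renders the interactive session):

        import boto3

        ecs = boto3.client("ecs")

        # Request an interactive PowerShell session in a running Windows container.
        ecs.execute_command(
            cluster="windows-cluster",
            task="arn:aws:ecs:us-east-1:123456789012:task/windows-cluster/abc123",
            container="app",
            interactive=True,
            command="powershell.exe",
        )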

    Amazon ECS Exec for Windows is now available at no additional cost in all AWS Regions globally. This feature is supported on Amazon ECS-optimized Windows AMIs (2019 and onwards) running on Amazon Elastic Compute Cloud (Amazon EC2) instances with Network Address Translation (NAT) and task networking. Visit our documentation page or read more in the blog post about running commands in a Windows container using Amazon ECS Exec from the API, AWS Command Line Interface (CLI), or AWS SDKs.

    » Amazon MemoryDB for Redis now supports AWS Graviton2-based T4g instances and a 2-month Free Trial

    Posted On: Nov 22, 2021

    Amazon MemoryDB for Redis now supports AWS Graviton2-based T4g instances. T4g is the next-generation burstable general-purpose DB instance type that provides a baseline level of CPU performance, with the ability to burst CPU usage at any time for as long as required. This instance type offers a balance of compute, memory, and network resources for a broad spectrum of general-purpose workloads.

    You can now try Amazon MemoryDB for Redis with a 2-month free trial. All existing and new AWS customers are eligible for 750 hours of a T4g.small instance per month for two months. Additionally, under the free trial you can write up to 20 GB of data per month for two months. AWS will automatically deduct the instance hours and data written charges from your AWS bill each month.
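
    A minimal boto3 sketch of launching a trial-eligible cluster (the cluster and ACL names are placeholders):

        import boto3

        memorydb = boto3.client("memorydb")

        # Create a small T4g cluster eligible for the free trial.
        memorydb.create_cluster(
            ClusterName="my-trial-cluster",
            NodeType="db.t4g.small",
            ACLName="open-access",
        )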

    T4g instances and the free trial are available in all AWS regions where Amazon MemoryDB is available. You can launch T4g DB instances and get started with the free trial from the Amazon MemoryDB Console, AWS CLI, or AWS SDK. To learn more about Amazon MemoryDB for Redis, visit the Amazon MemoryDB product page or documentation. Have questions or feature requests? Email us at: memorydb-help@amazon.com.

    » Amazon Connect now supports custom contact attributes as search filters on the contact search page

    Posted On: Nov 22, 2021

    Amazon Connect now supports custom contact attributes as search filters on the contact search page. You can now add up to 15 custom contact attributes to the search filter and use them to build your search queries. For example, if you have created “AgentLocation” as a custom contact attribute, you can now use it as a search criterion and search for contacts handled by agents based in “Seattle” by specifying “Seattle” as the target value. To learn more, see the Contact Search documentation.

    Searching for contacts with custom attributes is available in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

    » Amazon CloudWatch Lambda Insights now supports AWS Lambda functions powered by AWS Graviton2 Processor (General Availability)

    Posted On: Nov 22, 2021

    You can now use Amazon CloudWatch Lambda Insights to monitor, troubleshoot, and optimize the performance of AWS Lambda functions powered by AWS Graviton2 processor. With CloudWatch Lambda Insights you have access to automated dashboards summarizing the performance and health of your Lambda functions.

    Lambda functions running on Arm-based AWS Graviton2 processors are designed to deliver up to 19% better performance at 20% lower cost compared to Lambda functions running on x86, across a variety of serverless workloads such as web and mobile backends and data and media processing. With lower latency and better performance, Lambda functions powered by AWS Graviton2 processors are ideal for powering mission-critical serverless applications. Customers can update existing x86-based functions to use the AWS Graviton2 processor or create new functions powered by AWS Graviton2 using the Console, API, AWS CloudFormation, or AWS CDK. AWS Lambda Layers can also target x86-based or Arm-based functions, using either zip files or container images. Now with CloudWatch Lambda Insights you can collect detailed performance metrics, logs, and metadata from these Lambda functions on Graviton2, providing visibility into issues such as memory leaks or performance changes caused by new function versions.

    To get started with CloudWatch Lambda Insights for Lambda functions powered by the AWS Graviton2 processor using the AWS Management Console, AWS CLI, or CloudFormation, see the CloudWatch Lambda Insights page; the capability is now available in all standard AWS Regions. You only pay for what you use for metrics and logs. See the CloudWatch pricing page for a pricing example.
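
    Lambda Insights is enabled by attaching the Lambda Insights extension layer to a function, and Arm-based functions use an Arm64 build of the layer. A minimal boto3 sketch follows; the layer ARN and version vary by Region, so confirm the current ARN in the Lambda Insights documentation, and note that the function's execution role also needs the CloudWatchLambdaInsightsExecutionRolePolicy managed policy:

        import boto3

        lam = boto3.client("lambda", region_name="us-east-1")

        # Attach the Arm64 Lambda Insights extension layer to a Graviton2 function.
        # The layer version (":2") is illustrative; look up the latest per-Region ARN.
        lam.update_function_configuration(
            FunctionName="my-arm64-function",
            Layers=["arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension-Arm64:2"],
        )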

    » Introducing spelling support in Amazon Lex

    Posted On: Nov 19, 2021

    Customer support conversations often require the caller to provide inputs such as first name and account ID so the agent can verify the information before handling customer requests. Starting today, you can configure your Amazon Lex bots to capture the spelling (e.g., “Z A C”) or the phonetic description (e.g., Z as in Zebra, A as in Apple, C as in Cat) for the first name, last name, email address, alphanumeric and UK postal code built-in slot types. Callers can use the spelling support to provide names with difficult or alternative spellings (e.g., “Chris” vs. “Kris”). They can disambiguate confusable letters such as “N” vs. “M” by using phonetic descriptions (e.g., to spell the name, Min: “M as in Mary, I as in Idea, N as in Nancy”). The spelling capability expands on the built-in slot types so you can simplify the dialog management and improve the end-user experience.
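
    In Amazon Lex V2, the spelling style is requested per slot elicitation. As a sketch, a Lambda code hook (or a PutSession call) can set slotElicitationStyle when eliciting a slot; the intent and slot names below are hypothetical:

        # Fragment of a Lex V2 Lambda code-hook response that elicits the
        # "FirstName" slot in spell-by-letter mode (names are illustrative).
        response = {
            "sessionState": {
                "dialogAction": {
                    "type": "ElicitSlot",
                    "slotToElicit": "FirstName",
                    "slotElicitationStyle": "SpellByLetter",  # or "SpellByWord"
                },
                "intent": {"name": "VerifyIdentity", "state": "InProgress"},
            }
        }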

    The spelling support is available on English (US) and English (UK) languages in all the AWS regions where Amazon Lex V2 operates. To learn more, visit the Amazon Lex documentation page.

    » Amazon Athena accelerates queries with AWS Glue Data Catalog partition indexes

    Posted On: Nov 19, 2021

    Today, we're excited to announce that Amazon Athena supports AWS Glue Data Catalog partition indexes to optimize query planning and reduce query runtime. When you query a table containing a large number of partitions, Athena retrieves the available partitions from the AWS Glue Data Catalog and determines which are required by your query. As new partitions are added, the time needed to retrieve the partitions increases and can cause query runtime to increase. AWS Glue Data Catalog allows customers to create partition indexes, which reduce the time required to retrieve and filter partition metadata on tables with tens or hundreds of thousands of partitions.

    Using partition indexes with Athena is a simple, two-step process. First, select the columns you want to index in the Glue Data Catalog and create the index. Next, enable partition filtering on your tables and return to Athena to run your query. For more information, see AWS Glue Partition Indexing and Filtering.
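
    A boto3 sketch of both steps (database, table, and partition key names are placeholders):

        import boto3

        glue = boto3.client("glue", region_name="us-east-1")

        # Step 1: index the partition keys your queries filter on most often.
        glue.create_partition_index(
            DatabaseName="sales_db",
            TableName="events",
            PartitionIndex={"IndexName": "year-month-index", "Keys": ["year", "month"]},
        )

        # Step 2: enable partition filtering so Athena uses the index at query time.
        table = glue.get_table(DatabaseName="sales_db", Name="events")["Table"]
        table["Parameters"] = {**table.get("Parameters", {}),
                               "partition_filtering.enabled": "true"}
        for key in ("DatabaseName", "CreateTime", "UpdateTime", "CreatedBy",
                    "IsRegisteredWithLakeFormation", "CatalogId", "VersionId"):
            table.pop(key, None)  # drop read-only fields before updating
        glue.update_table(DatabaseName="sales_db", TableInput=table)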

    Partition indexes are supported on new and existing tables so you don’t need to rebuild datasets or re-write queries to unlock the performance benefits. To learn more, see Improve Amazon Athena query performance using AWS Glue Data Catalog partition indexes.

    Partition indexes also benefit the analytics workloads running on Amazon EMR, Amazon Redshift Spectrum, and AWS Glue in addition to Amazon Athena. To learn more, see Improve query performance using AWS Glue partition indexes.

    » Amazon AppStream 2.0 launches Elastic fleets, a serverless fleet type

    Posted On: Nov 19, 2021

    Starting today, Amazon AppStream 2.0 introduces Elastic fleets, a serverless fleet type that lets you stream applications to your end users from an AWS-managed pool of streaming instances without needing to predict usage, create and manage scaling policies, or create an image. Elastic fleets are designed for customers that want to stream applications to users without managing any capacity or creating AppStream 2.0 images.

    Elastic fleet streaming instances rely on applications that are installed to virtual hard disks saved within an Amazon Simple Storage Service (S3) bucket in your account. When your user chooses an application to launch, the virtual hard disk is downloaded to an Elastic fleet streaming instance and launched. With Elastic fleets, you simply install your applications to virtual hard disks, upload them to an S3 bucket in your account, then assign them to a new Elastic fleet to start streaming applications to your users. You no longer need to create scaling policies, or create and manage any AppStream 2.0 images. The billing rate is determined by the instance type, size, and operating system that you choose when creating the fleet, and you’re charged per second only for the duration of your users’ streaming sessions. For each end user who launches a streaming session on a Microsoft Windows Server-based fleet instance, you will be charged a Microsoft RDS SAL fee for the month in which the streaming session occurred.

    Elastic fleets are designed for applications that can be run from virtual hard disks without being configured, and are optimized for use cases with sporadic usage patterns such as software trials, training sessions, and sales demos, or converting a desktop application to a software-as-a-service delivery model.

    To get started, install your application to a virtual hard disk, then upload it to an S3 bucket. Once your application is packaged, you can create a new Elastic fleet and associate it to your application. To learn more, see Create and Manage App Blocks and Applications for Elastic Fleets and Create a Fleet in the Amazon AppStream 2.0 Administration Guide.
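
    A hedged boto3 sketch of the fleet-creation step, after the app block and application already exist (names, ARNs, and network identifiers are placeholders):

        import boto3

        appstream = boto3.client("appstream", region_name="us-east-1")

        # Elastic fleets have no capacity settings; usage is capped with
        # MaxConcurrentSessions instead of scaling policies.
        appstream.create_fleet(
            Name="demo-elastic-fleet",
            FleetType="ELASTIC",
            InstanceType="stream.standard.medium",
            Platform="AMAZON_LINUX2",
            MaxConcurrentSessions=25,
            VpcConfig={
                "SubnetIds": ["subnet-0123456789abcdef0"],
                "SecurityGroupIds": ["sg-0123456789abcdef0"],
            },
        )

        # Associate a previously created application with the fleet.
        appstream.associate_application_fleet(
            FleetName="demo-elastic-fleet",
            ApplicationArn="arn:aws:appstream:us-east-1:123456789012:application/demo-app",
        )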

    You can create Elastic fleets in all AWS Regions where AppStream 2.0 is offered. AppStream 2.0 Elastic fleets offer pay-as-you-go pricing. Please see Amazon AppStream 2.0 Pricing for more information, and try our sample applications.

    » Amazon Connect CTI Adapter for Salesforce supports ML-based voice authentication

    Posted On: Nov 19, 2021

    The Amazon Connect Computer Telephony Integration (CTI) Adapter for Salesforce now simplifies the contact center authentication procedure with the integration of Amazon Connect Voice ID to make voice interactions faster and more secure. Amazon Connect Voice ID analyzes a caller's unique voice characteristics using machine learning to help verify identity in real time and display a confidence score and status within the Contact Control Panel (CCP) in the CTI Adapter. Using CTI Actions and Flows, you can automate fraud case creation or route the call to fraud agents based on the outcome of the Voice ID interaction.

    The CTI Adapter integrates Amazon Connect with Salesforce Service Cloud to build innovative customer experiences such as an automated AI customer experience with CRM agent routing. To get started on the Amazon Connect CTI Adapter for Salesforce, see the help documentation. You can find out more about Amazon Connect, an easy to use omnichannel cloud contact center, by visiting the Amazon Connect website.

    » Amazon Forecast announces new APIs that create up to 40% more accurate forecasts and provide explainability

    Posted On: Nov 19, 2021

    We’re excited to announce two new forecasting APIs for Amazon Forecast that generate up to 40% more accurate forecasts and help you understand which factors, such as price, holidays, weather, or item category, are most influencing your forecasts. Forecast uses machine learning (ML) to generate more accurate demand forecasts, without requiring any ML experience. Forecast brings the same technology used at Amazon to developers as a fully managed service, removing the need to manage resources.

    With today’s launch of the new CreateAutoPredictor API, Forecast can now deliver up to 40% more accurate results by using a combination of ML algorithms that are best suited for your data. In many scenarios, ML experts train separate models for different parts of their dataset to improve forecasting accuracy. This process of segmenting your data and applying different algorithms can be very challenging for non-ML experts. Forecast uses ML to learn not just the single best algorithm, but the best ensemble of algorithms for each item, leading to up to 40% better forecast accuracy.

    Previously, you would have to train your entire forecasting model again if you were bringing in recent data to use the latest insights before forecasting for the next period. This can be a time-consuming process. Most Forecast customers deploy their forecasting workflows within their operations such as an inventory management solution and run their operations at a set cadence. Because retraining on the entire data can be time-consuming, customer operations may get delayed. With today’s launch, you can save up to 50% of retraining time by selecting to incrementally retrain your AutoPredictor models with the new information that you have added.

    Lastly, an AutoPredictor forecasting model also helps with model explainability. To further increase forecast model accuracy, you can add additional information or attributes such as price, promotion, category details, holidays, or weather information, but you may not know how each attribute influences your forecast. With today’s launch, Forecast now helps you understand and explain how your forecasting model is making predictions by providing explainability reports after your model has been trained. Explainability reports include impact scores, so you can understand how each attribute in your training data contributes to either increasing or decreasing your forecasted values. By understanding how your model makes predictions, you can make more informed business decisions. Additionally, using the new CreateExplainability API, Amazon Forecast now provides granular item-level explainability insights across specific items and a time duration of your choice. Better understanding why a particular forecast value is high or low at a particular time is helpful for decision making and for building trust and confidence in your ML solutions. Explainability removes the need to run multiple manual analyses to understand past sales and external variable trends to explain forecast results.

    To get more accurate forecasts, faster retraining, and model explainability, read our blog or follow the steps in this notebook in our GitHub repo. If you want to upgrade your existing forecasting models to the new CreateAutoPredictor API, you can do so with one click either through the console or as shown in the notebook in our GitHub repo. To learn more, review Training Predictors. To get item level explainability insights, read our blog and follow this notebook in our GitHub repo. You can also review Forecast Explainability or CreateExplainability API.
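
    A minimal boto3 sketch of training an AutoPredictor with explainability enabled (the predictor name, horizon, and dataset group ARN are placeholders):

        import boto3

        forecast = boto3.client("forecast", region_name="us-east-1")

        # ExplainPredictor=True also generates aggregate impact scores
        # once training completes.
        forecast.create_auto_predictor(
            PredictorName="demo_auto_predictor",
            ForecastHorizon=14,
            ForecastFrequency="D",
            DataConfig={
                "DatasetGroupArn": "arn:aws:forecast:us-east-1:123456789012:dataset-group/demo"
            },
            ExplainPredictor=True,
        )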

    These launches are accompanied with new pricing, which you can review at Amazon Forecast pricing. You can use these new capabilities in all Regions where Amazon Forecast is publicly available. For more information about region availability, see AWS Regional Services.

    » Amazon Connect CTI Adapter for Salesforce supports Wisdom to quickly solve customer issues

    Posted On: Nov 19, 2021

    The Amazon Connect Computer Telephony Integration (CTI) Adapter for Salesforce now helps reduce the time agents spend searching for answers with the integration of Amazon Connect Wisdom. Previously, agents needed to spend valuable time manually searching across data sources for information to solve customer issues and were unable to help customers quickly. With Wisdom, agents can search for terms such as “what is the pet policy in hotel rooms” across connected repositories, including Salesforce knowledge bases, right from inside the CTI Adapter. When used with Contact Lens real-time analytics, Wisdom is designed to detect customer issues during calls and proactively provide knowledge article recommendations in real time. The Wisdom widget can be configured within the agent’s Contact Control Panel (CCP) or alongside the agent’s Salesforce Lightning screen layout for cases, contacts, and accounts.

    The CTI Adapter easily integrates Amazon Connect with Salesforce Service Cloud to build innovative customer experiences such as an automated AI customer experience with CRM agent routing. To get started on the Amazon Connect CTI Adapter for Salesforce, see the help documentation. You can find out more about Amazon Connect, an easy to use omnichannel cloud contact center, by visiting the Amazon Connect website.

    » Amazon Linux 2 AMI is now available with kernel 5.10

    Posted On: Nov 19, 2021

    Amazon Linux 2 is now available with an updated Linux kernel (5.10) as an Amazon Machine Image (AMI). Kernel 5.10 brings a number of features and performance improvements, including optimizations for Intel Ice Lake processors and AWS Graviton2 processors powering the latest generation Amazon EC2 instances. Live patching for Kernel 5.10 is supported in Amazon Linux 2 for both x86 and ARM architectures.

    The updated kernel 5.10 includes various security features, including WireGuard VPN, which helps set up virtual private networks with a low attack surface and allows encryption with less overhead compared to alternatives. The updated kernel brings a kernel lockdown feature to prevent unauthorized modification of the kernel image, and a number of BPF improvements, including CO-RE (Compile Once - Run Everywhere). Customers will benefit from improved write performance and throughput, and support for the exFAT file system for better compatibility with storage devices. In addition, with the availability of MultiPath TCP (MPTCP), customers with several network interfaces can combine all available network paths to increase throughput and reduce network failures.

    We recommend you use Amazon Linux 2 with kernel 5.10 when launching new instances to benefit from new features and performance improvements. The previous kernel version (4.14) will continue to be supported until the end-of-life date for Amazon Linux 2 (June 2023).

    You can launch Amazon Linux 2 with kernel 5.10 from the AWS Management Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, RunInstances, or via an AWS CloudFormation template. To learn more about Amazon Linux 2, please refer to the documentation.
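
    For example, the latest kernel 5.10 AMI can be resolved through its public SSM parameter and launched with boto3 (the parameter path below is the x86_64/gp2 variant at the time of writing; the instance type is a placeholder):

        import boto3

        ssm = boto3.client("ssm", region_name="us-east-1")
        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Resolve the latest Amazon Linux 2 kernel 5.10 AMI ID.
        ami_id = ssm.get_parameter(
            Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-5.10-hvm-x86_64-gp2"
        )["Parameter"]["Value"]

        ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)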

    » Amazon Pinpoint now includes an SMS simulator feature

    Posted On: Nov 19, 2021

    Amazon Pinpoint now includes an SMS simulator feature that you can use to test how your application handles different SMS sending scenarios. With this feature, you can simulate deliveries by sending SMS messages to a destination phone number that Amazon Pinpoint provides. This enables you to see examples of message delivery receipts per destination country without owning a destination phone number for that country. You can use the SMS simulator to test that your application’s logic is functioning as intended through simulated successful or failed sending. You can also use this feature to test your origination identity throughput without impacting your quota.

    You can send to these destination phone numbers that Pinpoint provides as you would with any real destination phone number (using the Send Message API or sending a test message through the Pinpoint console). Messages sent to these destination phone numbers are designed to stay within Pinpoint, so they are not sent over the carrier network. You can use any of your origination identities to send to these destination phone numbers in the same country. When you send a message, Pinpoint does the same sending check as it would for any real destination phone number. For example, Pinpoint checks that the origination identities are supported for the destination country; if they are, Pinpoint sends at the correct throughput for that origination identity. You can send to these destination phone numbers even if your account is in the SMS sandbox.
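
    A boto3 sketch of sending to a simulator destination (the project ID and phone number are placeholders; use a simulator number from the Pinpoint documentation for the scenario you want to test):

        import boto3

        pinpoint = boto3.client("pinpoint", region_name="us-east-1")

        pinpoint.send_messages(
            ApplicationId="a1b2c3d4e5f6",  # your Pinpoint project ID
            MessageRequest={
                "Addresses": {"+14255550123": {"ChannelType": "SMS"}},
                "MessageConfiguration": {
                    "SMSMessage": {"Body": "Test message", "MessageType": "TRANSACTIONAL"}
                },
            },
        )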

    For more information about the SMS simulator feature, see SMS Simulator in the Amazon Pinpoint User Guide.

    » AWS Amplify UI launches new Authenticator component for React, Angular, and Vue

    Posted On: Nov 19, 2021

    With today’s release, AWS Amplify offers a new Authenticator UI component for web apps built with React, Angular, and Vue, giving developers the easiest way to add login experiences to their app with a single line of code. The new Authenticator UI component not only gives developers the quickest way to add user login and registration workflows to their apps, but also gives developers complete control over modifying the layout and behavior to match any designs.

    The Authenticator offers a number of new capabilities:

  • Visual refresh with a new layout and theme.
  • Social Sign-in with Facebook, Google, Amazon and Apple using Amazon Cognito User Pools.
  • Zero-config setup with Amplify Admin UI and Amplify CLI - all auth configurations made in the CLI or Admin UI are respected by the default UI without requiring any code modifications.
  • Better forms: Confirm password and show/hide password capabilities, custom sign-up attributes (e.g. birth date, properly displayed as a date picker), password manager support, and accessible error validation. 
  • Un-styled by default, giving developers full control over state, layout, styling, and transitions.

    To get started, please visit our launch blog.

    » Amazon CloudWatch now supports anomaly detection on metric math expressions

    Posted On: Nov 19, 2021

    Amazon CloudWatch now supports anomaly detection based on metric math expressions. Amazon CloudWatch anomaly detection allows you to apply machine-learning algorithms to continuously analyze system and application metrics, determine a normal baseline, and surface anomalies with minimal user intervention. CloudWatch metric math allows you to aggregate and transform metrics to create custom visualizations of your health and performance metrics. Metric math supports basic arithmetic functions such as +,-,/,*, comparison and logical operators such as AND & OR, and a number of additional functions such as RATE and INSIGHT_RULE_METRIC. For example, with AWS Lambda metrics you can divide the Errors metric by the Invocations metric to get an error rate, use anomaly detection to visualize expected values on a metric graph, and create an anomaly detection alarm to dynamically alert you when the value falls outside of the expected range.

    It is easy to get started with anomaly detection for metric math. In the CloudWatch console, go to Alarms in the navigation pane to create an alarm based on anomaly detection, or start with metrics to overlay the math expression’s expected values onto the graph as a band. You can also enable anomaly detection using the AWS Command Line Interface, AWS SDKs, or AWS CloudFormation templates.
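
    A boto3 sketch of the Lambda error-rate example, alarming when the rate leaves the anomaly detection band (the function name and band width are placeholders):

        import boto3

        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

        def lambda_metric(name):
            # Sum of a per-function AWS/Lambda metric over 5-minute periods.
            return {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": name,
                    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
                },
                "Period": 300,
                "Stat": "Sum",
            }

        cloudwatch.put_metric_alarm(
            AlarmName="lambda-error-rate-anomaly",
            ComparisonOperator="GreaterThanUpperThreshold",
            EvaluationPeriods=3,
            ThresholdMetricId="ad1",
            Metrics=[
                {"Id": "errors", "MetricStat": lambda_metric("Errors"), "ReturnData": False},
                {"Id": "invocations", "MetricStat": lambda_metric("Invocations"), "ReturnData": False},
                {"Id": "e1", "Expression": "errors / invocations", "Label": "Error rate",
                 "ReturnData": True},
                {"Id": "ad1", "Expression": "ANOMALY_DETECTION_BAND(e1, 2)", "ReturnData": True},
            ],
        )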

    Anomaly detection is available in all AWS Regions where CloudWatch is available. See the pricing page for more details. To get started please refer to our documentation.

    » You can now submit multiple operations for simultaneous execution with AWS CloudFormation StackSets

    Posted On: Nov 19, 2021

    Today, AWS CloudFormation StackSets announces the capability to submit multiple operations for simultaneous execution. StackSets extends the functionality of CloudFormation stacks by letting you create, update, or delete stacks across multiple AWS accounts and Regions with a single operation. You can now submit more than one operation per stack set to be executed concurrently. This capability enables you to reduce overall processing times with StackSets. Additionally, you can avoid the overhead of building logic to batch and queue operations submitted to StackSets.

    With this launch, you can simultaneously submit several operations that belong to a single stack set for execution. CloudFormation StackSets will concurrently execute non-conflicting operations, such as two operations updating different stack instances. CloudFormation StackSets will queue conflicting operations, such as “update-stack-instance” and “update-stackset”, and process them immediately once the conflict is resolved. You can avoid building and maintaining automation that handles batching and queuing logic to submit individual operations sequentially. Concurrent operational execution improves overall performance when using StackSets.

    To get started, use the CloudFormation console, AWS CLI, or AWS SDKs to create or update a stack set. Set the stack set's “ManagedExecution” property to active to start submitting concurrent operations for your stack set. You can use the StackSets simultaneous execution functionality in all AWS Regions where AWS CloudFormation StackSets is currently available. For more information, please refer to the documentation.
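
    A minimal boto3 sketch of opting an existing stack set into managed (concurrent) execution; the stack set name is a placeholder:

        import boto3

        cfn = boto3.client("cloudformation", region_name="us-east-1")

        cfn.update_stack_set(
            StackSetName="my-stack-set",
            UsePreviousTemplate=True,
            ManagedExecution={"Active": True},  # enable concurrent, conflict-aware execution
        )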

    » AWS App Mesh now supports ARM64-based Envoy Images

    Posted On: Nov 19, 2021

    AWS App Mesh now supports ARM64-based images with Envoy. With App Mesh-optimized ARM64 Envoy images, customers now get enhanced deployment flexibility and platform support to suit their requirements. AWS App Mesh is a service mesh that provides application-level networking to make it easier for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.

    ARM64-based App Mesh Envoy images are supported from Envoy image version v1.20.0.1-prod onward, and cover both AWS ECS/EC2 and AWS EKS/EC2 workloads.

    For more information, please visit the AWS App Mesh product page.

    » AWS Lambda now supports mTLS Authentication for Amazon MSK as an event source

    Posted On: Nov 19, 2021

    AWS Lambda now supports mutual TLS authentication for Amazon MSK and self-managed Kafka as an event source. Customers now have the option to provide a client certificate to establish a trust relationship between AWS Lambda and Amazon MSK or self-managed Kafka brokers that are configured as event sources. Lambda supports self-signed server certificates or server certificates signed by a private CA for self-managed Kafka event sources by letting customers provide a root CA certificate, which allows our pollers to trust their Kafka brokers. Support for self-signed server certificates is not required for MSK event sources because all MSK brokers use public certificates signed by Amazon Trust Services CAs, which Lambda trusts by default.

    To learn more about how to use mTLS authentication for your Kafka-triggered AWS Lambda functions, please refer to our documentation on using AWS Lambda with self-managed Apache Kafka and using AWS Lambda with Amazon MSK.
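
    A boto3 sketch of creating an MSK event source mapping that presents a client certificate stored in AWS Secrets Manager (all ARNs, topic, and function names are placeholders):

        import boto3

        lam = boto3.client("lambda", region_name="us-east-1")

        lam.create_event_source_mapping(
            EventSourceArn="arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234",
            FunctionName="my-consumer-function",
            Topics=["orders"],
            StartingPosition="LATEST",
            SourceAccessConfigurations=[
                {
                    # Secret holding the client certificate and private key for mTLS.
                    "Type": "CLIENT_CERTIFICATE_TLS_AUTH",
                    "URI": "arn:aws:secretsmanager:us-east-1:123456789012:secret:kafka-client-cert-AbCdEf",
                }
            ],
        )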

    » General Availability of Syne Tune, an open-source library for distributed hyperparameter and neural architecture optimization

    Posted On: Nov 19, 2021

    Today we announce the general availability of Syne Tune, an open-source Python library for large-scale distributed hyperparameter and neural architecture optimization. It provides implementations of several state-of-the-art global optimizers, such as Bayesian optimization, Hyperband and population-based training. Additionally, it supports constrained and multi-objective optimization, and it allows users to bring their own global optimization algorithm.

    With Syne Tune, users can run hyperparameter and neural architecture tuning jobs locally on their machine or remotely on Amazon SageMaker by changing just one line of code. The former is a well-suited backend for smaller workloads and fast experimentation on local CPUs or GPUs. The latter is well-suited for larger workloads, which would otherwise come with substantial implementation overhead. Syne Tune makes it easy to use SageMaker as a backend to evaluate a large number of configurations on parallel Amazon Elastic Compute Cloud (Amazon EC2) instances to reduce wall-clock time, while taking advantage of its rich set of functionalities (e.g., pre-built Docker deep learning framework images, EC2 Spot instances, experiment tracking, virtual private networks).
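
    A minimal local-backend sketch; module paths and signatures vary across Syne Tune versions, and train.py is a hypothetical script that reports val_loss through Syne Tune's Reporter:

        from syne_tune import StoppingCriterion, Tuner
        from syne_tune.backend import LocalBackend
        from syne_tune.config_space import loguniform, randint
        from syne_tune.optimizer.baselines import ASHA

        # Search space; "epochs" is passed through to the training script.
        config_space = {"lr": loguniform(1e-5, 1e-1), "batch_size": randint(16, 128), "epochs": 10}

        tuner = Tuner(
            trial_backend=LocalBackend(entry_point="train.py"),  # swap in a SageMaker backend to scale out
            scheduler=ASHA(config_space, metric="val_loss", mode="min",
                           resource_attr="epoch", max_resource_attr="epochs"),
            stop_criterion=StoppingCriterion(max_wallclock_time=600),
            n_workers=4,
        )
        tuner.run()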

    To learn more about the library, check out our GitHub repo for documentation and examples.

    » AWS Database Migration Service now supports parallel load for partitioned data to S3

    Posted On: Nov 19, 2021

    AWS Database Migration Service (AWS DMS) has expanded functionality by adding support for parallel load of partitioned data to Amazon S3, improving load times when migrating partitioned data from supported database engine sources to Amazon S3. This feature creates Amazon S3 sub-folders for each partition of the table in the database source, allowing AWS DMS to run parallel processes to populate each sub-folder.
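
    The parallel load is driven by a table-settings rule in the task's table mappings. A hedged sketch of applying such a mapping with boto3 (schema, table, and task ARN are placeholders):

        import json

        import boto3

        # "partitions-auto" loads each source partition in parallel into
        # its own Amazon S3 sub-folder.
        table_mappings = {
            "rules": [
                {
                    "rule-type": "selection",
                    "rule-id": "1",
                    "rule-name": "include-orders",
                    "object-locator": {"schema-name": "sales", "table-name": "orders"},
                    "rule-action": "include",
                },
                {
                    "rule-type": "table-settings",
                    "rule-id": "2",
                    "rule-name": "parallel-load-orders",
                    "object-locator": {"schema-name": "sales", "table-name": "orders"},
                    "parallel-load": {"type": "partitions-auto"},
                },
            ]
        }

        dms = boto3.client("dms", region_name="us-east-1")
        dms.modify_replication_task(
            ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
            TableMappings=json.dumps(table_mappings),
        )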

    To learn more, see Using Amazon S3 as a target for AWS Database Migration Service. For AWS DMS regional availability, please refer to the AWS Region Table.

    » AWS Amplify announces the ability to export Amplify backends as CDK stacks to integrate into CDK-based pipelines

    Posted On: Nov 19, 2021

    Today, AWS Amplify announces the ability to export Amplify CLI-generated backends as a Cloud Development Kit (CDK) stack and incorporate into existing CDK deployment pipelines. The AWS Amplify CLI is a command line toolchain that helps frontend developers create app backends in the cloud. This new capability allows frontend developers to build their app backend quickly and, each time it is ready to ship, hand it over to DevOps teams to deploy to production.

    Many developers need the ability to use Amplify with existing DevOps guidelines and tools. For example, some organizations require apps to be deployed to production with an existing deployment system that enforces organizational DevOps and security guidelines. Now, frontend developers can use the Amplify CLI to iterate on their app backend quickly and, prior to each production deployment, run "amplify export" to provide an exported Amplify backend for an existing deployment system. A new “Amplify Exported Backend” CDK construct is now also available that allows DevOps engineers to incorporate Amplify backends as a deployment stage with only a few lines of code.

    Learn more about how to export Amplify backends to CDK in our blog post or in the Amplify documentation.

    » AWS IoT Core now supports Multi-Account Registration certificates on IoT Credential Provider endpoint

    Posted On: Nov 19, 2021

    You can now use Multi-Account Registration certificates on AWS IoT Core Credential Provider endpoints. Multi-Account Registration is a feature of AWS IoT Core that makes it easy for customers to register and use the same device certificate across multiple AWS accounts and endpoints. For example, a customer could register the same certificate with testing and production accounts. Customers can subsequently move devices easily between these AWS accounts by specifying the account endpoint when devices connect to AWS IoT Core. Until now, Multi-Account Registration certificates were supported only on IoT data plane and IoT Jobs endpoints. Starting today, customers can also use Multi-Account Registration certificates on IoT Credential Provider endpoints. See AWS IoT device data and service endpoints for more details. 

    AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. IoT devices can use X.509 certificates to connect to AWS IoT Core using TLS mutual authentication protocols. Other AWS services that do not support certificate-based authentication can be called using AWS credentials in AWS Signature Version 4 format. The AWS IoT Core Credential Provider allows you to use the built-in X.509 certificate as the unique device identity to authenticate any AWS request.
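
    As a sketch of the device-side flow (the role alias, certificate, and key paths are placeholders; requests is a third-party library):

        import boto3
        import requests

        iot = boto3.client("iot", region_name="us-east-1")

        # Discover the account-specific credential provider endpoint.
        endpoint = iot.describe_endpoint(endpointType="iot:CredentialProvider")["endpointAddress"]

        # Exchange the device's X.509 certificate for temporary AWS credentials.
        resp = requests.get(
            f"https://{endpoint}/role-aliases/my-role-alias/credentials",
            cert=("device-cert.pem", "device-key.pem"),
        )
        print(resp.json()["credentials"]["accessKeyId"])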

    You can visit the AWS IoT Core Multi-Account Registration and AWS IoT Core Credential Provider documentation to learn more.

    » Amazon SageMaker Model Registry now supports cross account registration of model versions

    Posted On: Nov 19, 2021

    Amazon SageMaker Model Registry, the purpose-built service that enables customers to catalog their ML models, now supports cross-account registration of model versions.

    SageMaker Model Registry catalogs a customer’s models in logical groups (model package groups) and stores incremental versions of models as model package versions. A model package version primarily stores the model artifact and metadata. In addition, Model Registry allows customers to approve, reject, or deploy a model package version using SageMaker Studio and the SDK.

    Many customers have a multi-account setup, configuring different AWS accounts to train, register, and deploy models. Typically, customers want to perform an operation from a source AWS account in a target AWS account. Previously, users in a source AWS account could approve, reject, or deploy model package versions located in a target AWS account; however, cross-account registration of new model package versions was not supported. Now, customers can create a cross-account resource policy with Describe/Create/Update/List/Delete actions and attach this policy to a model package group. Once configured, customers can perform cross-account registration and deployment of model versions. This allows customers to organize their model development and registration in separate AWS accounts.
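
    A hedged boto3 sketch of attaching such a policy to a model package group (account IDs, the group name, and the action list are illustrative; consult the documentation for the exact actions your workflow needs):

        import json

        import boto3

        sm = boto3.client("sagemaker", region_name="us-east-1")

        # Allow a source account (111122223333) to register model versions
        # into this account's (444455556666) model package group.
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "CrossAccountRegistration",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                "Action": [
                    "sagemaker:CreateModelPackage",
                    "sagemaker:DescribeModelPackageGroup",
                    "sagemaker:ListModelPackages",
                ],
                "Resource": [
                    "arn:aws:sagemaker:us-east-1:444455556666:model-package-group/my-group",
                    "arn:aws:sagemaker:us-east-1:444455556666:model-package/my-group/*",
                ],
            }],
        }
        sm.put_model_package_group_policy(
            ModelPackageGroupName="my-group",
            ResourcePolicy=json.dumps(policy),
        )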

    This feature is available in all AWS regions where Amazon SageMaker is available. To get started, create a new SageMaker Model Package Group from the Amazon SageMaker SDK or Studio and visit our documentation page on cross-account model registration.

    » AWS Service Management Connector for ServiceNow supports AWS Systems Manager Change Manager

    Posted On: Nov 18, 2021

    Starting today, customers can make change requests for AWS resources/services based on templates in ServiceNow via AWS Systems Manager Change Manager. Upon approval in ServiceNow, these change requests will execute the AWS Systems Manager Automation runbooks associated with the change template. AWS Systems Manager Change Manager simplifies the way you request, approve, implement, and report on operational changes to your application configuration and infrastructure on AWS. This integration enables customers to streamline and align the maintenance, management, and governance of AWS resources/services with their familiar IT Change Management (enablement) processes and tools.

    This release also introduces a dual sync integration between AWS Support cases and ServiceNow incidents, and includes a guided setup process for the Connector scoped app configurations in ServiceNow. The connector provides existing integration features for AWS Service Catalog, AWS Config, AWS Systems Manager OpsCenter, AWS Systems Manager Automation, and AWS Security Hub, which help simplify cloud provisioning, operations, and resource management, and provide streamlined Service Management governance and oversight over AWS services.

    The AWS Service Management Connector for ServiceNow is available at no charge in the ServiceNow Store. These new features are generally available in all AWS Regions where AWS Service Catalog, AWS Config, AWS Systems Manager and AWS Security Hub services are available. For more information, please visit the documentation on the AWS Service Management Connector. You can also learn more about AWS Service Catalog, AWS Config, AWS Systems Manager, AWS Support and AWS Security Hub.

    » Amazon EMR Studio is now available in Europe (Paris) and South America (Sao Paulo)

    Posted On: Nov 18, 2021

    EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in R, Python, Scala, and PySpark. Today, we are excited to announce that EMR Studio is now available in the Europe (Paris) and South America (Sao Paulo) regions.

    With this launch, EMR Studio is now available in 15 regions globally: US East (Ohio, N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland, Frankfurt, London, Stockholm, and Paris), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo) and South America (Sao Paulo) regions.

    You can learn more by reading our Amazon EMR Studio documentation, visiting the Amazon EMR Studio detail page, or watching the Amazon EMR Studio demos.

    » Amazon Interactive Video Service adds high resolution metrics for monitoring stream health

    Posted On: Nov 18, 2021

    With Amazon Interactive Video Service (Amazon IVS) you can now monitor the health of your live stream inputs using four new Amazon CloudWatch metrics and two new APIs. These metrics and APIs can help you diagnose and troubleshoot issues with live streams either as they happen or after the streams have ended. You can also use APIs from Amazon IVS and Amazon CloudWatch to embed data into your own dashboard or application.

    This release brings configuration data from the encoder sending the stream, metrics from the infrastructure receiving the stream, and events from the stream processing pipeline to give you visibility into the health of live stream inputs. You can use the encoder configuration to monitor how streamers have set up their broadcasting hardware or software and determine if a misconfiguration is causing issues. The new CloudWatch metrics offer insights into key information including video and audio bitrate, frame rate, and keyframe interval from the stream which you can monitor to determine if Amazon IVS is consistently receiving data from the encoder. Finally, you can track stream events and set up alerts when streams encounter events such as when video data stops being transmitted to Amazon IVS.
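
    A boto3 sketch of pulling a session's ingest details with the new stream-session APIs (the channel ARN is a placeholder; see the Amazon IVS API reference for the full response shape):

        import boto3

        ivs = boto3.client("ivs", region_name="us-west-2")

        channel_arn = "arn:aws:ivs:us-west-2:123456789012:channel/AbCdEfGh1234"

        # List recent stream sessions, then fetch one session's encoder
        # configuration and stream events.
        sessions = ivs.list_stream_sessions(channelArn=channel_arn)
        stream_id = sessions["streamSessions"][0]["streamId"]
        detail = ivs.get_stream_session(channelArn=channel_arn, streamId=stream_id)
        print(detail["streamSession"]["ingestConfiguration"])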

    Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is quick and easy to set up, and ideal for creating interactive video experiences. Send your live streams to Amazon IVS using standard streaming software like Open Broadcaster Software (OBS) and the service does everything you need to make low-latency live video available to any viewer around the world, letting you focus on building interactive experiences alongside the live video.

    The Amazon IVS console and APIs for control and creation of video stream resources are available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video.

    To learn more about Amazon Interactive Video Service:

  • Read the AWS News blog post
  • Visit the Amazon IVS product page
  • Read the Amazon IVS Documentation

    » Contact Lens for Amazon Connect is now FedRAMP Moderate compliant and has also added support for Asia Pacific (Seoul) AWS Region

    Posted On: Nov 18, 2021

    Contact Lens for Amazon Connect has now been included on the list of AWS Services in Scope for the FedRAMP Moderate baseline. The security and compliance of Contact Lens is assessed as part of multiple AWS compliance programs. Contact Lens is compliant with PCI and SOC, while also being a HIPAA eligible service. For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by Compliance Program. For general information, see AWS Compliance Programs.

    In addition to being FedRAMP Moderate compliant, Contact Lens for Amazon Connect is now also available in the Asia Pacific (Seoul) AWS Region. This launch adds to the list of regions that Contact Lens already supports: US West (Oregon), US East (Northern Virginia), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney). Contact Lens supports both post-call and real-time analytics for the Korean language. More information on supported languages can be found here.

    Contact Lens, a feature of Amazon Connect, helps businesses better understand the sentiment and trends of customer conversations to identify crucial company and product feedback. In addition, with real-time capabilities, businesses can be alerted to issues during live customer calls and deliver proactive assistance to agents while calls are in progress, helping improve customer satisfaction.

    With Contact Lens for Amazon Connect, you only pay for what you use based on the number of minutes used. There are no required up-front payments, long-term commitments, or minimum monthly fees. Please visit our website to learn more about Contact Lens for Amazon Connect.

    » Amazon Rekognition Custom Labels now offers an enhanced experience to train computer vision models more easily

    Posted On: Nov 18, 2021

    Amazon Rekognition Custom Labels is an automated machine learning (AutoML) service that allows you to build custom computer vision models to detect objects and scenes specific to your business needs, without requiring in-depth machine learning expertise. Starting today, we have updated the Amazon Rekognition Custom Labels console to introduce step-by-step directions on how to manage, train, and evaluate your custom models. This revamped guided experience makes it even easier for you to train your own computer vision models in four simple steps with just a few clicks.

    Customers can manage their custom models with projects, which are the set of resources needed to build and train a model. Datasets are collections of labeled images that are used to train a model. Previously, datasets were not directly associated with projects. With this update, datasets are now associated with projects, making it even easier for customers to manage their custom-trained models.

    Customers who have previously trained models will see no impact. Amazon Rekognition will automatically associate the dataset used to train the most recent model with the project the model belongs to. Previous datasets that were never used, or that were used to train an older version of the model, can still be used by associating them with a new project.

    In addition, we have introduced seven new APIs to make it even easier for you to build and train computer vision models programmatically. With these new APIs, you can: (a) create, copy, or delete datasets, (b) list the contents and get details of the datasets, (c) modify datasets and auto-split them to create a test dataset. To learn more about these new APIs, please visit this section of our documentation guide.
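
    For example, a training dataset can now be created programmatically from a manifest in S3; a boto3 sketch with placeholder ARNs and paths:

        import boto3

        rek = boto3.client("rekognition", region_name="us-east-1")

        # Create a training dataset for a Custom Labels project from a
        # SageMaker Ground Truth manifest stored in S3.
        rek.create_dataset(
            ProjectArn="arn:aws:rekognition:us-east-1:123456789012:project/demo/1234567890123",
            DatasetType="TRAIN",
            DatasetSource={
                "GroundTruthManifest": {
                    "S3Object": {"Bucket": "my-bucket", "Name": "manifests/train.manifest"}
                }
            },
        )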

    This updated experience is available in all Amazon Rekognition Custom Labels regions. To learn more about Amazon Rekognition Custom Labels and region availability, please visit our documentation and region table. Get started with Amazon Rekognition Custom Labels console today.

    » Amazon Redshift simplifies the use of other AWS services by introducing the default IAM role

    Posted On: Nov 18, 2021

    Amazon Redshift now simplifies the use of other services such as Amazon S3, Amazon SageMaker, AWS Lambda, Amazon Aurora, and AWS Glue by allowing customers to create an IAM role from the Redshift console and assign it as the default IAM role when creating an Amazon Redshift cluster. The default IAM role helps simplify SQL operations such as COPY, UNLOAD, CREATE EXTERNAL FUNCTION, CREATE EXTERNAL TABLE, CREATE EXTERNAL SCHEMA, CREATE MODEL, and CREATE LIBRARY that access other AWS services by eliminating the need to specify the Amazon Resource Name (ARN) for the IAM role.

    Amazon Redshift also provides a new managed IAM policy, AmazonRedshiftAllCommandsFullAccess, that has the privileges required to use other related services such as S3, SageMaker, Lambda, Aurora, and Glue. This policy is used when creating the default IAM role with the Amazon Redshift console. End users can use the default IAM role with the COPY, UNLOAD, CREATE EXTERNAL FUNCTION, CREATE EXTERNAL TABLE, CREATE EXTERNAL SCHEMA, CREATE MODEL, and CREATE LIBRARY commands by specifying IAM_ROLE with the DEFAULT keyword, without having to specify the ARN for the IAM role.
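
    For example, a COPY issued through the Redshift Data API can reference the default role with the DEFAULT keyword instead of an ARN (cluster, database, and table identifiers are placeholders):

        import boto3

        rsdata = boto3.client("redshift-data", region_name="us-east-1")

        # COPY from S3 using the cluster's default IAM role; no ARN required.
        rsdata.execute_statement(
            ClusterIdentifier="demo-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql="COPY sales FROM 's3://my-bucket/sales/' IAM_ROLE DEFAULT FORMAT AS PARQUET;",
        )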

    This feature is now available in all AWS commercial regions except eu-south-1, af-south-1, and ap-northeast-3. You can find more information about the IAM role from the Redshift cluster management guide.

    » AWS Glue DataBrew announces native console integration with Amazon AppFlow

    Posted On: Nov 18, 2021

    AWS Glue DataBrew now has native console integration with Amazon AppFlow, allowing users to connect to data from Salesforce, Zendesk, Slack, ServiceNow, and other Software-as-a-Service (SaaS) applications, as well as AWS services like Amazon S3 and Amazon Redshift. When creating a new dataset in DataBrew, you can now create a flow via Amazon AppFlow that loads data (by schedule, event, or on-demand) into Amazon S3. Once the flow has been established to Amazon S3, you can easily clean, normalize, and transform this data in DataBrew and join it with datasets from other data stores or SaaS applications. DataBrew also provides information about when your flow was last refreshed and allows you to trigger flows directly from the DataBrew console. Learn more about supported AppFlow sources and destinations here.

    AWS Glue DataBrew is a visual data preparation tool that makes it easy to clean and normalize data using 250 pre-built transformations for data preparation, all without the need to write any code. Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications and AWS services.

    This feature is available in every region where DataBrew and Amazon AppFlow are both available. To get started, visit the AWS Management Console or install the DataBrew plugin in your Notebook environment and refer to the DataBrew documentation.

    » The dashboard feature is now generally available in AWS Audit Manager

    Posted On: Nov 18, 2021

    AWS Audit Manager now offers a dashboard to simplify your audit preparations with at-a-glance views of your evidence collection status per control. You can instantly track the progress of your audit assessments relative to common control domains. These control domains are general categories of controls that are not specific to any one framework, allowing customers to quickly assess status on common themes (e.g., track overall issues in the Identity and Compliance control domain).

    The dashboard highlights all active assessment controls that have failed evidence, and groups them by control domain. This enables you to focus remediation efforts with specific subject matter experts as you prepare for your audits. You can view evidence metrics either at the account level across all audits or at the individual audit assessment level. Moreover, the dashboard lets you not only visualize but also download details of controls with failed evidence in a comma-delimited format. You can spot potential issues faster and remediate them by bringing related stakeholders together, reducing the time and cost you spend auditing your AWS resources.
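
    The same insights are exposed through the API; assuming the GetInsights operations that accompany the dashboard (see the Audit Manager API reference), a boto3 sketch with a placeholder assessment ID:

        import boto3

        am = boto3.client("auditmanager", region_name="us-east-1")

        # Account-level evidence insights across all active assessments.
        print(am.get_insights()["insights"])

        # Insights scoped to a single assessment.
        print(am.get_insights_by_assessment(
            assessmentId="11111111-2222-3333-4444-555555555555"
        ))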

    The Audit Manager dashboard feature is available in all regions where AWS Audit Manager is available, specifically, US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland) and Europe (London).

    Learn more about the dashboard feature in our feature page and refer to our documentation. Get started today by visiting the AWS Audit Manager Console, AWS Command Line Interface, or APIs.

    » AWS Glue DataBrew now allows customers to create data quality rules to define and validate their business requirements

    Posted On: Nov 18, 2021

    AWS Glue DataBrew users can now create data quality rules, which are customizable validation checks that define business requirements for specific data. You can create rules to check for duplicate values in certain columns, validate that one column does not match another, or define many more custom checks and conditions based on your specific data quality use cases. You can group rules for a given dataset into a ruleset for efficiency and apply these checks as part of a standard data profile job. Results are populated in a data quality dashboard and validation report, helping you to quickly view rule outcomes and determine whether your data is fit for use.
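
    A hedged boto3 sketch of creating a one-rule ruleset (the dataset ARN, names, and check expression are illustrative; the expression grammar is documented in the DataBrew guide):

        import boto3

        databrew = boto3.client("databrew", region_name="us-east-1")

        # Ruleset with one check: values in the "order_total" column must be positive.
        databrew.create_ruleset(
            Name="orders-quality-rules",
            TargetArn="arn:aws:databrew:us-east-1:123456789012:dataset/orders",
            Rules=[{
                "Name": "order-total-positive",
                "CheckExpression": ":col1 > :val1",
                "SubstitutionMap": {":col1": "`order_total`", ":val1": "0"},
            }],
        )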

    AWS Glue DataBrew is a visual data preparation tool that makes it easy to clean and normalize data using over 250 pre-built transformations, all without the need to write any code. You can automate filtering anomalies, converting data to standard formats, correcting invalid values, and other tasks.

    To get started with DataBrew, visit the AWS Management Console or install the DataBrew plugin in your Notebook environment. To learn more, view this getting started video and refer to the DataBrew documentation.

    » Amazon Aurora supports MySQL 8.0

    Posted On: Nov 18, 2021

    Amazon Aurora MySQL-Compatible Edition now supports MySQL major version 8.0. MySQL 8.0 includes performance enhancements such as instant DDL to speed up the overall process of creating and loading a table and its associated indexes, and SKIP LOCKED and NOWAIT options to avoid waiting for other transactions to release row locks. MySQL 8.0 adds developer productivity features such as window functions to more easily solve query problems, and common table expressions to enable use of named temporary result sets. It also includes JSON functionality additions, new security capabilities, and more. MySQL 8.0 on Aurora MySQL-Compatible Edition supports popular Aurora features including Global Database, RDS Proxy, Performance Insights, and Parallel Query.

    To use the new version, create a new Aurora MySQL database instance with just a few clicks in the Amazon RDS Management Console. Please review the Aurora documentation to learn more. 
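
    A boto3 sketch of creating an Aurora MySQL 8.0-compatible cluster and its writer instance; the EngineVersion string is illustrative, so list the available versions with describe_db_engine_versions(Engine="aurora-mysql") first:

        import boto3

        rds = boto3.client("rds", region_name="us-east-1")

        # Aurora MySQL 3.x is the MySQL 8.0-compatible release train.
        rds.create_db_cluster(
            DBClusterIdentifier="demo-aurora-mysql8",
            Engine="aurora-mysql",
            EngineVersion="8.0.mysql_aurora.3.01.0",
            MasterUsername="admin",
            MasterUserPassword="ChangeMe12345!",  # use Secrets Manager in practice
        )
        rds.create_db_instance(
            DBInstanceIdentifier="demo-aurora-mysql8-writer",
            DBClusterIdentifier="demo-aurora-mysql8",
            DBInstanceClass="db.r5.large",
            Engine="aurora-mysql",
        )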

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

    To learn more about Aurora MySQL 8.0:

  • Check out this blog
  • Read the Aurora documentation
  • Launch an Aurora MySQL 8.0 cluster via the Amazon RDS Management Console 
  • Refer to the MySQL 8.0 documentation 
  • Visit the Aurora website and take a look at our getting started page

    » AWS Glue DataBrew now supports custom SQL statements to retrieve data from Amazon Redshift and Snowflake

    Posted On: Nov 18, 2021

    AWS Glue DataBrew customers are now able to create datasets by writing Structured Query Language (SQL) statements to retrieve data from Amazon Redshift and Snowflake using Java Database Connectivity (JDBC) connections. You can use a purpose-built query to select the data you want and limit the data returned from large tables before cleaning, normalizing, and transforming that data with DataBrew. For a list of supported input formats, please see the AWS Glue DataBrew input formats list.
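
    A hedged boto3 sketch of creating a dataset from a custom SQL query over a Glue (JDBC) connection to Amazon Redshift; the connection name, query, and temp location are placeholders:

        import boto3

        databrew = boto3.client("databrew", region_name="us-east-1")

        databrew.create_dataset(
            Name="recent-orders",
            Input={
                "DatabaseInputDefinition": {
                    "GlueConnectionName": "redshift-connection",
                    "QueryString": "SELECT order_id, order_total FROM sales.orders "
                                   "WHERE order_date > '2021-01-01'",
                    "TempDirectory": {"Bucket": "my-temp-bucket", "Key": "databrew/"},
                }
            },
        )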

    AWS Glue DataBrew is a visual data preparation tool that makes it easy to clean and normalize data using over 250 pre-built transformations, all without the need to write any code. You can automate filtering anomalies, converting data to standard formats, correcting invalid values, and other tasks.

    To get started with DataBrew, visit the AWS Management Console or install the DataBrew plugin in your Notebook environment. To learn more, view this getting started video and refer to the DataBrew documentation.

    » Amazon S3 on Outposts now delivers strong consistency automatically for all applications

    Posted On: Nov 18, 2021

    Amazon S3 on Outposts now delivers strong read-after-write and list-after-write consistency for any storage request at no additional cost. 

    Amazon S3 on Outposts helps you meet your low latency, local data processing, and data residency needs by storing data on AWS Outposts. Using the S3 APIs and features, you can store, secure, tag, retrieve, report on, and control access to data stored locally in S3 on Outposts buckets. Any request to S3 on Outposts storage is now strongly consistent. After a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest object. S3 also provides strong consistency for list operations, so after a write, you can immediately perform a listing of the objects in a bucket with all changes reflected. 

    Analytics applications often require access to an S3 object immediately after a write. Without strong consistency, you would need to insert custom code into these applications, or provision databases to keep objects consistent with any changes in S3 across all objects. With this launch, S3 on Outposts object APIs are strongly consistent, and applications that require access to an object immediately after a write are able to do so directly, with no custom code required.
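
    A small boto3 sketch of the read-after-write behavior; with S3 on Outposts, the Bucket parameter is the access point ARN (a placeholder below):

        import boto3

        s3 = boto3.client("s3", region_name="us-east-1")

        ap = ("arn:aws:s3-outposts:us-east-1:123456789012:outpost/"
              "op-01234567890123456/accesspoint/my-access-point")

        s3.put_object(Bucket=ap, Key="report.csv", Body=b"id,total\n1,42\n")
        latest = s3.get_object(Bucket=ap, Key="report.csv")  # strongly consistent read
        print(latest["Body"].read())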

    All Amazon S3 on Outposts customers automatically receive strong consistency starting today, in all AWS Regions where AWS Outposts is available. To learn more about strong consistency for Amazon S3, read the blog and visit our documentation.

    » AWS Identity and Access Management now makes it more efficient to troubleshoot access denied errors in AWS

    Posted On: Nov 18, 2021

    To help you quickly troubleshoot your permissions in Amazon Web Services (AWS), AWS Identity and Access Management (IAM) now includes the policy type that’s responsible for the denied permissions in access denied error messages. Amazon SageMaker, AWS CodeCommit, and AWS Secrets Manager are among the first AWS services to offer this additional context, with other services following in the next few months. When you troubleshoot access-related challenges, the identified policy type in the access denied error message helps you quickly identify the root cause and unblock your developers by updating the relevant policies.

    For example, when a developer attempting the DescribeDomain action in Amazon SageMaker is denied access, the error message can enable her to understand that access is denied due to a service control policy (SCP), which is managed by the central security team. She can create a trouble ticket with her central security team, providing the access denied error message and highlighting the policy type that is responsible for the denied access. The security administrator can then focus their troubleshooting efforts on SCPs related to SageMaker, saving time and effort on troubleshooting access-related challenges.
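
    An access denied message with the new context looks similar to the following (identifiers are illustrative):

        User: arn:aws:iam::123456789012:user/dev-user is not authorized to perform:
        sagemaker:DescribeDomain on resource: arn:aws:sagemaker:us-east-1:123456789012:domain/d-example
        with an explicit deny in a service control policy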

    To learn more, see IAM troubleshooting documentation.

    » Amazon Polly Launches a new French Neural Text-to-Speech voice

    Posted On: Nov 18, 2021

    Amazon Polly is a service that turns text into lifelike speech. Today, we are excited to announce the general availability of the Neural Text-to-Speech (NTTS) version of Léa, a French Polly voice. Now, Amazon Polly customers can enjoy Léa either as an NTTS or a Standard voice. With this launch, we now offer 23 NTTS voices across 13 languages.

    To get started, log into the Amazon Polly console and give Léa a try. For more details, please visit the Amazon Polly documentation and review our full list of text-to-speech voices. For more information, go to our Neural TTS pricing, regional availability, service limits, and FAQs.
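
    A minimal boto3 sketch of synthesizing French speech with the neural Léa voice (the output path is arbitrary):

        import boto3

        polly = boto3.client("polly", region_name="us-east-1")

        response = polly.synthesize_speech(
            Engine="neural",
            LanguageCode="fr-FR",
            VoiceId="Lea",
            OutputFormat="mp3",
            Text="Bonjour, je m'appelle Léa.",
        )
        with open("lea.mp3", "wb") as f:
            f.write(response["AudioStream"].read())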

    » AWS Service Management Connector makes installation easier through ServiceNow Guided Setup

    Posted On: Nov 18, 2021

    Starting today, customers can install the AWS Service Management Connector via a guided setup in ServiceNow. This guided setup simplifies the ServiceNow scoped app configuration tasks, minimizing the expertise needed to establish the connection between AWS and ServiceNow. ServiceNow administrators, or power users with permissions to the Connector scoped app, simply follow the guided steps and mark each task complete or skipped where applicable. The AWS Service Management Connector documentation also includes an AWS CloudFormation baseline permissions template that sets up the AWS environment. Together, the ServiceNow Guided Setup and AWS baseline permissions let customers focus on developing guardrails and detective controls via integrated AWS services and validating the connection between AWS and ServiceNow.

    This release also introduces a dual sync integration between AWS Support cases and ServiceNow incidents. ServiceNow change requests can also now be derived from AWS Systems Manager Change Manager templates, including executing curated automation playbooks upon approval. The connector provides existing integration features for AWS Service Catalog, AWS Config, AWS Systems Manager OpsCenter, AWS Systems Manager Automation, and AWS Security Hub, which simplify cloud provisioning, operations, and resource management, and provide streamlined Service Management governance and oversight over AWS services.

    The AWS Service Management Connector for ServiceNow is available at no charge in the ServiceNow Store. These new features are generally available in all AWS Regions where AWS Service Catalog, AWS Config, AWS Systems Manager and AWS Security Hub services are available. For more information, please visit the documentation on the AWS Service Management Connector. You can also learn more about AWS Service Catalog, AWS Config, AWS Systems Manager, AWS Support and AWS Security Hub.

    » Amazon SNS now supports publishing batches of up to 10 messages in a single API request

    Posted On: Nov 18, 2021

    Amazon Simple Notification Service (Amazon SNS) now supports message batching for the publish action, which lets you publish up to 10 messages in a single batch request to either Standard Topics or FIFO Topics. Batching messages into a single API request is intended for those who want to reduce the costs associated with connecting decoupled applications with Amazon SNS. Previously, Amazon SNS required an individual API request for every published message.

    Amazon SNS is a fully managed, reliable, and highly available messaging service that enables you to connect decoupled microservices or send messages directly to users via SMS, mobile push, and email. Amazon SNS offers flexible pricing, and you are only charged for what you use. Publish batch API requests cost the same as individual publish API requests, which means you can reduce your Amazon SNS API costs by up to a factor of 10 by batching the maximum of 10 messages per request.

    Amazon SNS message batching is available in all public AWS Regions and AWS GovCloud (US).

    Start batching messages with Amazon SNS in minutes using the AWS Software Development Kit (SDK) or AWS Command Line Interface (CLI).
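
    A minimal boto3 sketch (the topic ARN is a placeholder; FIFO topics additionally require a MessageGroupId per entry):

        import boto3

        sns = boto3.client("sns", region_name="us-east-1")

        # Publish up to 10 messages in one request; each entry needs a unique Id.
        sns.publish_batch(
            TopicArn="arn:aws:sns:us-east-1:123456789012:demo-topic",
            PublishBatchRequestEntries=[
                {"Id": f"msg-{i}", "Message": f"Hello #{i}"} for i in range(10)
            ],
        )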

    To learn more about how to publish messages in batches with Amazon SNS, see the following:

  • Publishing messages in batch to Amazon SNS topics blog post
  • Amazon SNS Batch Actions in the Developer Guide
  • Amazon SNS PublishBatch in the API Reference
  • Amazon SNS quotas 

    » Bottlerocket is now available in AWS GovCloud (US) Regions

    Posted On: Nov 18, 2021

    Bottlerocket, a Linux-based operating system designed to run container workloads, is now available in AWS GovCloud (US) Regions.

    Bottlerocket focuses on improving the security posture of container hosts. It reduces exposure to vulnerabilities on hosts by including only the essential software required to run containers. Updates to Bottlerocket hosts are atomic. This brings consistency and further simplifies updates for container hosts. 

    Bottlerocket is an open source distribution, available at no additional cost, and is fully supported by AWS. Please refer to our guides for EKS and ECS to get started using Bottlerocket in the AWS GovCloud (US) Regions.

    » Amazon Cognito launches new console experience for user pools

    Posted On: Nov 18, 2021

    Amazon Cognito now offers a new console experience that makes it even easier for customers to manage Amazon Cognito user pools and add sign-in and sign-up functionality to their applications. Customers that wish to opt in to the new and streamlined experience can do so by navigating to the Amazon Cognito console.

    Amazon Cognito makes it easier to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

    The new console provides a streamlined experience based on customer feedback and intuitively follows the steps that developers take when enabling sign-up and sign-in for their applications. When creating new user pools, customers can confidently complete related tasks, which are now grouped together and easy to discover. When viewing existing user pools, customers can more easily manage both user pools and individual users. Direct access to contextual help and documentation is now consistently and readily available.

    The new console is available in all AWS Regions globally where Cognito User Pools are available. To learn more about the new console, check out a short video preview or see Using the Amazon Cognito console. To get started, visit the Amazon Cognito console. Tell us about your experience by clicking Feedback in the bottom-left corner of the console.

    » Amazon Monitron launches Web App

    Posted On: Nov 18, 2021

    Today, we are announcing the launch of the Amazon Monitron Web App. The Web App joins the existing Amazon Monitron Android App and iOS App, giving customers more options for using Amazon Monitron. Customers can now use the Amazon Monitron Web App from their desktops, laptops, or tablets to monitor equipment and receive reports on operating behavior and alerts to potential failures in that equipment. They can access the Web App in a browser by clicking the Amazon Monitron project link found on the Amazon Monitron console. To commission the sensors and gateways, users will still need the Amazon Monitron Android App or iOS App, since the commissioning process requires their phone’s Near Field Communication (NFC) and Bluetooth (BT) capabilities.

    Amazon Monitron is an end-to-end system that uses machine learning (ML) to detect abnormal conditions in industrial equipment, enabling customers to implement predictive maintenance and reduce unplanned downtime. It includes sensors to capture vibration and temperature data from equipment, a gateway device to securely transfer data to AWS, the Amazon Monitron service that analyzes the data for abnormal equipment conditions using machine learning, and a companion app for setup, analytics and alert notifications.

    Amazon Monitron helps monitor and detect potential failures in a broad range of rotating equipment such as motors, gearboxes, pumps, fans, bearings, and compressors. Amazon Monitron Sensors and Gateways are available to purchase separately or bundled in starter packs on Amazon.com or with your Amazon Business account, in US, UK, Germany, Spain, France, Italy, and Canada. The Amazon Monitron service is available in the US East (N. Virginia) and Europe (Ireland) regions and you can download the Amazon Monitron App from the Google Play Store and the Apple App Store. You can access the Amazon Monitron Web App in a browser by clicking on the Amazon Monitron project link that can be found on the Amazon Monitron console.

    » AWS announces the launch of AWS AppConfig Feature Flags in preview

    Posted On: Nov 18, 2021

    Today, we are announcing the launch of AWS AppConfig Feature Flags, which will enable you to move faster and more safely while releasing new features to your customers. Feature flags allow you to release features to your applications independent of code deployments. Development teams often coordinate application feature releases with a large-scale marketing event and are required to release features gradually to users. Similarly, DevOps teams often respond to operational events by enabling existing functionality in their application. This launch enables developers and DevOps teams to use AWS AppConfig to create and validate feature flag configuration data and deploy single or multiple feature flags to their applications in a monitored and controlled way. AWS AppConfig, a feature of AWS Systems Manager, is used as a best practice by thousands of teams within Amazon to deploy feature flags and application configuration changes to applications at run time.

    With AppConfig Feature Flags, you can either deploy a single flag at a time or centrally manage and deploy multiple flags together to your application. You can not only set a boolean value for a flag, but also define flag attribute values to store granular configuration within the flag rather than embedding it in application code. AppConfig Feature Flags also enable you to define constraints that validate your flag attribute values and keep them free of errors. This validation helps ensure that unexpected values are not deployed to your application, mitigating the risk of application outages. Once the flag data is validated, you can roll out the flag to your application either instantly or gradually. You can also use Amazon CloudWatch to set up monitors that watch for errors during deployments, and AWS AppConfig can be configured to roll back the deployment if any errors are detected.
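
    Below is a minimal sketch of how an application might poll its flag data at run time using the AWS AppConfig data plane API with the AWS SDK for Python (Boto3); the application, environment, and profile identifiers, as well as the flag document shape, are illustrative placeholders.

    import json
    import boto3

    appconfig = boto3.client("appconfigdata")

    # Start a configuration session for the flag profile (identifiers are placeholders).
    session = appconfig.start_configuration_session(
        ApplicationIdentifier="my-app",
        EnvironmentIdentifier="prod",
        ConfigurationProfileIdentifier="my-feature-flags",
    )

    result = appconfig.get_latest_configuration(
        ConfigurationToken=session["InitialConfigurationToken"]
    )

    # The payload is empty if nothing changed since the last poll;
    # otherwise it contains the deployed flag document.
    payload = result["Configuration"].read()
    if payload:
        flags = json.loads(payload)  # e.g. {"checkout-redesign": {"enabled": True}}
        print(flags)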

    This feature is available at no extra charge. Customers only pay for AWS AppConfig usage and the resources they use.

    This feature is available in all AWS regions where AWS Systems Manager is offered. To learn more, see our documentation.

    » Amazon Pinpoint now supports Safari push notifications

    Posted On: Nov 18, 2021

    You can now use Amazon Pinpoint to send push notifications to your website users on their Mac desktop using Apple Push Notification service. Safari push notifications display your website icon and notification text that users can click to go to your website. This allows you to reach your end users right on their desktop to inform them of new product launches, engage them in upcoming promotions, and share events as they unfold.

    Amazon Pinpoint supports the following push notification channels: Amazon Device Messaging (ADM), Apple Push Notification service (APNs), Baidu Cloud Push, and Firebase Cloud Messaging (FCM). To learn more about Amazon Pinpoint push notifications, see Sending Safari Push Notifications in the Amazon Pinpoint User Guide.
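
    For illustration, here is a hedged Boto3 sketch of sending a notification over the APNs channel; the Pinpoint project ID and device token are placeholders, and only a subset of APNSMessage fields is shown.

    import boto3

    pinpoint = boto3.client("pinpoint")

    response = pinpoint.send_messages(
        ApplicationId="1234567890abcdef1234567890abcdef",  # placeholder project ID
        MessageRequest={
            "Addresses": {
                "example-device-token": {"ChannelType": "APNS"}  # placeholder token
            },
            "MessageConfiguration": {
                "APNSMessage": {
                    "Title": "Flash sale",
                    "Body": "Our holiday promotion starts now.",
                    "Action": "URL",
                    "Url": "https://example.com/sale",
                }
            },
        },
    )
    print(response["MessageResponse"]["Result"])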

    » Amazon Rekognition reduces pricing of all Image APIs by up to 38%

    Posted On: Nov 18, 2021

    Starting November 9, 2021, pricing for the Amazon Rekognition Image APIs has been reduced by up to 38% in all 14 supported Regions. The price reduction will be automatically reflected in customer bills starting from November 2021.

    Previously, all Amazon Rekognition Image APIs had the following pricing tiers billed by monthly usage, aggregated across all APIs:

  • AWS Free Tier of 5 thousand images per month, within the first 12 months of usage
  • Tier 1 of up to 1 million images at $0.001 per image
  • Tier 2 from 1 - 10 million images at $0.0008 per image
  • Tier 3 from 10 - 100 million images at $0.0006 per image
  • Tier 4 above 100 million images at $0.0004 per image

    With the price reduction, Amazon Rekognition Image APIs are categorized into Group 1 and Group 2 APIs, with monthly usage aggregated separately across the two groups. Further, the pricing tiers and rates for Tier 2, Tier 3, and Tier 4 have been adjusted. The key changes are summarized below:

  • Group 1 APIs include CompareFaces, IndexFaces, SearchFacesByImage, and SearchFaces
  • Group 2 APIs include DetectFaces, DetectModerationLabels, DetectLabels, DetectText, RecognizeCelebrities, and DetectProtectiveEquipment
  • AWS Free Tier of 5 thousand images per month, within first 12 months of usage
  • Tier 1 of up to 1 million images per month at $0.001 per image
  • Tier 2 is now from 1 - 5 million images per month at $0.0008 per image
  • Tier 3 is now from 5 - 35 million images per month at $0.0006 per image
  • Tier 4 is now for monthly usage above 35 million images per month at $0.0004 per image for Group 1 APIs and $0.00025 per image for Group 2 APIs

    Customer savings vary based on monthly usage. For example, at a usage level of 60 million images per month (at Group 2 API rates), the effective monthly price is $28,450 with the new pricing versus $38,200 with the old pricing, resulting in customer savings of 25.5%. At 120 million images per month, the effective monthly price is $43,450 with the new pricing versus $70,200 with the old pricing, resulting in customer savings of 38.1%. Referenced prices and savings are based on US East (N. Virginia) Region pricing.
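
    The tiered arithmetic behind these examples can be checked with a few lines of Python; the tier boundaries and per-image rates below are taken from the lists above, using the Group 2 rate for the new Tier 4.

    # Tiers are (upper bound in images, price per image).
    NEW_GROUP2_TIERS = [
        (1_000_000, 0.001),
        (5_000_000, 0.0008),
        (35_000_000, 0.0006),
        (float("inf"), 0.00025),
    ]
    OLD_TIERS = [
        (1_000_000, 0.001),
        (10_000_000, 0.0008),
        (100_000_000, 0.0006),
        (float("inf"), 0.0004),
    ]

    def monthly_cost(images, tiers):
        cost, floor = 0.0, 0
        for ceiling, price in tiers:
            cost += max(0, min(images, ceiling) - floor) * price
            floor = ceiling
        return cost

    for volume in (60_000_000, 120_000_000):
        new = monthly_cost(volume, NEW_GROUP2_TIERS)
        old = monthly_cost(volume, OLD_TIERS)
        print(f"{volume:,}: ${new:,.0f} new vs ${old:,.0f} old ({1 - new / old:.1%} savings)")
    # 60,000,000: $28,450 new vs $38,200 old (25.5% savings)
    # 120,000,000: $43,450 new vs $70,200 old (38.1% savings)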

    To learn more, please visit the Amazon Rekognition pricing page and get started in the Rekognition console.

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now offers M6g instances for the Asia Pacific (Mumbai) and US West (N. California) Regions

    Posted On: Nov 18, 2021

    Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now offers the AWS Graviton2 general-purpose M6g instance family in the Asia Pacific (Mumbai) and US West (N. California) Regions. Customers can enjoy up to 38% improvement in indexing throughput, 50% reduction in indexing latency, and 30% improvement in query performance when compared to the corresponding x86-based instances from the current-generation M5 family.

    Amazon EC2 M6g instances and their disk variants are powered by AWS Graviton2 processors that are built utilizing 64-bit Arm Neoverse cores and custom silicon designed by AWS. AWS Graviton2 processors deliver a major leap in performance and capabilities. AWS Graviton2 instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. Amazon OpenSearch Service Graviton2 instances come in sizes large through 12xlarge and offer compute, memory, and storage flexibility.

    Amazon OpenSearch Service Graviton2 instances support Elasticsearch versions 7.9 and 7.10 and OpenSearch 1.0. The instances also include support for all recently launched features such as encryption at rest and in flight, role-based access control, cross-cluster search, Auto-Tune, Trace Analytics, Kibana Reporting, and UltraWarm.

    Amazon OpenSearch Service Graviton2 instances provide up to 44% price/performance improvement over previous generation instances. Further savings are available via Reserved Instance (RI) pricing for these instances.

    M6g is available for Amazon OpenSearch Service across 20 regions globally. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability. To learn more about Amazon OpenSearch Service, please visit the product page.

    » AWS Application Migration Service is now available in the Africa (Cape Town), Europe (Milan), Europe (Paris), and Middle East (Bahrain) Regions

    Posted On: Nov 18, 2021

    AWS Application Migration Service (AWS MGN) is now available in four additional AWS Regions: Africa (Cape Town), Europe (Milan), Europe (Paris), and Middle East (Bahrain).

    AWS Application Migration Service is the primary service recommended for lift-and-shift migrations to AWS. The service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, and cloud infrastructure to run natively on AWS. You can use the same automated process to migrate a wide range of applications to AWS without making changes to your applications, their architecture, or the migrated servers.

    By using AWS Application Migration Service, you can more quickly realize the benefits of the AWS Cloud—and leverage additional AWS services to further modernize your applications.

    With this launch, AWS Application Migration Service is now available in 21 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo). Access the AWS Regional Services List for the most up-to-date availability information.

    For more information about AWS Application Migration Service, visit the product page or get started for free in the AWS Console.

    » AWS Control Tower now supports nested organizational units

    Posted On: Nov 18, 2021

    We are excited to announce support for AWS Organizations nested organizational units (OUs) in AWS Control Tower. An organization is an entity that you create to consolidate a collection of AWS accounts so that you can administer them as a single unit. Within each organization, you can create organizational units, which help you manage and govern groups of accounts. Nested OUs provide further customization between groups of accounts within OUs, giving you more flexibility when applying policies for different workloads or applications. For example, you can separate production workloads from non-production workloads within an OU. With support for nested OUs, you can now easily organize accounts in your Control Tower environment in a hierarchical, tree-like structure that best reflects your business needs.

    Control Tower provides guardrails that can be attached to your OUs to simplify governance. With nested OUs, you can attach guardrails to OUs instead of directly to each account. This becomes an important scaling mechanism as you add accounts to your Control Tower environment, because policies applied at the OU level automatically apply to the accounts within the OU. In the Control Tower console, the governance status of each OU reflects the status of the OUs nested beneath it in the hierarchy.

    AWS Control Tower offers the easiest way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. Customers can create new accounts using AWS Control Tower’s account factory and enable governance features such as guardrails, centralized logging, and monitoring in supported AWS Regions. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.

    » AWS Glue DataBrew now provides detection and data masking transformations for Personally Identifiable Information (PII)

    Posted On: Nov 18, 2021

    AWS Glue DataBrew now enables customers to mask Personally Identifiable Information (PII) during data preparation. With just a few clicks, you can detect PII data as part of a data profiling job and gather statistics such as the number of columns that may contain PII and their potential categories, then use built-in data masking transformations including substitution, hashing, encryption, decryption, and more, all without writing any code. You can then use the cleaned and masked datasets downstream for analytics, reporting, and machine learning tasks.
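
    A hedged Boto3 sketch of creating a recipe with a hashing step follows; the recipe name, column, and secret ARN are placeholders, and the operation and parameter names are assumptions to verify against the DataBrew recipe action reference.

    import boto3

    databrew = boto3.client("databrew")

    databrew.create_recipe(
        Name="mask-customer-pii",  # placeholder recipe name
        Steps=[
            {
                "Action": {
                    "Operation": "CRYPTOGRAPHIC_HASH",  # assumed operation name
                    "Parameters": {
                        "sourceColumn": "email",  # assumed parameter name; placeholder column
                        "secretId": "arn:aws:secretsmanager:us-east-1:123456789012:secret:databrew-key",  # placeholder
                    },
                }
            }
        ],
    )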

    AWS Glue DataBrew is a visual data preparation tool that makes it easy to clean and normalize data using over 250 pre-built transformations, all without the need to write any code. You can automate filtering anomalies, converting data to standard formats, correcting invalid values, and other tasks.

    To get started with DataBrew, visit the AWS Management Console or install the DataBrew plugin in your Notebook environment. To learn more, view this getting started video and refer to the DataBrew documentation.

    » AWS Application Migration Service now supports agentless replication

    Posted On: Nov 17, 2021

    AWS Application Migration Service (AWS MGN) now supports agentless replication from VMware vCenter versions 6.7 and 7.0 to the AWS Cloud. AWS Application Migration Service is the primary service for lift-and-shift migrations to AWS.

    The new agentless replication feature is intended for users who want to rehost their applications to AWS but cannot install the AWS Replication Agent on individual servers due to company policies or technical restrictions. You can perform agentless snapshot replication from your vCenter source environment to AWS by installing the AWS MGN vCenter Client in your vCenter environment.

    AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, and cloud infrastructure to run natively on AWS. You can use the same automated process to migrate a wide range of applications to AWS without making changes to applications, their architecture, or the migrated servers. By using AWS Application Migration Service, you can more quickly realize the benefits of the AWS Cloud — and leverage additional AWS services to further modernize your applications.

    When possible, we recommend using AWS Application Migration Service’s agent-based replication option as it enables continuous replication and shortens cutover windows.

    To learn more about the agentless replication feature, visit the AWS Application Migration Service documentation.

    » Amazon CloudWatch Container Insights adds console support for visualizing workload issues and problems via Amazon CloudWatch Application Insights problems

    Posted On: Nov 17, 2021

    You can now easily set up workload-specific monitoring and view the health of these workloads via Amazon CloudWatch Application Insights problems directly from the Amazon CloudWatch Container Insights console, making it easier to dive deep into issues, troubleshoot problems, and reduce mean time to resolution.

    Amazon CloudWatch Container Insights enables customers to easily collect container metrics and analyze them along with other metrics in Amazon CloudWatch. Amazon CloudWatch Application Insights provides simple setup of observability for your enterprise applications and underlying AWS Resources. Amazon CloudWatch Application Insights provides automated dashboards that show potential problems with monitored applications, which help you quickly isolate ongoing issues with your applications and infrastructure. The new service integration allows Amazon CloudWatch Container Insights customers to easily enable and visualize problems identified by Amazon CloudWatch Application Insights for Amazon Elastic Container Service (ECS) Clusters, ECS Services, ECS Tasks, and Amazon Elastic Kubernetes Service Clusters, all in one console.

    To get started, open the Amazon CloudWatch Container Insights console. If you are new to Amazon CloudWatch Application Insights, select Auto-configure Application Insights. If CloudWatch Application Insights is already set up, or once the automatic setup completes, you’ll see a dashboard with any application or resource problems. If there is a problem, you can navigate through the problems to troubleshoot your container applications.

    This service integration is available in all regions where both Amazon CloudWatch Container Insights and Amazon CloudWatch Application Insights are available. View the AWS Regions table for details. To learn more, read the Amazon CloudWatch Container Insights and Amazon CloudWatch Application Insights pages.

    » Visualize all your Kubernetes clusters in one place with Amazon EKS Connector, now generally available

    Posted On: Nov 17, 2021

    Today, we are excited to announce the general availability of Amazon Elastic Kubernetes Service (EKS) Connector. With EKS Connector, you can now extend the EKS console to view your Kubernetes clusters outside of AWS. You can use the EKS console to visualize Kubernetes clusters including your on-premises Kubernetes clusters, self-managed clusters running on Amazon Elastic Compute Cloud (EC2), and clusters from other cloud providers. Once connected, you can see all of your clusters’ statuses, configurations, and workloads in one place on the EKS console.

    We also added support for connected cluster tagging and Kubernetes version display on the EKS console. Registering a cluster is now easier with fewer steps as we’ve automated the Service Linked Role creation. You can also use the Service Quotas console to request connecting more than 10 clusters per region at a time. To learn more about EKS Connector, visit the documentation.
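
    A hedged Boto3 sketch of registering an external cluster follows; the cluster name and role ARN are placeholders, and the provider value shown is one of several the API accepts.

    import boto3

    eks = boto3.client("eks")

    cluster = eks.register_cluster(
        name="on-prem-cluster",  # placeholder display name
        connectorConfig={
            "roleArn": "arn:aws:iam::123456789012:role/eks-connector-agent",  # placeholder
            "provider": "OTHER",
        },
    )

    # Registration returns activation details; applying the connector agent
    # manifest on the external cluster completes the connection.
    print(cluster["cluster"]["connectorConfig"])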

    » AWS Network Firewall achieves ISO compliance

    Posted On: Nov 17, 2021

    Starting today, AWS Network Firewall is compliant with the ISO 9001, ISO 27001, ISO 27017, ISO 27018 and ISO 27701 standards. AWS maintains certifications through extensive audits of its controls to ensure that information security risks that affect the confidentiality, integrity, and availability of company and customer information are appropriately managed.

    AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon Virtual Private Clouds (VPCs). The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. AWS Network Firewall is integrated with AWS Firewall Manager to provide you with central visibility and control of your firewall policies across multiple AWS accounts. To get started with AWS Network Firewall, please see the AWS Network Firewall product page and service documentation.

    You can download copies of the AWS ISO and CSA STAR certificates in AWS Artifact. To learn more, visit AWS Compliance Programs, or visit the AWS Services in Scope by Compliance Program webpage to see a full list of services covered by each compliance program.

    » AWS Marketplace launches upfront contract pricing for Amazon Machine Images (AMI) and Container products

    Posted On: Nov 17, 2021

    Today, AWS announced that customers can purchase Amazon Machine Image (AMI) and Container products from AWS Marketplace with one, two, or three-year contracts on supported products. 

    With this launch, Independent Software Vendors (ISVs) can set upfront payment terms for any custom dimension on their products in AWS Marketplace. Customers have the option to purchase these products immediately from AWS Marketplace using monthly, annual, and multi-year contracts. Customers can also upgrade their contract, buy additional licenses, or accept a privately negotiated price on any of the AMI and Container products that support the contract pricing model.

    Customers can distribute these licenses to other accounts in their organization using AWS License Manager, giving member accounts more flexibility. The new pricing model allows customers to get better pricing with payment terms that fit their procurement process. The initial launch includes products from the sellers Delphix and Cloud Storage Security.

    To learn more, visit the documentation.

    » AWS Glue FindMatches now provides match scores

    Posted On: Nov 17, 2021

    The FindMatches ML transform in AWS Glue now includes an option to output match scores, which indicate how closely the records in each matched grouping match each other. The FindMatches transform allows you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly. FindMatches helps automate complex data cleaning and deduplication tasks.

    AWS Glue FindMatches automates the process of identifying partially matching records for use cases including linking customer records, deduplicating product catalogs, and fraud detection. Use match scoring in FindMatches to understand your FindMatches models, decide if they are trained to your satisfaction, and to decide which records to merge.
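
    A hedged sketch of requesting match scores inside a Glue ETL script follows; the transform ID and catalog names are placeholders, and the module path and the computeMatchConfidenceScores argument are assumptions to verify against the AWS Glue documentation.

    from awsglue.context import GlueContext
    from awsglueml.transforms import FindMatches  # assumed module path
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    records = glue_context.create_dynamic_frame.from_catalog(
        database="crm", table_name="customers"  # placeholders
    )

    # Emit a confidence score for each match group alongside the match IDs.
    matched = FindMatches.apply(
        frame=records,
        transformId="tfm-0123456789abcdef",  # placeholder transform ID
        computeMatchConfidenceScores=True,   # assumed argument name
    )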

    This feature is available in the same AWS Regions as AWS Glue.

    To learn more, visit our documentation and read the FindMatches blog post.

    » Amazon Rekognition text detection supports 7 new languages and improves accuracy

    Posted On: Nov 17, 2021

    Amazon Rekognition can detect and read text in an image, and return bounding boxes for each word found. Starting today, Amazon Rekognition supports text detection in images in 7 new languages: Arabic, Russian, German, French, Italian, Portuguese, and Spanish. Amazon Rekognition automatically detects and extracts text in images in all supported languages, without requiring a language parameter. In addition, Amazon Rekognition delivers higher overall accuracy, with improvements for vertical and curved text in images.

    Customers can use text detection in images for multiple use cases. First, text detection can support content moderation workflows, where detected text can be checked against a list of inappropriate or unwanted words and phrases. Second, customers can use the detected text bounding box area to redact personally identifiable information (PII). Third, text detection can be used to understand how particular words, text placement, and text size impact marketing campaign performance. Fourth, customers can use text detection to easily search for image or video assets with specific keywords or captions in a Digital Asset Management (DAM) system. Additionally, text detection supports mapping, automotive, public safety, and transportation applications, such as reading text on road and street signs. Following is a quote from OLX Group on how they are using Amazon Rekognition for text detection in images:

    “As a leader in the classified marketplace sector and to foster a safe, inclusive and vibrant buying and selling community, it is paramount that we make sure that all products listed on our platforms comply with our rules for product display and authenticity. To do that, among other aspects of the ads, we have placed focus on analyzing the non-organic text featured on images uploaded by our users. We tested Amazon Rekognition’s text detection functionality for this purpose and found that it was highly accurate and augmented our in-house violations detection systems, helping us improve our moderation workflows. Using Amazon Rekognition for text detection we were able to flag 350,000 policy violations last year. It has also helped us save significant amounts in development costs and has allowed us to re-focus data science time on other projects. We are very excited about the upcoming text model update as it will even further expand our capabilities for text analysis.” - Jaroslaw Szymczak, Data Science Manager, OLX Group

    These improvements are now available in all AWS Regions supported by Amazon Rekognition. To get started, please visit the Rekognition Console or download the latest AWS SDK to start building applications. To learn more about text detection with Amazon Rekognition, please refer to our documentation.

    » Announcing general availability of AWS Elastic Disaster Recovery

    Posted On: Nov 17, 2021

    Today we are announcing the general availability of AWS Elastic Disaster Recovery (AWS DRS), a new service that enables organizations to minimize downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications. AWS Elastic Disaster Recovery is the recommended service for disaster recovery to AWS.

    You can use AWS Elastic Disaster Recovery to simplify recovery of a wide range of applications on AWS, including critical databases such as Oracle, MySQL, and SQL Server, and enterprise applications such as SAP. This service uses a unified process for drills, recovery, and failback, so you do not need application-specific skillsets to operate the service.

    AWS Elastic Disaster Recovery continuously replicates your source servers to your AWS account, without performance impact. It reduces costs compared to traditional on-premises disaster recovery solutions by removing idle recovery site resources, and instead leverages affordable AWS storage and minimal compute resources to maintain ongoing replication. If you need to recover applications on AWS, you can do so within minutes. During recovery, you can choose the most up-to-date server state as a recovery point, or choose to recover an operational copy of your applications from an earlier point in time. Point in time recovery is helpful for recovery from data corruption events such as ransomware. After issues are resolved in your primary environment, you can use AWS Elastic Disaster Recovery to fail back your recovered applications.

    AWS Elastic Disaster Recovery is based on the technology of CloudEndure Disaster Recovery, and is operated from the AWS Management Console. This enables seamless integration with other AWS services, such as AWS CloudTrail, AWS Identity and Access Management (IAM), and Amazon CloudWatch. Visit our product comparison page to understand when to use AWS Elastic Disaster Recovery or CloudEndure Disaster Recovery.

    AWS Elastic Disaster Recovery is currently available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London).

    When you use AWS Elastic Disaster Recovery, you can easily add or remove source servers. The service is billed at an hourly rate per replicating source server. Costs for your fully provisioned disaster recovery site on AWS are incurred only when needed for drills or recovery. Visit the pricing page for additional details.

    To learn more about AWS Elastic Disaster Recovery, visit our product page or documentation. To get started, sign in to the AWS Elastic Disaster Recovery Console  to start replicating servers.

    » Amazon Virtual Private Cloud now supports Bring your own IP (BYOIP) in seven additional AWS Regions

    Posted On: Nov 17, 2021

    Starting today, Bring Your Own IP (BYOIP) is available in seven additional AWS Regions. These AWS Regions are Africa (Cape Town), Asia Pacific (Osaka, Seoul), Europe (Milan, Paris, Stockholm), and Middle East (Bahrain). This launch makes BYOIP available in all commercial regions, AWS GovCloud (US-East), and AWS GovCloud (US-West).

    BYOIP allows you to bring your own IPv4 and IPv6 addresses to AWS. You can use these IP addresses the same way you use Amazon provided IPv4 and IPv6 addresses, including advertising them to the Internet. You can also create Elastic IP addresses from your BYOIPv4 addresses and use them with AWS resources such as EC2 instances, Network Load Balancers, and NAT gateways.
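
    A hedged Boto3 sketch of the end-to-end flow follows; the CIDR, authorization context, and pool ID are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Provision the address range you own, with a signed authorization message.
    ec2.provision_byoip_cidr(
        Cidr="203.0.113.0/24",  # placeholder range
        CidrAuthorizationContext={
            "Message": "signed authorization message",  # placeholder
            "Signature": "base64-encoded-signature",    # placeholder
        },
    )

    # Once provisioned, start advertising the range from AWS.
    ec2.advertise_byoip_cidr(Cidr="203.0.113.0/24")

    # Allocate an Elastic IP from the BYOIP pool for use with EC2, NLB, NAT, etc.
    allocation = ec2.allocate_address(PublicIpv4Pool="ipv4pool-ec2-0123456789abcdef0")
    print(allocation["PublicIp"])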

    There is no additional charge to use the BYOIP feature. Also, you don’t have to pay for Elastic IP addresses that you create from BYOIP address prefixes.

    To learn more about BYOIP, read our previous what’s new, blog post, and documentation.

    With this regional expansion, BYOIP is available in the Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo), US West (N. California), US East (N. Virginia), US East (Ohio), US West (Oregon), AWS GovCloud (US-West), and AWS GovCloud (US-East) Regions.

    » FreeRTOS cellular LTE-M interface library is now generally available

    Posted On: Nov 17, 2021

    Starting today, the cellular LTE-M interface library is generally available in FreeRTOS. With this launch, developers will find it easier to build IoT devices that use the cellular LTE-M protocol to connect to the cloud. The main FreeRTOS download includes AWS IoT reference integrations with cellular modules from vendors such as Sierra Wireless, u-blox, and Quectel.

    To learn more about the FreeRTOS cellular interface library, visit the libraries page, and download the FreeRTOS cellular interface library and demos.

    » Amazon Kendra releases AWS Single Sign-On integration for secure search

    Posted On: Nov 17, 2021

    Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. 

    When a user searches for content, organizations want to show only the search results that the user has access to. Organizations can now use the AWS Single Sign-On (AWS SSO) identity store with Amazon Kendra for user context filtering. User context filtering allows organizations to show only content that a user has access to. Amazon Kendra can fetch the access levels of groups and users from an AWS SSO identity store and use this information to return only documents a given user has access to. Amazon Kendra indexes the document access control information, and at search time this is compared with the user and group information retrieved from AWS SSO to return filtered search results. AWS SSO supports identity providers such as Azure AD, CyberArk, and Okta.
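
    A hedged Boto3 sketch of a filtered query follows; the index ID is a placeholder, and passing the user and groups through the UserContext parameter is an assumption to verify against the Amazon Kendra documentation.

    import boto3

    kendra = boto3.client("kendra")

    response = kendra.query(
        IndexId="11111111-2222-3333-4444-555555555555",  # placeholder index ID
        QueryText="parental leave policy",
        UserContext={
            "UserId": "jane@example.com",   # assumed identity store user
            "Groups": ["HR", "Employees"],  # assumed identity store groups
        },
    )

    # Only documents this user can access are returned.
    for item in response["ResultItems"]:
        print(item["DocumentTitle"]["Text"])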

    The Amazon Kendra AWS SSO identity store is available in all commercial AWS regions where Amazon Kendra is available. To learn more about the feature, visit the documentation page. To explore Amazon Kendra, visit the Amazon Kendra website.

    » Amazon Translate Now Extends Support for Active Custom Translation to all language pair combinations

    Posted On: Nov 16, 2021

    Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. Today, we are excited to announce the general availability of Active Custom Translation (ACT) for customizing your translations between any pair of currently supported languages. For example, you can now use ACT between German and French.

    ACT produces custom-translated output without the need to build and maintain a custom translation model. With other custom translation products, customers can spend a lot of time and money on overhead expenses to manage and maintain instances of both the customer data and the custom translation model for each language pair. With ACT, Amazon Translate uses your preferred translation examples as parallel data (PD) to customize the translation output. You can update your PD as often as needed to improve translation quality without having to retrain or manage custom translation models. Currently, ACT is available in the Europe (Ireland), US East (N. Virginia), and US West (Oregon) AWS Regions.
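
    A hedged Boto3 sketch of the workflow follows; the bucket paths, IAM role, and parallel data file are placeholders.

    import boto3

    translate = boto3.client("translate")

    # Register your preferred translation examples as parallel data.
    translate.create_parallel_data(
        Name="de-fr-style-examples",  # placeholder
        ParallelDataConfig={
            "S3Uri": "s3://my-bucket/parallel-data/de-fr.csv",  # placeholder
            "Format": "CSV",
        },
    )

    # Start an asynchronous batch translation job that applies the parallel data.
    translate.start_text_translation_job(
        JobName="de-fr-act-job",  # placeholder
        InputDataConfig={"S3Uri": "s3://my-bucket/input/", "ContentType": "text/plain"},
        OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
        DataAccessRoleArn="arn:aws:iam::123456789012:role/TranslateJobRole",  # placeholder
        SourceLanguageCode="de",
        TargetLanguageCodes=["fr"],
        ParallelDataNames=["de-fr-style-examples"],
    )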

    You can find step-by-step instructions in this “How to use Amazon Translate’s Active Custom Translation” post. For more information on Amazon Translate ACT, see Amazon Translate documentation.

    » Amazon Location Service adds new capabilities to help customers better filter geographical search results

    Posted On: Nov 16, 2021

    Today, Amazon Location Service added five new parameters to help developers filter and process search results for points of interest, addresses (known as geocoding), and geographical positions (known as reverse geocoding). With these new parameters, developers can tailor and optimize location results to meet the needs of their specific applications. For example, developers can choose to select only the closest search result, personalize the results to the end user's preferred language, or enable time-related features such as turning lights on and off in a home automation application.

    Four new return parameters help filter the results of searches. First, a distance measurement indicates how far each of the returned results are from the bias or exact position of their search. Second, an interpolated parameter indicates whether each resulting address is estimated based on known locations or a known location itself. Third, a relevance score is returned for customers searching for a place or point of interest, indicating how closely the returned result matches the original search text. Fourth, timezone indicates the time zone for each returned result. In addition, an input parameter, language, allows developers to specify the preferred language to use for the returned results.
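
    A hedged Boto3 sketch using the new parameters follows; the place index name is a placeholder, and the response fields shown should be verified against the API reference.

    import boto3

    location = boto3.client("location")

    response = location.search_place_index_for_text(
        IndexName="MyPlaceIndex",  # placeholder
        Text="Brandenburger Tor, Berlin",
        Language="de",  # new input parameter: preferred result language
        MaxResults=1,   # keep only the closest match
    )

    for result in response["Results"]:
        place = result["Place"]
        print(place.get("Label"))
        print("relevance:", result.get("Relevance"))       # new: match score
        print("distance:", result.get("Distance"))         # new: distance from the bias position
        print("time zone:", place.get("TimeZone"))         # new: per-result time zone
        print("interpolated:", place.get("Interpolated"))  # new: whether the address was estimated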

    Amazon Location Service is a fully managed service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Customers using the Amazon Location Place API can search for addresses and points of interest data from our high-quality data providers Esri and HERE.

    Amazon Location Service is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

    To learn more, visit the Amazon Location Service developer guide.

    » Amazon Rekognition improves accuracy of content moderation for images

    Posted On: Nov 16, 2021

    Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Amazon Rekognition provides a detailed taxonomy across 35 sub-categories and 10 distinct top-level moderation categories. Starting today, Amazon Rekognition content moderation comes with an improved model for image moderation that significantly reduces false positive rates across all of the moderation categories, particularly ‘explicit nudity’, without reduction in detection rates for truly unsafe content. Lower false positive rates imply lower volumes of flagged images to be reviewed further, leading to a better experience for human moderators and more cost savings.

    Today, many companies employ teams of human moderators to review third-party or user generated content, while others simply react to user complaints to take down offensive or inappropriate content. However, human moderators alone cannot scale to meet these needs at sufficient quality or speed, which could lead to a poor user experience, high costs to achieve scale, or even a loss of brand reputation. Amazon Rekognition content moderation enables you to streamline your moderation workflows using machine learning. Using fully managed moderation APIs, you can quickly review millions of images or thousands of videos, and flag only a small subset of assets for further action. This ensures that you get comprehensive but cost-effective moderation coverage for all your content as your business scales, and you can reduce the burden on your workforce from having to look at large volumes of content for potential moderation. Following is a quote from Flipboard on how they are using Amazon Rekognition for image content moderation:
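
    For illustration, here is a minimal Boto3 sketch of calling the image moderation API; the bucket and object key are placeholders.

    import boto3

    rekognition = boto3.client("rekognition")

    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "uploads/photo.jpg"}},
        MinConfidence=60,  # return only labels at or above this confidence
    )

    # Route the image to human review only when moderation labels are found.
    for label in response["ModerationLabels"]:
        print(label["Name"], label["ParentName"], round(label["Confidence"], 1))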

    “Flipboard is a content recommendation platform that enables publishers, creators, and curators to share stories with readers to help them stay up to date on their passions and interests. On average, Flipboard processes approximately 90M images per day. To maintain a safe and inclusive environment and to confirm that all images comply with platform guidelines at scale, it is crucial to implement a content moderation workflow using machine learning. To build models for this system internally was labor intensive and lacked the accuracy necessary to meet the high quality standards Flipboard users expect. This is where Amazon Rekognition became the right solution for our product. Amazon Rekognition is a highly accurate, easily deployed, and performant content moderation platform that provides a robust moderation taxonomy. Since putting Amazon Rekognition into our workflows we’ve been catching approximately 63K images that violate our standards per day. Moreover, with frequent improvements like the latest content moderation model update, we can be confident that Rekognition will continue to help make Flipboard an even more inclusive and safe environment for our users over time.” - Anuj Ahooja, Sr. Engineering Manager, Flipboard

    Accuracy improvements for Amazon Rekognition content moderation in images are now available in all supported AWS Regions. To get started, you can try the latest version in the Amazon Rekognition Console. For more information on Amazon Rekognition content moderation, refer to feature documentation.

    » Amazon AppStream 2.0 Introduces Linux Application Streaming

    Posted On: Nov 16, 2021

    Amazon AppStream 2.0 adds support for Amazon Linux 2. With this launch, you can now stream Linux applications and desktops to your users, and greatly lower the total streaming cost by migrating MATLAB, Eclipse, Firefox, PuTTY, and other similar applications from Windows to Linux on Amazon AppStream 2.0.

    You can now stream Linux-compatible apps to your users in the same simple way you currently stream Windows apps, at a lower hourly rate, charged per second, and with no per-user fee. With Linux application streaming, you can transform your application delivery into a software as a service (SaaS) model, provide your developers with remote Linux dev environments with popular tools like Python and Docker, enable your designers to access CAD applications that require high-performance GPU Linux desktops from anywhere, and set up cloud-based Linux learning environments for your students.

    Linux application streaming is supported in all AWS Regions where AppStream 2.0 is available. To get started, log into the AppStream 2.0 console, select a Region, launch an Amazon Linux 2-based image builder to install your applications, and create the image for your users. AppStream 2.0 Linux-based instances use per-second billing (with a minimum of 15 minutes). For more information, see Amazon AppStream 2.0 Pricing.

    » AWS Amplify announces the ability to add custom AWS resources to Amplify-created backends using CDK and CloudFormation

    Posted On: Nov 16, 2021

    Today, AWS Amplify announces a new “amplify add custom” command to add any of the 175+ AWS services to an Amplify-created backend using the AWS Cloud Development Kit (CDK) or AWS CloudFormation. The AWS Amplify CLI is a command line toolchain that helps frontend developers create app backends in the cloud. The new ability to add custom resources enables developers to add additional resources beyond Amplify’s built-in use cases with a single command.

    To add custom AWS resources, developers can run "amplify add custom" in their Amplify project. This command provides CDK or CloudFormation starter files. For example, developers can create an SNS topic using "amplify add custom" to send SMS or email notifications to their customers. To reference Amplify-generated resources from the newly added custom AWS resources, developers can use the provided "amplifyParams" parameter in CDK or the "Parameters" object in CloudFormation.

    Learn more about how to setup Amplify CLI’s custom AWS resources feature in our blog post or in the Amplify documentation.

    » AWS Transfer Family adds identity provider options and enhanced monitoring capabilities

    Posted On: Nov 16, 2021

    Starting today, you can use AWS Lambda with your AWS Transfer Family server to integrate an identity provider of your choice, making it easier to authenticate and authorize your users. Additionally, you can now monitor your file transfers using a centralized CloudWatch metrics dashboard in the AWS Transfer Family Management Console.

    AWS Transfer Family provides fully managed file transfers over SFTP, FTPS, and FTP for Amazon S3 and Amazon EFS. Today, you have three options for managing identities with AWS Transfer Family – service-managed, Microsoft Active Directory (AD) integration using AWS Directory Service, and a custom identity provider of your choice. Until today, supplying an API Gateway URL was required to integrate a custom identity provider, even when using AWS Lambda to interface with the identity provider. With this launch, you can directly integrate your identity provider using AWS Lambda, simplifying user access management. Continue using Amazon API Gateway if you need a RESTful API to connect to your identity provider, or if you want to leverage AWS WAF for its rate limiting and geo-blocking capabilities.
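
    A hedged sketch of such a Lambda function follows; the credential check is a stand-in for a real identity provider, and the response fields mirror those documented for the API Gateway-based custom identity provider option, so verify the exact field names against the documentation.

    import os

    def handler(event, context):
        user = event.get("username")
        password = event.get("password", "")

        # Placeholder check - call your real identity provider here.
        if user != "alice" or password != os.environ.get("DEMO_PASSWORD"):
            return {}  # an empty response denies access

        return {
            "Role": "arn:aws:iam::123456789012:role/TransferS3AccessRole",  # placeholder
            "HomeDirectory": f"/my-bucket/home/{user}",                     # placeholder
        }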

    You can now also access CloudWatch graphs for metrics such as number of files and bytes transferred in the AWS Transfer Family Management Console, giving you a single pane of glass to monitor file transfers using a centralized dashboard.

    Support for both features is available in all AWS Regions where AWS Transfer Family is available. To learn more about using AWS Lambda to integrate an identity provider, read the documentation or deploy this CloudFormation template to get started.

    » Observe SAP HANA databases with Amazon CloudWatch Application Insights

    Posted On: Nov 16, 2021

    Amazon CloudWatch Application Insights now supports observability for SAP HANA databases so you can troubleshoot and resolve problems impacting your SAP HANA-based workloads more easily.

    CloudWatch Application Insights for SAP HANA can be configured in just a few minutes. It creates dashboards with recommended monitors that provide a HANA-centric view into key performance and availability metrics of the database, operating system, and AWS infrastructure layers.

    Additionally, it correlates and analyzes log data to identify warnings of common problems such as backup failures, high resource utilization, replication issues, and critical process failures in the HANA database. These capabilities help you troubleshoot and resolve issues with your SAP applications, so you can reduce mean time to repair (MTTR).

    To get started, visit the CloudWatch detail page. To learn more, visit the CloudWatch Application Insights documentation.

    » Amazon MQ now supports RabbitMQ version 3.8.23

    Posted On: Nov 16, 2021

    You can now launch RabbitMQ 3.8.23 brokers on Amazon MQ. This patch update to RabbitMQ contains several fixes and enhancements compared to the previously supported version, RabbitMQ 3.8.22.

    Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.

    You can upgrade RabbitMQ with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled, AWS automatically upgrades the broker to version 3.8.23 during the prescribed maintenance window. To learn more about upgrading, please see: Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide.
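
    For a programmatic upgrade, a hedged Boto3 sketch follows; the broker ID is a placeholder.

    import boto3

    mq = boto3.client("mq")

    # Request the new engine version; the change takes effect at the
    # next reboot or maintenance window.
    mq.update_broker(
        BrokerId="b-1234abcd-56ef-78ab-90cd-example12345",  # placeholder
        EngineVersion="3.8.23",
    )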

    RabbitMQ 3.8.23 includes the fixes and features of all previous releases of RabbitMQ. To learn more, read the RabbitMQ Changelog.

    » AWS Glue FindMatches now supports incrementally matching new data against an existing dataset

    Posted On: Nov 16, 2021

    The FindMatches ML transform in AWS Glue now allows you to match newly arrived data against existing matched datasets. The FindMatches transform allows you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly. It makes it faster and easier to clean and deduplicate data sets.

    AWS Glue FindMatches automates the process of identifying partially matching records for use cases including linking customer records, deduplicating product catalogs, and fraud detection. Use incremental matching in FindMatches to match new data against existing data without combining the datasets and mixing matched and unmatched data.
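
    A hedged sketch of incremental matching inside a Glue ETL script follows; the module path, class, and argument names are assumptions to verify against the AWS Glue documentation, and the transform ID and catalog names are placeholders.

    from awsglue.context import GlueContext
    from awsglueml.transforms import FindIncrementalMatches  # assumed module path
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    existing = glue_context.create_dynamic_frame.from_catalog(
        database="crm", table_name="customers_matched"  # placeholder
    )
    incoming = glue_context.create_dynamic_frame.from_catalog(
        database="crm", table_name="customers_new"  # placeholder
    )

    # Match only the new records against the existing matched dataset.
    matched = FindIncrementalMatches.apply(
        existingFrame=existing,
        incrementalFrame=incoming,
        transformId="tfm-0123456789abcdef",  # placeholder transform ID
    )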

    This feature is available in the same AWS Regions as AWS Glue.

    To learn more, visit our documentation and read the blog post.

    » AWS Network Firewall is now SOC compliant

    Posted On: Nov 16, 2021

    AWS Network Firewall is now SOC 1, SOC 2, and SOC 3 compliant. You can now use AWS Network Firewall for use cases that are subject to System and Organization Controls (SOC) reporting. AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives.

    In addition to meeting standards for SOC, AWS Network Firewall is HIPAA eligible, PCI-DSS compliant, and ISO (9001, 27001, 27017, 27018, 27701) and CSA STAR Level 2 V4 compliant. AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon Virtual Private Clouds (VPCs). The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. AWS Network Firewall is integrated with AWS Firewall Manager to provide you with central visibility and control of your firewall policies across multiple AWS accounts. To get started with AWS Network Firewall, please see the AWS Network Firewall product page and service documentation.

    You can download the AWS SOC reports in AWS Artifact. To learn more, visit AWS Compliance Programs, or you can go to the AWS Services in Scope by Compliance Program webpage to see a full list of services covered by each compliance program.

    » New and improved Amazon Athena console is now generally available

    Posted On: Nov 16, 2021

    Amazon Athena’s redesigned console is now generally available in all AWS commercial and GovCloud regions where Athena is available. The new and improved console brings a modern, more personalized experience to all of the features you enjoy in the current console and includes several new features which make analyzing data with Athena more powerful and productive.

    When using Athena’s new console, you’ll benefit from new features that improve the experience of developing queries, analyzing data, and managing your usage. You can now:

  • Rearrange, navigate to, or close multiple query tabs from a redesigned query tab bar.
  • Read and edit queries more easily with an improved SQL formatter and new text formatting themes.
  • Copy query results to your clipboard in addition to downloading the full result set.
  • Sort your query history, saved queries, and workgroups and choose which columns to show or hide.
  • Use a simplified interface to configure data sources and workgroups in fewer clicks.
  • Set preferences for displaying query results, query history, line wrapping, and more.
  • Increase your productivity with new and improved keyboard shortcuts and embedded product documentation.

    With today’s announcement, the new and improved console is now the default console experience. If desired, you may use the earlier console until further notice. To switch to the earlier console, log into your AWS account, choose Amazon Athena, and deselect New Athena experience from the navigation panel on the left.

    To access the numerous enhancements of the new Athena console, log into your AWS account today and navigate to Amazon Athena. And, tell us about your experience by clicking Feedback in the bottom-left corner of the console.

    » AWS Snow Family now supports external NTP server configuration

    Posted On: Nov 16, 2021

    AWS Snow Family now supports external Network Time Protocol (NTP) server configuration on Snowball Edge and Snowcone devices. With external NTP support, customers can synchronize device time with NTP servers they provide.

    With this launch, customers can achieve time synchronization on Snowball Edge and Snowcone by configuring external NTP servers. This enables them to deploy Snow devices at rugged, mobile edge locations for the long term (more than one year) and keep device time synchronized with surrounding systems, such as a private 5G core network deployed at farm locations or a video analytics application at industrial ports. Prior to this launch, Snowball Edge and Snowcone devices relied on their internal device time, which may have deviated from other systems over time.

    To use this feature, customers can provide up to 5 NTP servers for their Snowball Edge or Snowcone devices. This feature is available for devices ordered on or after November 16, 2021. This feature is available in all AWS Regions where AWS Snow Family is available and at no additional cost. To learn more, visit the AWS Snow Family documentation. Log into the AWS Snow Family console to get started.

    » Amazon Connect launches API to configure security profiles programmatically

    Posted On: Nov 15, 2021

    Amazon Connect now provides an API to programmatically create and manage security profiles. Security profiles help you manage who can access and perform actions in Amazon Connect, such as using the Contact Control Panel (CCP), adding a new agent, or viewing the built-in reports. Using this API, you can programmatically update security profiles as your Amazon Connect access control needs change. To learn more, see the API documentation.
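
    A hedged Boto3 sketch follows; the instance ID is a placeholder and the permission name is an illustrative assumption - consult the API documentation for the valid set.

    import boto3

    connect = boto3.client("connect")

    profile = connect.create_security_profile(
        InstanceId="12345678-1234-1234-1234-123456789012",  # placeholder
        SecurityProfileName="SupervisorsReadOnly",
        Description="Read-only access to real-time metrics",
        Permissions=["BasicAgentAccess"],  # assumed permission name
    )
    print(profile["SecurityProfileArn"])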

    The security profiles APIs are available in all AWS Regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact center as a service solution in the cloud, please visit the Amazon Connect website.

    » Amazon Connect Customer Profiles now provides a contact block to personalize customer service

    Posted On: Nov 15, 2021

    Amazon Connect Customer Profiles now offers a contact block that enables contact center managers to personalize the contact center experience without writing code. Using the graphical user interface of Amazon Connect’s contact flow builder and the new Customer Profiles contact block, contact center managers can create personalized experiences that leverage customer information such as name and address. For example, you can play a personalized greeting using the customer name from the Customer Profiles block, or route customers to different queues based on their address. The new flow block also enables you to update customer information using inputs customers provide, helping you keep profiles up to date with the latest customer information.

    Customer Profiles is a feature of Amazon Connect that makes it simple to bring together customer information (e.g., contact information, contact history, purchase history) from multiple applications into a unified customer profile, delivering the profile directly to the agent as soon as they begin interacting with the customer. To learn more about Amazon Connect Customer Profiles please visit our website.

    » AWS Launch Wizard now supports Microsoft SQL Server deployments using Amazon EBS gp3, io2, and io2 Block Express volumes

    Posted On: Nov 15, 2021

    AWS Launch Wizard supports Amazon Elastic Block Store (EBS) gp3, io2, and io2 Block Express volumes for Microsoft SQL Server deployments. Now you can take full advantage of the new generations of EBS volumes when you use Launch Wizard for the high availability or single node deployments of SQL Server on Amazon EC2.

    AWS Launch Wizard enables you to easily size, configure, and deploy SQL Server on EC2. Launch Wizard can recommend appropriate EBS volumes based on your storage performance requirements, and you have the ability to choose an appropriate EBS volume. EBS io2 volumes are ideal for your IOPS-intensive and throughput-intensive SQL Server workloads that require low latency. io2 is designed to provide up to 64,000 IOPS and 1,000 MB/s of throughput per volume with 99.999% durability. io2 Block Express offers the highest-performance block storage, with 4x higher throughput, IOPS, and capacity than io2 volumes, along with sub-millisecond latency. gp3 volumes are ideal for your SQL Server workloads that require high performance at low cost. With gp3, you get a baseline performance of 3,000 IOPS and 125 MB/s, at up to a 20% lower price per GB than gp2 volumes.

    AWS Launch Wizard for SQL Server is available at no additional charge. You only pay for the AWS resources that are provisioned for running your SQL Server. To learn more about using AWS Launch Wizard, visit the AWS Launch Wizard page and technical documentation.

    » AWS IoT Greengrass now supports Microsoft Windows devices

    Posted On: Nov 15, 2021

    AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software. With this release, AWS IoT Greengrass version 2.5 adds support for Microsoft Windows devices. Windows gateway devices are commonly used in industrial IoT scenarios to automate manufacturing operations by collecting local sensor and equipment data and triggering local actions using application business logic. For example, consider an automotive assembly line where a steel stamping press creates a complex part that is prone to defects. Quality Control (QC) automation can be built using a video camera stream fed to a gateway device that uses local ML inference to check part dimensions and find cosmetic defects. The gateway could then notify operators if defects are identified.

    Building this solution with AWS is easy. First, you install AWS IoT Greengrass on your Windows gateway device from the console in minutes. Once AWS IoT Greengrass is running, you can deploy your QC application onto the gateway with just a few clicks. You can also use the managed MQTT broker to receive messages locally from your video camera, or use the AWS IoT Greengrass device as a hub to aggregate data from other local sensors and equipment, process and take action on those messages, run local ML inference, and easily send data to the AWS cloud to be analyzed. Once you have a Windows gateway up and running, you can easily replicate this configuration to thousands of gateways across your manufacturing sites in hours, not weeks, using AWS IoT fleet provisioning.

    We have extensively tested AWS IoT Greengrass version 2.5 on Windows Server 2019 and Windows 10. If you are unsure if your Windows-based device can run AWS IoT Greengrass and interoperate with other AWS IoT services, you can use AWS IoT Device Tester to run a series of automated functional validation tests.

    Please see the AWS Region table for all the regions where AWS IoT Greengrass 2.5 is available. To learn more, visit our product page and view the updated developer guide.

    » AWS App Runner supports AWS CDK to build and deploy applications

    Posted On: Nov 15, 2021

    AWS App Runner now supports using the AWS Cloud Development Kit (AWS CDK) to build and deploy applications. AWS CDK enables you to compose your infrastructure across AWS from a single source using familiar programming languages such as Python and Node.js. With the AWS CDK integration, you can create App Runner services by defining your source code location as Amazon Elastic Container Registry (Amazon ECR) Public, Amazon ECR private, or GitHub. You can also create the required Identity and Access Management (IAM) roles using the AWS CDK for other services your application uses, such as Amazon DynamoDB and AWS Lambda.

    You can use the service construct of the AWS CDK with the source property to define the source location. To create and manage IAM roles, you can define the instanceRole and accessRole properties in the service construct. For more information on using the AWS CDK with AWS App Runner, see the AWS CDK documentation. To learn more about App Runner, refer to the developer guide.
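
    As a sketch of what this looks like in Python with the experimental App Runner module (construct and property names follow the CDK v1 API; the container image and port are assumptions):

      from aws_cdk import core
      from aws_cdk import aws_apprunner as apprunner  # experimental L2 module

      class AppRunnerStack(core.Stack):
          def __init__(self, scope, construct_id, **kwargs):
              super().__init__(scope, construct_id, **kwargs)
              # Define the service source as a public ECR image; App Runner
              # builds, deploys, and scales the service from this definition.
              apprunner.Service(self, "HelloService",
                  source=apprunner.Source.from_ecr_public(
                      image_configuration=apprunner.ImageConfiguration(port=8000),
                      image_identifier="public.ecr.aws/aws-containers/hello-app-runner:latest",
                  ),
              )

      app = core.App()
      AppRunnerStack(app, "AppRunnerExample")
      app.synth()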

    » AWS Amplify announces the ability to override Amplify-generated resources using CDK

    Posted On: Nov 15, 2021

    AWS Amplify announces the ability for developers to override Amplify-generated IAM, Cognito, and S3 configuration to best meet app requirements. The AWS Amplify CLI is a command line toolchain that helps frontend developers create app backends in the cloud. With the new override capability, developers can easily configure their backend with Amplify-provided defaults but still customize fine-grained resource settings.

    The new “amplify override auth” command generates a developer-configurable “overrides” TypeScript function which provides Amplify-generated resources as CDK constructs. For example, developers can set auth settings that are not directly available in the Amplify CLI workflow, such as the number of valid days for a temporary password. Developers can also customize S3 and DynamoDB resources configured through Amplify's storage category. For example, developers can run the “amplify override storage” command to enable Transfer Acceleration for Amplify-generated S3 buckets. Also new in this release, developers can modify Amplify-generated IAM roles for authenticated and unauthenticated access deployed as part of an Amplify backend's root CloudFormation stack. For example, developers can run "amplify override project" to change the authenticated and unauthenticated IAM role names to comply with organization-specific naming conventions.

    Learn more about how to set up the Amplify CLI's resource overrides feature in our blog post or in the Amplify documentation.

    » AWS Step Functions Synchronous Express Workflows now supports AWS PrivateLink

    Posted On: Nov 15, 2021

    AWS Step Functions’ Synchronous Express Workflows now supports AWS PrivateLink, allowing you to start a Synchronous Express Workflow from your Virtual Private Cloud (VPC) without traversing the public internet.

    AWS Step Functions is a low-code, visual workflow service that developers can use to help build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. Express Workflows are ideal for high-throughput, short-duration workloads, and Synchronous Express Workflows additionally allow developers to receive the workflow response directly, without needing to poll additional services or build a custom solution. AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet.

    Now, with AWS PrivateLink support, you can start Synchronous Express Workflows while traffic remains within the AWS network, which can reduce the risk of distributed denial of service (DDoS) and man-in-the-middle (MITM) attacks. PrivateLink also makes it easier to connect services across different accounts and VPCs, helping simplify your network architecture. Synchronous Express Workflows let you coordinate more than 200 AWS services and over 9,000 API actions. You will need to create a new VPC endpoint to connect to Synchronous Express Workflows, but no code changes are required for your SDK configurations if private DNS resolution is enabled in your VPC and on the VPC endpoint.
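
    As a rough sketch with the AWS SDK for Python (boto3), assuming the sync-states endpoint service name from the Step Functions documentation and placeholder VPC, subnet, security group, and state machine identifiers:

      import boto3

      ec2 = boto3.client("ec2")
      sfn = boto3.client("stepfunctions")

      # One-time setup: an interface VPC endpoint for the Synchronous Express
      # Workflows endpoint (service name pattern per the Step Functions docs).
      ec2.create_vpc_endpoint(
          VpcEndpointType="Interface",
          VpcId="vpc-0123456789abcdef0",
          ServiceName="com.amazonaws.us-east-1.sync-states",
          SubnetIds=["subnet-0123456789abcdef0"],
          SecurityGroupIds=["sg-0123456789abcdef0"],
          PrivateDnsEnabled=True,
      )

      # With private DNS enabled, the usual SDK call now resolves to the
      # endpoint inside the VPC -- no code changes beyond normal usage.
      result = sfn.start_sync_execution(
          stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:MyExpress",
          input='{"orderId": "1234"}',
      )
      print(result["status"])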

    Synchronous Express Workflow support for AWS PrivateLink is generally available in all commercial regions where Synchronous Express Workflows is available. For a complete list of regions and service offerings, see AWS Regions.

    To learn more, visit the Amazon VPC endpoints page in the AWS Step Functions Developer Guide, or see our documentation on Express Workflows.

    » AWS IoT Device Management is now supported on AWS CloudFormation

    Posted On: Nov 15, 2021

    We are excited to announce that AWS IoT Device Management resources are now supported on AWS CloudFormation. With a few clicks, you can now use a CloudFormation template to pre-configure and deploy IoT fleet management infrastructure like Job Templates, Fleet Metrics, and IoT Logging settings in a standardized and repeatable way across multiple regions and accounts.

    AWS IoT Device Management is a fully managed service that enables you to organize, monitor, manage, and update your fleet of IoT devices at scale. AWS CloudFormation automates the deployment of your IoT Fleet Management infrastructure. You can write a simple YAML or JSON file to deploy your IoT fleet management resources in all AWS regions where AWS IoT Device Management is available.

    To learn more about the AWS IoT Device Management resource types that AWS CloudFormation supports, visit the IoT resource type reference.

    » AWS releases open source JDBC driver to connect to Amazon Neptune

    Posted On: Nov 15, 2021

    AWS released an open source Java Database Connectivity (JDBC) driver to connect to Amazon Neptune. This makes it easy for customers to connect to Neptune with tools and libraries that support JDBC, such as popular Business Intelligence (BI) tools.

    Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Customers have asked us for an easy way to connect Neptune to BI tools in order to visualize their data and better understand the results of their queries. Customers can now connect their graph database to these tools, such as Tableau, to visualize the results of their queries.

    There are no additional costs to using the JDBC driver with Amazon Neptune; customers only pay for the Neptune resources they consume. Learn more about the driver here. To download the driver and get started, visit the JDBC driver’s GitHub project page. You can use the GitHub project to file issues and open feature requests.

    » Safer interrupt management demo for FreeRTOS kernel

    Posted On: Nov 15, 2021

    FreeRTOS now contains example code that demonstrates a method of minimizing the time an application spends in privileged mode in FreeRTOS ports on microcontrollers (MCUs) with Memory Protection Unit (MPU) support. FreeRTOS ports with MPU support enable MCU applications to be more robust and secure by running application tasks in unprivileged mode, where they have access only to their own stacks and pre-configured memory regions. The only application code that runs in privileged mode on these MPU-enabled MCUs consists of Interrupt Service Routines (ISRs). The example code demonstrates an approach that keeps ISRs short and defers most of the application work to unprivileged FreeRTOS tasks, which helps improve the security of the application by minimizing the time it spends in privileged mode.

    To learn more and get started, visit the demo page and download the example code from the Downloads page or GitHub.

    » Sheet Change Performance Optimizations is now generally available for Amazon QuickSight

    Posted On: Nov 12, 2021

    Amazon QuickSight now refreshes visuals when switching sheets only if required, such as when a parameter filter change is made. This creates a seamless sheet-change experience for users by further enhancing QuickSight visual load time performance.

    This setting can be enabled or disabled at the dashboard level. More details can be found here.

    » Amazon Athena announces cross-account federated query

    Posted On: Nov 12, 2021

    If you have data in sources other than Amazon S3, you can use Amazon Athena federated query to analyze the data in-place or build pipelines that extract and store data in Amazon S3. Until today, querying this data required the data source and its connector to use the same AWS account as the user querying the data. Athena now supports cross-account federated query to enable teams of analysts, data scientists, and data engineers to query data stored in other AWS accounts.

    Enabling cross-account federated query is simple with Athena’s upgraded console. Using an AWS account with administrator privileges, deploy and configure one of Athena’s pre-built connectors, or a custom connector developed with Athena’s connector SDK, by selecting Data sources from the Athena console and then choosing Connect data source.

    Once the connector is configured for cross-account access, navigate to the Athena console from your AWS account and select Data sources followed by Custom or shared connector to add the connector as a new data source. You can now query the data using the Athena console, compatible SQL clients, business intelligence applications, and AWS SDK. To learn more about querying federated sources see Using Amazon Athena Federated Query and Writing Federated Queries.
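
    For example, once a shared connector is registered as a data source, querying it from the AWS SDK for Python (boto3) looks like any other Athena query; the catalog, database, table, and output location below are placeholders:

      import boto3

      athena = boto3.client("athena")

      # Query a federated data source registered in this account whose
      # connector runs in another AWS account.
      response = athena.start_query_execution(
          QueryString="SELECT * FROM orders LIMIT 10",
          QueryExecutionContext={"Catalog": "dynamo_shared", "Database": "sales"},
          ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
      )
      print(response["QueryExecutionId"])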

    There are no new charges for querying connectors in another account, but Athena’s standard rates for data scanned, Lambda usage, and other services apply. For more information on Athena pricing, visit the pricing page.

    » Amazon Connect launches Contact APIs to fetch and update contact details programmatically

    Posted On: Nov 12, 2021

    Amazon Connect now provides Contact APIs that allow you to describe contact details (e.g., queue information, chat attachments, task references) and update contact information (e.g., task name). The new APIs offer more flexible ways to interact and manage contacts and enable you to create customized experiences for your customers. For example, with these APIs, you can add or update contact details programmatically from your business applications, like Customer Relationship Management (CRM). You can also retrieve contact progress timestamps (e.g., enqueued, connected with an agent, disconnected) for use in a custom reporting solution or workforce management solution. To learn more, see the API documentation.
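
    A minimal sketch of the new APIs with the AWS SDK for Python (boto3); the instance and contact IDs are placeholders:

      import boto3

      connect = boto3.client("connect")
      instance_id = "11111111-2222-3333-4444-555555555555"  # placeholder
      contact_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder

      # Fetch contact details such as queue information and timestamps.
      details = connect.describe_contact(InstanceId=instance_id, ContactId=contact_id)
      print(details["Contact"].get("QueueInfo"))

      # Update the contact, e.g. rename a task and attach a reference URL.
      connect.update_contact(
          InstanceId=instance_id,
          ContactId=contact_id,
          Name="Follow up on order 1234",
          References={"CRM record": {"Value": "https://crm.example.com/1234", "Type": "URL"}},
      )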

    The new Amazon Connect Contact APIs are available in all AWS Regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact-center-as-a-service solution in the cloud, please visit the Amazon Connect website.

    » Amazon Connect launches scheduled tasks

    Posted On: Nov 12, 2021

    Amazon Connect now allows customers to schedule tasks up to six days in the future to follow up on customer issues when promised. For example, you can call a customer back on a particular date/time to provide a status update on their issue, or follow up with an internal team for progress updates on a customer service issue. Additionally, customers can now update a task's scheduled date/time using the UpdateContactSchedule API. Amazon Connect Tasks empowers contact center managers to prioritize, assign, track, and automate customer service tasks across the disparate applications used by agents. You can dynamically prioritize and assign tasks based on agent skill set, availability, information about the task (e.g., type, priority/urgency, category), and now a scheduled date/time. Amazon Connect Tasks provides pre-built integrations with CRM applications (e.g., Zendesk, Salesforce) and APIs to more easily integrate with your homegrown and business-specific applications.
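
    As an illustrative sketch with the AWS SDK for Python (boto3), using placeholder instance and contact flow IDs:

      import boto3
      from datetime import datetime, timedelta, timezone

      connect = boto3.client("connect")
      instance_id = "11111111-2222-3333-4444-555555555555"  # placeholder

      # Create a task scheduled two days out (must be within six days).
      task = connect.start_task_contact(
          InstanceId=instance_id,
          ContactFlowId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder flow ID
          Name="Call customer back with status update",
          ScheduledTime=datetime.now(timezone.utc) + timedelta(days=2),
      )

      # Later, move the scheduled date/time with the new API.
      connect.update_contact_schedule(
          InstanceId=instance_id,
          ContactId=task["ContactId"],
          ScheduledTime=datetime.now(timezone.utc) + timedelta(days=3),
      )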

    Amazon Connect Tasks is available in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Seoul). Scheduled tasks are priced at the same rate as other tasks. To learn more about Amazon Connect Tasks pricing visit the Amazon Connect pricing page. To learn more, see the API reference guide, help documentation, visit our webpage, or read this blog post that provides instructions on how to setup Amazon Connect Tasks for your contact center.

    » Amazon SageMaker Autopilot now generates additional data insights and recommendations

    Posted On: Nov 12, 2021

    Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. As a part of building models, SageMaker Autopilot automatically cleans, prepares and preprocesses data to optimize performance of machine learning models. Starting today, Autopilot generates several additional data insights that can help you improve the quality of data and thereby build higher quality models that better meet your business needs.

    Some of the most important data insights now generated include prediction power, correlation between features, target column distribution, duplicate rows, anomalous rows, imbalanced class distribution, and cardinality of the target for multi-class classification. These insights are presented in the data exploration notebook generated by Autopilot and are available to you early on, before the training process is underway. Wherever possible, these insights also include recommendations to fix any detected data quality issues before Autopilot attempts to automatically pre-process and curate the data.

    The data insights and recommendations are now generated in all AWS Regions where SageMaker Autopilot is currently supported. To learn more, see data insights. To get started with SageMaker Autopilot, see the Getting Started guide or access Autopilot within SageMaker Studio.

    » Amazon announces new NVIDIA Triton Inference Server on Amazon SageMaker

    Posted On: Nov 12, 2021

    Today, we are excited to announce NVIDIA Triton™ Inference Server on Amazon SageMaker, enabling customers who choose NVIDIA Triton as their model server to bring their containers and deploy them at scale in SageMaker. 

    NVIDIA Triton is an open source model server that runs trained ML models from multiple ML frameworks including PyTorch, TensorFlow, XGBoost, and ONNX. Triton is an extensible server to which developers can add new frontends, which can receive requests in specific formats, and new backends, which can handle additional model execution runtimes. AWS worked closely with NVIDIA to add a new Triton frontend that is compatible with SageMaker hosted containers and a new backend that is compatible with SageMaker Neo-compiled models. As a result, customers can easily build a custom container that includes their model with Triton and bring it to SageMaker. SageMaker Inference will handle the requests and automatically scale the container as usage increases, making model deployment with Triton on AWS easier.
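
    Deployment follows the usual SageMaker hosting flow; the sketch below uses the AWS SDK for Python (boto3) with a placeholder Triton image URI (the real, region-specific URI is published in the SageMaker documentation), model artifact location, and execution role:

      import boto3

      sm = boto3.client("sagemaker")

      # Placeholder: replace with the published Triton container URI for your Region.
      triton_image = "<account>.dkr.ecr.us-east-1.amazonaws.com/sagemaker-tritonserver:<tag>"

      # Register the model, then create an endpoint configuration and endpoint.
      sm.create_model(
          ModelName="triton-demo",
          ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
          PrimaryContainer={
              "Image": triton_image,
              "ModelDataUrl": "s3://my-bucket/triton/model.tar.gz",  # placeholder
          },
      )
      sm.create_endpoint_config(
          EndpointConfigName="triton-demo-config",
          ProductionVariants=[{
              "VariantName": "AllTraffic",
              "ModelName": "triton-demo",
              "InstanceType": "ml.g4dn.xlarge",
              "InitialInstanceCount": 1,
          }],
      )
      sm.create_endpoint(EndpointName="triton-demo", EndpointConfigName="triton-demo-config")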

    Support for NVIDIA Triton™ Inference Server in Amazon SageMaker is available in all regions where Amazon SageMaker is available at no additional cost for the Triton Inference Server container. Read the blog and documentation to learn more.

    » Amazon Connect now enables you to create and orchestrate tasks directly from Flows

    Posted On: Nov 12, 2021

    Amazon Connect now allows customers to create and orchestrate tasks directly from contact flows based on customer input (e.g., dual-tone multi-frequency (DTMF)) or call, chat, and task information (e.g., type, priority/urgency, category, scheduled date/time), without any coding required. For example, when a customer reaches out after office hours, you can automatically create a task for an agent to follow up with them when available. Amazon Connect Tasks empowers contact center managers to prioritize, assign, track, and automate customer service tasks across the disparate applications used by agents. You can turn this on in a few clicks by using the Create tasks flow block in your contact flows.

    Amazon Connect Tasks is available in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about Amazon Connect Tasks pricing visit the Amazon Connect pricing page. To learn more, see the help documentation, or visit our webpage.

    » Unified Search in the AWS Management Console now includes blogs, knowledge articles, events, and tutorials

    Posted On: Nov 12, 2021

    We are excited to announce that blogs, knowledge articles, events, and tutorials are available in Unified Search to enable users to easily search and discover information in the AWS Management Console. AWS users can now search for blogs (e.g., Implementing Auto Scaling for EC2 Mac Instances), knowledge articles (e.g., Set Your Preferences for AWS Emails), tutorials (e.g., Remotely Run Commands on an EC2 Instance), and events (e.g., AWS Container Day) without leaving the AWS Management Console.

    You can access Unified Search by signing into AWS Management Console. Unified Search is available in all public AWS Regions.

    » Announcing general availability of Amazon EC2 G5 instances

    Posted On: Nov 12, 2021

    Today we are announcing the general availability of Amazon EC2 G5 instances powered by NVIDIA A10G Tensor Core GPUs. G5 instances can be used for a wide range of graphics intensive and machine learning use cases. They deliver up to 3x higher performance for graphics-intensive applications and machine learning inference, and up to 3.3x higher performance for training simple to moderately complex machine learning models when compared to Amazon EC2 G4dn instances.

    G5 instances feature up to 8 NVIDIA A10G Tensor Core GPUs and 2nd generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.6 TB of local NVMe SSD storage. With eight G5 instance sizes that offer access to single GPU or multiple GPUs, customers have the flexibility to pick the right instance size for their applications.

    Customers can use G5 instances for graphics-intensive applications such as remote workstations, video rendering, and cloud gaming to produce high fidelity graphics in real time. Machine learning customers can use G5 instances for high performance and cost-efficient training and inference for natural language processing, computer vision, and recommender engine use cases.

    With access to NVIDIA’s Tesla drivers for compute workloads, GRID drivers to provision RTX Virtual Workstations, and Gaming drivers at no additional cost, customers can easily optimize the G5 instances for their workloads. 

    Amazon EC2 G5 instances are available today in the AWS US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions. Customers can purchase G5 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.

    To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G5 instance page.
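
    For example, launching a single-GPU G5 instance with the AWS SDK for Python (boto3), using placeholder AMI and key pair values:

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Launch one g5.xlarge instance (smallest single-GPU G5 size).
      ec2.run_instances(
          ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
          InstanceType="g5.xlarge",
          KeyName="my-key-pair",            # placeholder key pair
          MinCount=1,
          MaxCount=1,
      )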

    » AWS Amplify announces new observeQuery API for Amplify DataStore to help apps with real-time data open faster

    Posted On: Nov 11, 2021

    With today’s release, developers can use AWS Amplify DataStore’s new observeQuery API to help open apps faster using locally stored data, and then update the app UI with real-time data, no additional code required. DataStore provides frontend app developers the ability to build real-time apps with offline capabilities by storing data on-device (web browser or mobile device) and automatically synchronizing data to the cloud and across devices on an internet connection. With the new observeQuery API, developers can retrieve both locally stored data and subscribe to subsequent data changes synced from the cloud with a single API call.

    A low latency first-run experience for mobile and web apps is critical to a great customer experience. With Amplify DataStore, developers can retrieve local offline app data to hydrate the application user interface, reducing the latency on the customer experience. As the app comes online, subsequent changes to the app data can automatically be repopulated in the application UI, with no change in code. The new observeQuery API also saves developers the overhead of managing data availability states within their app code, helping developers write less code and better architected apps.

    Install the latest Amplify Library for JavaScript, Android, and iOS and get started by checking out our documentation.

    » Achieve up to 30% better performance with Amazon DocumentDB (with MongoDB compatibility) using new Graviton2 instances

    Posted On: Nov 11, 2021

    Amazon DocumentDB (with MongoDB compatibility) is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads. 

    Amazon DocumentDB (with MongoDB compatibility) now supports the T4g.medium and R6g instance types (Graviton2 instances). Graviton2 instances provide up to 30% performance improvement for Amazon DocumentDB workloads depending on database size.

    AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores and deliver several performance optimizations over first-generation AWS Graviton processors. This includes 7x the performance, 4x the number of compute cores, 2x larger private caches per core, 5x faster memory, and 2x faster floating-point performance per core. Additionally, the AWS Graviton2 processors feature always-on fully encrypted DDR4 memory and 50% faster per core encryption performance. These performance improvements make Graviton2 R6g database instances a great choice for database workloads. Amazon DocumentDB Graviton2 instances are available in sizes medium to 16xlarge. 

    You can launch Graviton2 R6g instances and T4g.medium instances in the Amazon DocumentDB Management Console or using the AWS CLI. Upgrading an existing Amazon DocumentDB instance to Graviton2 requires a simple instance type modification, using the same steps as any other instance modification. Your applications will continue to work as normal and you will not have to port application code. For more details on database version support, refer to our documentation. For more information on how to get started, see our getting started guide. For full pricing and regional availability, see Amazon DocumentDB pricing.
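
    A minimal sketch of that modification with the AWS SDK for Python (boto3), using a placeholder instance identifier:

      import boto3

      docdb = boto3.client("docdb")

      # Move an existing instance to a Graviton2 instance class; this is the
      # same modification flow as any other instance class change.
      docdb.modify_db_instance(
          DBInstanceIdentifier="my-docdb-instance",  # placeholder
          DBInstanceClass="db.r6g.large",
          ApplyImmediately=True,
      )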

    » Amazon Kendra releases SharePoint Connector to enable SharePoint site search

    Posted On: Nov 11, 2021

    Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can index and search documents from Microsoft SharePoint 2013 or Microsoft SharePoint 2016 servers.

    Critical information can be scattered across multiple data sources in an enterprise, including sources such as on-premises SharePoint servers. Amazon Kendra customers can now use the Amazon Kendra SharePoint connector to index documents (HTML, PDF, MS Word, MS PowerPoint, and plain text) made available in a SharePoint server that is accessible from their virtual private cloud. Users can search for information across this content using Amazon Kendra’s intelligent search. This is in addition to being able to index documents from Microsoft SharePoint Online. Organizations can provide relevant search results from these data sources to users seeking answers to their questions.

    The Amazon Kendra SharePoint Connector is available in all AWS regions where Amazon Kendra is available. To learn more about the feature, visit the documentation page or the Amazon Kendra connector library. To explore Amazon Kendra, visit the Amazon Kendra website.

    » Amazon EKS adds support for additional cluster configuration options using AWS CloudFormation

    Posted On: Nov 11, 2021

    Amazon Elastic Kubernetes Service (EKS) now allows you to configure tags, endpoint access control, and control plane logging through AWS CloudFormation.

    Using AWS CloudFormation, you can specify tags to apply to your cluster, configure private and public networking access of your cluster’s Kubernetes endpoint, and enable streaming of control plane logs to Amazon CloudWatch.
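
    A sketch of a template exercising the newly supported properties (the role ARN and subnet IDs are placeholders, and the property shapes follow the AWS::EKS::Cluster resource reference), deployed here with the AWS SDK for Python (boto3):

      import boto3

      # Minimal AWS::EKS::Cluster template with tags, endpoint access
      # control, and control plane logging.
      template = """
      Resources:
        Cluster:
          Type: AWS::EKS::Cluster
          Properties:
            Name: demo-cluster
            RoleArn: arn:aws:iam::123456789012:role/EksClusterRole
            ResourcesVpcConfig:
              SubnetIds: [subnet-0123456789abcdef0, subnet-0fedcba9876543210]
              EndpointPublicAccess: false
              EndpointPrivateAccess: true
            Logging:
              ClusterLogging:
                EnabledTypes:
                  - Type: api
                  - Type: audit
            Tags:
              - Key: team
                Value: platform
      """

      boto3.client("cloudformation").create_stack(StackName="eks-demo", TemplateBody=template)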

    To learn more, visit the Amazon EKS documentation and AWS CloudFormation EKS documentation.

    » AWS CloudTrail announces ErrorRate Insights

    Posted On: Nov 11, 2021

    AWS CloudTrail announces CloudTrail error rate Insights, a new feature of CloudTrail Insights that enables customers to identify unusual activity in their AWS account based on API error codes and their rate.

    Error rate Insights work by building a baseline statistical model of normal operating patterns for an API. By comparing actual error rates to the model, it can notify customers of error rate spikes, so customers can take remedial actions such as updating permissions or raising resource limits.

    Error rate Insights work without customers having to configure thresholds or understand advanced statistical techniques. By giving customers the ability to identify issues before they impact end users, error rate Insights continue to build on CloudTrail Insights' log analytics capabilities.

    You can enable error rate Insights across your AWS organization or in individual AWS accounts with a few clicks from within the CloudTrail console. You can also enable error rate Insights from the AWS CLI. CloudTrail error rate Insights is available in all regions where AWS CloudTrail is available, except for regions in China. To get started with CloudTrail Insights, see our documentation.
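
    Programmatically, enabling Insights comes down to a single call; the sketch below uses the AWS SDK for Python (boto3) with a placeholder trail name:

      import boto3

      cloudtrail = boto3.client("cloudtrail")

      # Enable both Insights types on an existing trail; pass only
      # ApiErrorRateInsight to enable error rate Insights alone.
      cloudtrail.put_insight_selectors(
          TrailName="my-trail",  # placeholder
          InsightSelectors=[
              {"InsightType": "ApiCallRateInsight"},
              {"InsightType": "ApiErrorRateInsight"},
          ],
      )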

    To learn more about AWS CloudTrail, visit its product page. To learn more about pricing, visit the pricing page.

    » Amazon EC2 M6i instances are now available in 5 additional regions

    Posted On: Nov 11, 2021

    Starting today, Amazon EC2 M6i instances are available in five additional AWS Regions: Asia Pacific (Mumbai), Europe (Paris), South America (São Paulo), Asia Pacific (Seoul), and Asia Pacific (Sydney). Designed to provide a balance of compute, memory, storage and network resources, M6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are SAP-Certified and are ideal for workloads such as web and application servers, back-end servers supporting enterprise applications (e.g. Microsoft Exchange Server and SharePoint Server, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), gaming servers, caching fleets, as well as for application development environments.

    M6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better price performance over M5 instances, and provide always-on memory encryption using Intel Total Memory Encryption (TME). To meet customer demands for increased scalability, M6i instances provide a new instance size (m6i.32xlarge) with 128 vCPUs and 512 GiB of memory, 33% more than the largest M5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to M5 instances. M6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store, twice that of M5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge size, which enables low latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for M6i, see this article.

    With this regional expansion, M6i instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), and South America (São Paulo). M6i instances are available in 9 sizes with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances. 

    To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M6i instances page.

    » Amazon QLDB is now available in the Canada (Central) region

    Posted On: Nov 11, 2021

    Starting today, Amazon Quantum Ledger Database (QLDB) is available in the Canada (Central) region. With this launch, QLDB is now available in 11 Regions globally: Canada (Central), US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

    Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable log. Customers can use QLDB to track all application data changes, as well as maintain a complete and verifiable history of changes to data over time.
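
    For example, creating a ledger in the new Region with the AWS SDK for Python (boto3), using a placeholder ledger name:

      import boto3

      qldb = boto3.client("qldb", region_name="ca-central-1")

      # Create a ledger with the STANDARD permissions mode and deletion
      # protection enabled.
      qldb.create_ledger(
          Name="vehicle-registration",  # placeholder name
          PermissionsMode="STANDARD",
          DeletionProtection=True,
      )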

    Get started with Amazon QLDB today.

    » Amazon Nimble Studio launches the ability to test launch profile configurations via the Nimble Studio console

    Posted On: Nov 11, 2021

    Amazon Nimble Studio today supports the ability for administrators to test their launch profile configurations directly from the console, which can reduce the number of errors artists experience when provisioning a workstation.

    Nimble Studio customers utilize launch profiles to define access to AWS resources and connect to managed workstations. As the needs of studios grow, administrators may update AWS resources such as VPCs, storage, and license servers, along with the corresponding launch profiles, and these changes can cause instance launch failures that prevent artists from working on their tasks. Starting today, administrators can test launch profile configuration changes on the same workstations the artists use for their tasks, directly from the Nimble Studio console. By utilizing this new test launch feature, administrators can reduce failed launches of virtual workstations, reducing the impact on artists. Administrators will see launch profiles marked as “Ready” or “Impaired” based on the success of the test launch. A history of all test launches is stored as a metric in the customer’s CloudWatch account.

    Using the Test Launch feature will incur the same metered workstation charges Nimble Studio users experience today. Furthermore, customers may experience a charge for an additional CloudWatch metric, visit Amazon CloudWatch pricing for information. To learn more, visit the Nimble Studio documentation page.

    » Introducing 34 new resource types in the CloudFormation Registry

    Posted On: Nov 11, 2021

    Since our last update in August 2021, AWS CloudFormation Registry has expanded to include support for 34 new resource types (refer to the complete list below) between August and October 2021. A resource type includes schema (resource properties and handler permissions) and handlers that allow API interactions with the underlying AWS or third-party services. Customers can now configure, provision, and manage the lifecycle of these newly supported resources as part of their cloud infrastructure through CloudFormation, by treating the infrastructure as code. Furthermore, we are pleased to announce that 4 new AWS services added CloudFormation support on the day of launch. These services include: Amazon Managed Service for Prometheus, Amazon OpenSearch Service, Amazon MemoryDB for Redis, and Amazon Connect Wisdom. CloudFormation now supports 165 AWS services spanning over 800 resource types, along with over 40 third-party resource types.

    Customers can now centrally discover the schema associated with these 34 new resource types on the CloudFormation Registry. With the addition of these resource types to the Registry, customers can also benefit from the resource import feature of CloudFormation. For example, if you create an Amazon MemoryDB for Redis User resource through the AWS console or the Command Line Interface, you can bring that resource into CloudFormation’s management using the resource import feature.
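
    As a sketch of that import flow with the AWS SDK for Python (boto3) - the stack name, template URL, and user name are placeholders, and the template must already declare the resource being imported:

      import boto3

      cfn = boto3.client("cloudformation")

      # Bring an existing MemoryDB user under CloudFormation management
      # via an IMPORT change set.
      cfn.create_change_set(
          StackName="memorydb-stack",
          ChangeSetName="import-memorydb-user",
          ChangeSetType="IMPORT",
          TemplateURL="https://s3.amazonaws.com/my-bucket/template.yaml",  # placeholder
          ResourcesToImport=[{
              "ResourceType": "AWS::MemoryDB::User",
              "LogicalResourceId": "AppUser",
              "ResourceIdentifier": {"UserName": "app-user"},  # assumed identifier key
          }],
      )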

    If you have feedback on the type of resources for which you want CloudFormation support, please refer to aws-cloudformation-coverage-roadmap.

    Now you can configure, provision, and manage the following 34 resource types with CloudFormation.

    1. AWS::APS::RuleGroupsNamespace
    2. AWS::APS::Workspace
    3. AWS::Athena::PreparedStatement
    4. AWS::Connect::HoursOfOperation
    5. AWS::Connect::User
    6. AWS::Connect::UserHierarchyGroup
    7. AWS::DeviceFarm::DevicePool
    8. AWS::DeviceFarm::InstanceProfile
    9. AWS::DeviceFarm::NetworkProfile
    10. AWS::DeviceFarm::Project
    11. AWS::DeviceFarm::TestGridProject
    12. AWS::DeviceFarm::VPCEConfiguration
    13. AWS::IoT::FleetMetric
    14. AWS::IoT::JobTemplate
    15. AWS::Lightsail::Database
    16. AWS::Lightsail::Disk
    17. AWS::Lightsail::Instance
    18. AWS::Lightsail::StaticIp
    19. AWS::MemoryDB::ACL
    20. AWS::MemoryDB::Cluster
    21. AWS::MemoryDB::ParameterGroup
    22. AWS::MemoryDB::SubnetGroup
    23. AWS::MemoryDB::User
    24. AWS::OpenSearchService::Domain
    25. AWS::Panorama::ApplicationInstance
    26. AWS::Panorama::Package
    27. AWS::Panorama::PackageVersion
    28. AWS::Rekognition::Project
    29. AWS::Route53Resolver::ResolverConfig
    30. AWS::S3::MultiRegionAccessPoint
    31. AWS::S3::MultiRegionAccessPointPolicy
    32. AWS::Wisdom::Assistant
    33. AWS::Wisdom::AssistantAssociation
    34. AWS::Wisdom::KnowledgeBase

    » Amazon Translate now enables multidirectional custom terminology

    Posted On: Nov 11, 2021

    Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. Today, we are introducing multidirectional custom terminology to give you more control and flexibility over your translation workflows. Custom terminology is a feature of Amazon Translate that lets you customize the translation of named entities, such as brand names, character names, and model names, using your own terminology file. With multidirectional custom terminology, you no longer have to set the first column of your terminology file as your source language: you can now use the same terminology file to translate both to and from a specific language.

    To get started, review our custom terminology documentation page. You can create and upload your terminology file using the AWS CLI, the AWS Management Console, or a supported SDK, and specify whether the terminology is unidirectional or multidirectional when you create it. When a custom terminology is used as part of a translation request, the engine scans the terminology file before returning the final result. When the engine identifies an exact match between a terminology entry and a string in the source text, it locates the appropriate string in the proposed translation and replaces it with the terminology entry.

    Multidirectional custom terminology is now available in commercial AWS Regions for both real-time and asynchronous batch translation operations. For step-by-step instructions on enabling multidirectional custom terminology, read our how-to blog here. To learn more, read the Amazon Translate documentation, particularly the section on custom terminology.
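
    A minimal sketch of the flow with the AWS SDK for Python (boto3); the terminology name and CSV contents are placeholders:

      import boto3

      translate = boto3.client("translate")

      # Import a terminology file with multidirectional lookup enabled.
      csv_data = b"en,fr\nAmazon Translate,Amazon Translate\n"  # placeholder entries
      translate.import_terminology(
          Name="brand-names",  # placeholder name
          MergeStrategy="OVERWRITE",
          TerminologyData={"File": csv_data, "Format": "CSV", "Directionality": "MULTI"},
      )

      # The same file can now be applied in either direction.
      result = translate.translate_text(
          Text="Bonjour de Amazon Translate",
          SourceLanguageCode="fr",
          TargetLanguageCode="en",
          TerminologyNames=["brand-names"],
      )
      print(result["TranslatedText"])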

    » Amazon EC2 M6gd and C6gd instances powered by AWS Graviton2 now available in additional regions

    Posted On: Nov 11, 2021

    Starting today, general-purpose Amazon EC2 M6gd instances are now available in Asia Pacific (Mumbai), and Europe (London). The compute-optimized Amazon EC2 C6gd instances are now available in Asia Pacific (Mumbai), Canada (Central), and Europe (London). 

    These instances are powered by AWS Graviton2 processors and deliver up to 40% better price performance and 50% more NVMe storage GB/vCPU over comparable x86-based instances for a wide variety of workloads, including application servers, microservices, high-performance computing, CPU-based machine learning inference, electronic design automation, gaming, open-source databases, and in-memory caches. The local SSD storage provided on these instances is ideal for applications that need access to high-speed, low-latency storage, as well as for temporary storage of data such as batch and log processing, and for high-speed caches and scratch files.

    AWS Graviton2 processors are custom designed by AWS using 64-bit Arm Neoverse cores. AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors. These instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. These instances provide up to 3.8 TB of NVMe-based SSD storage, up to 25 Gbps of network bandwidth, and support up to 19 Gbps Elastic Block Store (EBS) bandwidth.

    Amazon EC2 C6gd and M6gd instances are supported by many popular operating systems and services from Independent Software Vendors (ISVs) as well as AWS. These include popular commercial Linux distributions (Amazon Linux 2, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu), community Linux distributions (AlmaLinux, CentOS, Debian, Fedora, and Rocky Linux), FreeBSD/NetBSD, Java distributions (Amazon Corretto, Azul Platform Core, and OpenJDK), container services (Amazon ECR, Amazon ECS, Amazon EKS, Docker, Kubernetes, and Rancher), management/observability/security tools (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector, Aqua Security, Cribl, Crowdstrike, Datadog, Dynatrace, Honeycomb.io, Grafana, Lacework, New Relic, One Identity, Qualys, Rapid7, Snyk, Splunk universal forwarder, Teleport, Tenable, Threat Stack, Trend Micro, and Wazuh), developer/automation tools (AWS CloudFormation, AWS Cloud Development Kit (CDK), AWS Code Suite, Buildkite, Chef, CircleCI, Cirrus CI, Drone.io, GitHub, GitLab, Granulate, HashiCorp, Jenkins, Puppet, and TravisCI), and data/analytics tools and services (Anaconda, Elastic, Instaclustr, Intersystems, and Starburst).

    With this regional expansion, Amazon EC2 M6gd is now available across the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (London) Regions. Amazon EC2 C6gd is now available across the AWS US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London) Regions. To learn more about AWS Regional Services, please visit the Region table. All instances are available in 8 sizes, with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs, in addition to the bare metal option. These instances are purchasable On-Demand, as Reserved Instances, as Spot Instances, or as part of Savings Plans.

    Many customers and partners are realizing significant price performance benefits from AWS Graviton2-based instances with minimal effort. AWS services such as Amazon ElastiCache deliver up to 45% better price performance, and Amazon RDS (MariaDB, MySQL, and PostgreSQL) delivers up to 52% better price performance through Graviton2-based instances. Amazon MemoryDB for Redis, supported by Graviton2 instances, makes it easy and cost-effective to build applications that require microsecond read and single-digit millisecond write performance with data durability and high availability. Amazon EMR provides up to 30% lower cost and up to 15% improved performance for Spark workloads on Graviton2-based instances. AWS Graviton2-based database instances are now generally available for Amazon Aurora, and provide up to 20% performance improvement and up to 35% price performance improvement versus comparable x86 instances, depending on database size. Additionally, customers using Amazon Elasticsearch Service can now use Graviton2 instances and enjoy up to 38% improvement in indexing throughput, 50% reduction in indexing latency, and 30% improvement in query performance versus comparable x86-based instances. AWS Lambda functions powered by AWS Graviton2 processors deliver up to 34% better price performance versus comparable x86 Lambda functions for a variety of serverless workloads, such as web and mobile backends and data and media processing.

    To get started with AWS Graviton2-based Amazon EC2 M6gd and C6gd instances, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the AWS Graviton page, the M6g page, the C6g page, or the Getting Started GitHub page.

    » AWS Security Hub adds three new FSBP controls and three new partners

    Posted On: Nov 11, 2021

    AWS Security Hub has released three new controls for its Foundational Security Best Practice standard (FSBP) to enhance customers’ Cloud Security Posture Management (CSPM). These controls conduct fully-automatic checks against security best practices for Elastic Load Balancing and AWS Systems Manager. If you have Security Hub set to automatically enable new controls and are already using AWS Foundational Security Best Practices, these controls are enabled by default. Security Hub now supports 162 security controls to automatically check your security posture in AWS.

    The three controls that we have launched are:

  • [ELB.2] Classic Load Balancers with SSL/HTTPS listeners should use a certificate provided by AWS Certificate Manager 
  • [ELB.8] Classic Load Balancers with HTTPS/SSL listeners should use a predefined security policy that has strong configuration 
  • [SSM.4] SSM documents should not be public 

    Security Hub also added two integration partners and one consulting partner, which brings Security Hub up to 73 total partners. The new integration partners are HackerOne and Logz.io. HackerOne sends vulnerability findings to Security Hub and is a platform that connects organizations with a global ethical hacker community to identify and fix vulnerabilities before they can be exploited. Logz.io Cloud SIEM receives findings from Security Hub and enables SecOps teams to quickly identify and investigate threats across the entire attack surface. The new consulting partner is DFX5, which provides AWS security consultancy and managed security services. For Security Hub customers, DFX5 has developed a customized automated response and remediation solution and customized reporting capabilities.

    AWS Security Hub is available globally and is designed to give you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Systems Manager Patch Manager, AWS Config, AWS IAM Access Analyzer, as well as from over 60 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. In addition, you can take action on these findings by investigating findings in Amazon Detective or AWS Systems Manager OpsCenter or by sending them to AWS Audit Manager or AWS Chatbot. You can also use Amazon EventBridge rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), response and remediation workflows, and incident management tools.

    You can enable your 30-day free trial of AWS Security Hub with a single-click in the AWS Management console. To learn more about AWS Security Hub capabilities, see the AWS Security Hub documentation, and to start your 30-day free trial see the AWS Security Hub free trial page.

    » Amazon ECS has improved Capacity Providers to deliver faster Cluster Auto Scaling

    Posted On: Nov 11, 2021

    Amazon Elastic Container Service (Amazon ECS) has improved Amazon ECS Capacity Providers to deliver a faster Cluster Auto Scaling experience. Customers who need to launch a large number of tasks (>100 tasks) on their Amazon ECS clusters will now see their cluster infrastructure scale faster.

    Amazon ECS is a fully managed container orchestration service that makes it easier for you to deploy, manage, and scale containerized applications. Cluster auto scaling is an Amazon ECS capability that manages the scaling of Amazon Elastic Compute Cloud (EC2) Auto Scaling groups on your behalf, so that you can just focus on running your tasks. We have made several optimizations in capacity providers, which now enable Amazon ECS to adjust cluster capacity in a more responsive manner, especially in cases when a cluster needs to scale out to launch a large number of tasks (>100 tasks) or when multiple ECS services that have disparate resource requirements are scaling out simultaneously. These performance optimizations will help deliver a faster cluster auto scaling experience to you.

    These optimizations are now automatically enabled for you in AWS Regions where ECS is available. No further action is needed from your end. To learn more, refer to Deep Dive on Amazon ECS Cluster Auto Scaling and the Cluster Auto Scaling user guide.

    » Amazon QuickSight launches 4 new administration features including IP-based access restrictions and Bring-your-own-role for account setup

    Posted On: Nov 10, 2021

    Amazon QuickSight now supports 4 new features that make it easier for AWS administrators to secure and roll out Amazon QuickSight to more users and accounts within their organizations - IP-based access restrictions, AWS Service Control Policy-based restrictions, automated email syncing for federated SSO users and bring-your-own-role during QuickSight account sign up.

    IP-based access restrictions allow administrators to enforce source IP restrictions on access to the Amazon QuickSight UI, mobile app, as well as embedded pages. For example, admins can create an IP rule that allows users to access their Amazon QuickSight account only from IP addresses associated with the company’s office or remote virtual private network (VPN). For more information, see Turning On Internet Protocol (IP) Restrictions in Amazon QuickSight.
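
    The same rules can be managed programmatically; a sketch with the AWS SDK for Python (boto3), using a placeholder account ID and CIDR range:

      import boto3

      qs = boto3.client("quicksight")

      # Allow access only from a corporate VPN CIDR; rule map keys are CIDR
      # ranges and values are human-readable descriptions.
      qs.update_ip_restriction(
          AwsAccountId="123456789012",  # placeholder
          IpRestrictionRuleMap={"10.0.0.0/16": "Corporate VPN"},
          Enabled=True,
      )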

    Service Control Policy (SCP)-based sign-up restrictions allow AWS administrators to restrict Amazon QuickSight account setup options within their AWS accounts. Administrators can restrict the Amazon QuickSight edition (Standard vs. Enterprise) as well as the type of authentication mechanisms that can be used with QuickSight. For example, admins can set up a service control policy that denies sign-up for Amazon QuickSight Standard Edition and turns off the ability to invite any users other than via federated Single Sign-On (SSO). For more information, see Using Service Control Policies to Restrict Amazon QuickSight Sign-up Options.

    Automated email sync for federated SSO users allows admins to set up QuickSight and SSO such that email addresses for end users are automatically synced at first login, avoiding any manual errors during entry. For example, administrators can set up their QuickSight accounts so that only corporate-assigned email addresses are used when users are provisioned to their Amazon QuickSight account through their identity providers. For more information, see Configuring Email Syncing for Federated Users in Amazon QuickSight.

    Lastly, Bring-your-own-role during Amazon QuickSight account setup allows users setting up a QuickSight account to pick an existing role in their AWS account for Amazon QuickSight to use, instead of Amazon QuickSight creating a custom service role for the account. This launch allows customers to set up their own role for the group of co-dependent AWS services they want to provide access to. For more information, see Passing IAM Roles to Amazon QuickSight.

    IP-based access restrictions, email syncing for federated users, and bring-your-own-role are available in Amazon QuickSight Enterprise Edition only, while Service Control Policy support for sign-up restrictions is available in both Amazon QuickSight Standard and Enterprise Editions. All features are available in Amazon QuickSight Enterprise Edition in all Amazon QuickSight Regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and AWS GovCloud (US-West).

    » AWS Control Tower now supports concurrent operations for detective guardrails

    Posted On: Nov 10, 2021

    AWS Control Tower now supports concurrent operations for detective guardrails to help expedite guardrail management. You can now enable multiple detective guardrails without needing to wait for individual guardrail operations to complete. AWS Control Tower provides customers with out-of-the-box preventive and detective guardrails that you can deploy to increase your security, operational, and compliance posture.

    You can enable different detective guardrails (e.g. Detect Whether MFA for the Root User is Enabled and Detect Whether Public Write Access to Amazon S3 Buckets is Allowed) on the same Organizational Unit (OU), or different detective guardrails on different OUs concurrently. Guardrail error messaging has also been improved to give additional guidance for supported guardrail concurrent operations. Guardrails remain in effect as you create new accounts or make changes to your existing accounts, and Control Tower provides a summary report of how each account conforms to your enabled policies. For a full list of available guardrails, see Guardrail Reference - AWS Control Tower.

    AWS Control Tower offers the easiest way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. Customers can create new accounts using AWS Control Tower’s account factory and enable governance features such as guardrails, centralized logging, and monitoring in supported AWS Regions. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.

    » AWS CDK releases v1.126.0 - v1.130.0 with high-level APIs for AWS App Runner and hotswap support for Amazon ECS and AWS Step Functions

    Posted On: Nov 10, 2021

    During October 2021, five new versions of the AWS Cloud Development Kit (AWS CDK) for JavaScript, TypeScript, Java, Python, .NET and Go were released (v1.126.0 through v1.130.0). The AWS CDK now includes high-level APIs (L2 constructs) for AWS App Runner, a fully managed service that makes it easy for developers to quickly deploy containerized web applications and APIs, at scale and with no prior infrastructure experience required. Additionally, the CDK CLI can now perform hotswap deployments for containers in Amazon ECS tasks and AWS Step Functions. These releases also resolve 40 issues and introduce over 50 new features that span over 50 different modules across the library. Many of these changes were contributed by the developer community.

    The AWS CDK is a software development framework for defining cloud applications using familiar programming languages. The AWS CDK simplifies cloud development on AWS by hiding infrastructure and application complexity behind intent-based, object-oriented APIs for each AWS service.

    To get started, see the following resources:

  • Read the full release notes for 1.126.0, 1.127.0, 1.128.0, 1.129.0 and 1.130.0 
  • Get started with the AWS CDK in all supported languages by taking CDK Workshop.
  • Read our Developer Guide and API Reference.
  • Find useful constructs published by AWS, partners and the community in Construct Hub.
  • Connect with the community in the cdk.dev Slack workspace.
  • Follow our Contribution Guide to learn how to contribute fixes and features to the CDK.

    » Announcing general availability of AWS Resilience Hub

    Posted On: Nov 10, 2021

    Amazon Web Services (AWS) has announced the general availability of AWS Resilience Hub, a new service that provides you with a single place to define, validate, and track the resilience of your applications so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions.

    The resilience of an application refers to its ability to maintain availability and recover from software and operational disruption within a specified target measured in terms of Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Using AWS Resilience Hub, you can define your applications’ resilience targets (RTO and RPO) and help validate that these targets can be met prior to deployment. AWS Resilience Hub provides automated assessments that identify resilience weaknesses and provide recommended remediation. AWS Resilience Hub also integrates with AWS Fault Injection Simulator to test that resilience targets can be met under different conditions (e.g., database disruptions). When integrated into customers’ CI/CD pipelines, AWS Resilience Hub provides continuous resilience assessments and testing.

    AWS Resilience Hub provides a comprehensive view of your overall application portfolio resilience status through its dashboard. To help you track the resilience of applications, AWS Resilience Hub aggregates and organizes resilience events (e.g., an unavailable database or a failed resilience validation), alerts, and insights from services like Amazon CloudWatch and AWS Fault Injection Simulator. AWS Resilience Hub also generates a resilience score, which indicates how fully you have implemented the recommended resilience tests, alarms, and recovery SOPs. This score can be used to measure resilience improvements over time.

    AWS Resilience Hub is available today in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Frankfurt). Additional regions will be added in the future.

    You can try AWS Resilience Hub free for 6 months – up to 3 applications. After that, the AWS Resilience Hub price is $15.00 per application per month. Metering begins once you run the first resilience assessment in AWS Resilience Hub. You will incur charges for any AWS service provisioned by AWS Resilience Hub. Consult your AWS pricing plan for more details on additional charges and visit the AWS Resilience Hub pricing page.

    To learn more about AWS Resilience Hub, visit our product page.

    » Amazon Lex launches support for South African English

    Posted On: Nov 10, 2021

    Today, Amazon Lex announces language support for South African English. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides deep learning powered automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text so you can build applications with highly engaging user experiences and lifelike conversational interactions. With the addition of South African English, you can build and expand your conversational experiences to better understand and engage your customer base.
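    If you manage bots programmatically, adding the new language is a one-call operation. A minimal sketch using boto3 (the bot ID and confidence threshold are placeholders; en_ZA is the Lex V2 locale ID for South African English):

      import boto3

      lex = boto3.client("lexv2-models")
      lex.create_bot_locale(
          botId="ABCDEFGHIJ",    # placeholder bot ID
          botVersion="DRAFT",    # locales are added to the draft version
          localeId="en_ZA",      # South African English
          nluIntentConfidenceThreshold=0.40,
      )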

    Amazon Lex can be applied to a diverse set of use cases such as virtual agents, interactive voice response systems, self-service chatbots, or application bots. Language support for South African English is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation page.

    » Amazon EKS on AWS Fargate now Supports the Fluent Bit Kubernetes Filter

    Posted On: Nov 10, 2021

    Amazon Elastic Kubernetes Service (EKS) on Fargate now supports the use of Kubernetes Fluent Bit filters which provide enriched Kubernetes-specific metadata to Fluent Bit logs. Customers can now more easily observe and troubleshoot their applications by using the Kubernetes pod, container, or namespace name, among other Kubernetes metadata, to associate with their applications’ logs.

    Without support for the Kubernetes filter for Fluent Bit on EKS Fargate, customers had to manually read through log files to find the events they were interested in, and it was difficult to associate a given log line with a Kubernetes application. Some customers who needed this kind of observability were unable to use Fargate with EKS and couldn’t benefit from its fully managed compute environment.

    Today, customers can more easily find the information they need in their EKS clusters’ Fluent Bit logs using the Kubernetes metadata added by the Kubernetes filter. When you create a Kubernetes ConfigMap that includes the Fluent Bit Kubernetes filter configuration, the logs for your EKS Fargate cluster are automatically annotated with Kubernetes metadata.
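    As an illustration, the snippet below uses the Python Kubernetes client to create such a ConfigMap (this sketch assumes the EKS Fargate logging convention of an aws-logging ConfigMap in an existing aws-observability namespace; the filter options shown are illustrative):

      from kubernetes import client, config

      config.load_kube_config()  # or load_incluster_config() inside a cluster

      # Fluent Bit filter section that enriches log lines with pod, container,
      # and namespace metadata.
      filters_conf = """
      [FILTER]
          Name kubernetes
          Match kube.*
          Merge_Log On
      """

      cm = client.V1ConfigMap(
          metadata=client.V1ObjectMeta(name="aws-logging", namespace="aws-observability"),
          data={"filters.conf": filters_conf},
      )
      client.CoreV1Api().create_namespaced_config_map("aws-observability", cm)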

    To learn more, visit the Amazon EKS Fargate logging documentation. To learn more about Amazon EKS, visit our product page.

    » AWS Marketplace announces enhancements to change requests submission experience

    Posted On: Nov 10, 2021

    AWS Marketplace sellers can now submit multiple self-service change requests simultaneously using the AWS Marketplace Management Portal (AMMP) or the AWS Marketplace Catalog API. Sellers can start multiple self-service change requests for AMI, Container, Professional Services, and Machine Learning products via AMMP, and for AMI and Container products via the AWS Marketplace Catalog API. Sellers no longer have to wait to submit a subsequent change request for a product while prior change requests are in progress. For example, if a seller wants to update both the product information and the version information of their product, they can now submit these requests one after another in quick succession without waiting for the first request to complete.

    Some types of requests cannot run in parallel. To learn more about submitting self-service change requests simultaneously in AMMP, refer to the Request in Progress section here; to learn more about making multiple change requests simultaneously with the AWS Marketplace Catalog API, visit here.

    To get started, log in to the AWS Marketplace Management Portal (AMMP), choose an AMI, Container, or Professional Services product, and request changes to update your product.

    » Manage Access Centrally for CyberArk Users with AWS Single Sign-On

    Posted On: Nov 10, 2021

    Customers can now connect their CyberArk Workforce Identity (CyberArk) to AWS Single Sign-On (SSO) once, manage access to AWS centrally in AWS SSO, and enable end users to sign in using CyberArk Workforce Identity to access all their assigned AWS accounts. The integration helps customers simplify AWS access management across multiple accounts while maintaining familiar CyberArk Workforce Identity experiences for administrators who manage identities, and for end users as they sign in. AWS SSO and CyberArk Workforce Identity use standards-based automation to provision users and groups into AWS SSO, saving administration time and increasing security.

    The interoperability of AWS SSO and CyberArk Workforce Identity enables administrators to assign users and groups access centrally to their AWS Organizations accounts and AWS SSO integrated applications. This makes it easier for an AWS administrator to manage access to AWS and ensure CyberArk Workforce Identity users have the right access to the right AWS accounts, including those created with AWS Control Tower account factory. Ongoing management is also simplified. For example, when using group assignments, CyberArk Workforce Identity administrators can grant or remove AWS account access by adding or removing users from a CyberArk Workforce Identity group.

    AWS SSO and CyberArk use the System for Cross-domain Identity Management (SCIM) standard to automate the process of provisioning users and groups into AWS SSO. AWS SSO also authenticates CyberArk users to their assigned AWS accounts through the Security Assertion Markup Language (SAML 2.0) standard. To configure the SCIM and SAML connections, administrators can use the AWS SSO Connector available in the CyberArk Application Catalog.

    Your end users get their familiar CyberArk sign-in experience including MFA and central access to all of their assigned AWS accounts, including those created with AWS Control Tower account factory. In addition, your users can use their CyberArk credentials to sign in to the AWS Management Console, AWS Command Line Interface (CLI) and Amazon Managed Grafana.

    It is straightforward to get started with AWS SSO. With just a few clicks in the AWS SSO management console, you can choose AWS SSO, Active Directory, or an external identity provider, now including CyberArk Workforce Identity, as your identity source. Your users sign in with the convenience of their familiar sign-in experience and get single-click access to all their assigned accounts from the AWS SSO user portal. To learn more, please visit AWS Single Sign-On. To connect CyberArk Workforce Identity to AWS SSO as an external identity provider, please see the AWS SSO documentation.

    There is no cost for AWS SSO, and it is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), EU (Ireland), EU (Frankfurt), EU (London), EU (Paris), EU (Stockholm), AWS GovCloud (US-West) and South America (São Paulo) Regions.

    » Manage Access Centrally for JumpCloud Users with AWS Single Sign-On

    Posted On: Nov 10, 2021

    Customers can now connect their JumpCloud Directory Platform (JumpCloud) to Amazon Web Services Single Sign-On (SSO) once, manage access to AWS centrally in AWS SSO, and enable end users to sign in using JumpCloud to access all their assigned AWS accounts. The integration helps customers simplify AWS access management across multiple accounts while maintaining familiar JumpCloud experiences for administrators who manage identities, and for end users as they sign in. AWS SSO and JumpCloud use standards-based automation to provision users and groups into AWS SSO, enabling customers to save administration time and increase security.

    The interoperability of AWS SSO and JumpCloud enables administrators to assign users and groups access centrally to their AWS Organizations accounts and AWS SSO integrated applications. This makes it easier for an AWS administrator to manage access to AWS and confirm whether JumpCloud users have the right access to the right AWS accounts. Ongoing management is also simplified. For example, when using group assignments, JumpCloud administrators can grant or remove AWS account access by adding or removing users from a JumpCloud group.

    AWS SSO and JumpCloud use the System for Cross-domain Identity Management (SCIM) standard to automate the process of provisioning users and groups into AWS SSO. AWS SSO also authenticates JumpCloud users to their assigned AWS accounts through the Security Assertion Markup Language (SAML 2.0) standard. To configure the SCIM and SAML connections, administrators can use the AWS SSO Connector available in the JumpCloud Application Catalog.

    Your end users get their familiar JumpCloud sign-in experience including MFA and central access to all of their assigned AWS accounts, including those created with AWS Control Tower account factory. In addition, your users can use their JumpCloud credentials to sign in to the AWS Management Console, AWS Command Line Interface (CLI), AWS Console Mobile Application, and AWS integrated services, including AWS IoT SiteWise Monitor and Amazon SageMaker Notebooks.

    It is straightforward to get started with AWS SSO. With just a few clicks in the AWS SSO management console, you can choose AWS SSO, Active Directory, or an external identity provider, now including JumpCloud, as your identity source. Your users sign in with the convenience of their familiar sign-in experience and get single-click access to all their assigned accounts from the AWS SSO user portal. To learn more, please visit AWS Single Sign-On. To connect JumpCloud to AWS SSO as an external identity provider, please see the AWS SSO documentation.

    There is no cost for AWS SSO, and it is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), EU (Ireland), EU (Frankfurt), EU (London), EU (Paris), EU (Stockholm), AWS GovCloud (US-West) and South America (São Paulo) Regions.

    » Amazon ECS now adds container instance health information

    Posted On: Nov 10, 2021

    Amazon Elastic Container Service (Amazon ECS) now provides customers enhanced visibility into the health of their compute infrastructure. Customers running containerized workloads using Amazon ECS on Amazon Elastic Compute Cloud (Amazon EC2) or on-premises with Amazon ECS Anywhere can now query the health status of the container runtime (i.e., Docker) for their container instances directly from the Amazon ECS API. This helps customers improve application resiliency.

    Although rare, problems can arise with the host infrastructure or Docker runtime that can prevent new containers from being started and even affect containers that are already running. With today’s release, Amazon ECS automatically monitors the container runtime for responsiveness on customers’ behalf. Customers can use the Amazon ECS DescribeContainerInstances API with the container instance health include option to view health information for their container instances.
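    A minimal sketch using boto3 (the cluster name is a placeholder): list the container instances in a cluster and request their health details alongside the usual attributes:

      import boto3

      ecs = boto3.client("ecs")
      arns = ecs.list_container_instances(cluster="my-cluster")["containerInstanceArns"]

      resp = ecs.describe_container_instances(
          cluster="my-cluster",
          containerInstances=arns,
          include=["CONTAINER_INSTANCE_HEALTH"],  # adds the health status block
      )
      for ci in resp["containerInstances"]:
          print(ci["containerInstanceArn"], ci.get("healthStatus"))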

    Customers can view the instance health status for all their Amazon ECS container instances running version 1.57.0 or later of the Amazon ECS container agent. This version is automatically available with version 20211103 of the Amazon ECS-optimized AMI. Amazon ECS is available in all public AWS Regions. To learn more, please see the documentation here.

    » Announcing new deployment guardrails for Amazon SageMaker Inference endpoints

    Posted On: Nov 10, 2021

    Amazon SageMaker Inference now supports new model deployment options to update your machine learning models in production. Using the new deployment guardrails, you can easily switch from the current model in production to a new one in a controlled way. This launch introduces canary and linear traffic shifting modes so that you can have granular control over the shifting of traffic from your current model to the new one during the course of the update. With built-in safeguards such as auto-rollbacks, you can catch issues early and automatically take corrective action before they cause significant production impact.

    Amazon SageMaker is a fully managed service that helps developers and data scientists to prepare, build, train, and deploy high-quality machine learning models quickly by bringing together a broad set of capabilities purpose-built for ML. When you deploy your trained ML models to Amazon SageMaker, it takes care of provisioning, patching, and updating the endpoints so that you can focus on powering your applications with ML. When you need to update your endpoint with a newer version of your ML model or serving container, SageMaker brings up a new fleet (green fleet) containing the updates and shifts traffic from the existing fleet (blue fleet) in one shot, referred to as a blue/green deployment. This makes sure that the endpoint is able to respond to requests even when the update is in progress, maximizing availability.

    With this launch, Amazon SageMaker adds canary and linear traffic shifting modes to blue/green deployments. These modes give you more granular control in shifting traffic between the fleets so that you can build confidence before dialing up traffic. Additionally, you can pre-specify CloudWatch alarms on metrics such as latency or error rates and automatically roll back the deployment to the blue fleet if any of these alarms are tripped. Canary mode allows you to shift a small percentage of traffic to the green fleet (called a canary fleet), observe the behavior of the canary fleet for a period of time (known as the baking period), and shift the remainder of the traffic only when no alarms are triggered during the baking period. Linear mode allows you to shift traffic to the green fleet in configurable fixed increments (say, 10%), and observe the behavior for a baking period before shifting the subsequent increment. With all blue/green deployments, you can observe the fleets after all traffic has been shifted (known as the final baking period) before terminating the blue fleet. These traffic shifting modes help you balance the trade-off between managing the risk of introducing new models into production and controlling the duration of the update, so you can pick the right option for your use case. All-at-once traffic shifting minimizes the duration of the update, while linear mode minimizes the risk of introducing a new model into production by shifting traffic in multiple steps. Canary mode shifts all the traffic in two steps, providing a balance between risk and update duration.
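    A minimal sketch using boto3 (endpoint, config, and alarm names are placeholders): update an endpoint with canary traffic shifting, a ten-minute baking period, and automatic rollback on a CloudWatch alarm:

      import boto3

      sm = boto3.client("sagemaker")
      sm.update_endpoint(
          EndpointName="my-endpoint",
          EndpointConfigName="my-new-endpoint-config",
          DeploymentConfig={
              "BlueGreenUpdatePolicy": {
                  "TrafficRoutingConfiguration": {
                      "Type": "CANARY",  # or "LINEAR" / "ALL_AT_ONCE"
                      "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
                      "WaitIntervalInSeconds": 600,  # baking period per step
                  },
                  "TerminationWaitInSeconds": 600,   # final baking period
                  "MaximumExecutionTimeoutInSeconds": 3600,
              },
              "AutoRollbackConfiguration": {
                  "Alarms": [{"AlarmName": "my-endpoint-error-rate-alarm"}]
              },
          },
      )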

    For detailed information on these new capabilities, please read our documentation, which also contains sample notebooks to help you get started. These new phased deployment capabilities are available for all newly created endpoints in all commercial regions where Amazon SageMaker is available. For a list of features that are not supported, please refer to the exclusions section of our documentation.

    » AWS Backup provides new resource assignment rules for your data protection policies

    Posted On: Nov 10, 2021

    AWS Backup introduces new resource assignment options that make it easier to manage data protection for your applications at scale. The new resource assignment options allow you to define selection criteria using AWS-supported resource types and a combination of AWS tags and resource IDs, enabling you to automatically identify the AWS resources that store your business-critical application data and protect them using immutable backups.

    AWS Backup enables you to centralize and automate data protection across AWS services based on your organizational best practices and regulatory standards. You can get started with the new resource assignment options in all the AWS Regions where AWS Backup is supported, and create protected and separable backups to help you support your data protection needs. To learn more about AWS Backup, visit the product page and documentation. To get started, visit the AWS Backup console.
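    As an illustration, a minimal sketch using boto3 (plan ID, role ARN, and tag values are placeholders): assign all production-tagged DynamoDB tables to an existing backup plan using a resource-type wildcard plus a tag condition:

      import boto3

      backup = boto3.client("backup")
      backup.create_backup_selection(
          BackupPlanId="my-backup-plan-id",
          BackupSelection={
              "SelectionName": "prod-dynamodb-tables",
              "IamRoleArn": "arn:aws:iam::123456789012:role/MyBackupRole",
              "Resources": ["arn:aws:dynamodb:*:*:table/*"],  # resource-type wildcard
              "Conditions": {
                  "StringEquals": [
                      {"ConditionKey": "aws:ResourceTag/environment",
                       "ConditionValue": "prod"}
                  ]
              },
          },
      )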

    » AWS Batch introduces fair-share scheduling

    Posted On: Nov 9, 2021

    Today AWS Batch introduced fair-share scheduling for AWS Batch job queues, making it easier to run different workloads in a single queue. Customers can now choose whether to run jobs first-in, first-out (FIFO) or define a “fair-share” policy, which can allocate resources equally or based on admin-defined weights and priorities. With fair-share scheduling, AWS Batch assigns compute among multiple users and workloads based on factors other than just which workload showed up first, improving processing efficiency and better respecting user and workload priorities.

    AWS Batch is a cloud-native batch scheduler that enables anyone - from enterprises, to scientists and developers - to efficiently run batch jobs on AWS. Whether you have a few jobs or hundreds of thousands, AWS Batch is designed to provision the optimal quantity and type of compute resources based on the volume and specific resource requirements of the work you submit. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.

    Prior to today, AWS Batch used a FIFO scheduling mechanism for queues. In FIFO, jobs are scheduled when they reach the head of the queue and enough compute resources are available. While this works for many customers, FIFO can cause “unfair” situations where one user’s workloads become trapped behind another’s, such as a very large number of long-running jobs in front of a few short jobs. In these cases, customers want a way to provide fairness in compute allocation to users with short jobs. Particularly for larger organizations with many different workloads, fairness in computing is critical to giving users confidence that when they submit work, it will be processed in a timely manner while respecting priority.

    Now, AWS Batch supports fair-share scheduling in addition to FIFO, allowing customers to have many different users and workloads in a single queue, with AWS Batch allocating compute resources according to the fair-share policy defined by the administrator. By default, this allocation is roughly equal among users and workloads. AWS Batch assigns each user or workload a “share,” which defines how much of the compute resources that user or workload receives. Customers can give special weight to certain users or workloads with a higher priority if needed. Customers simply submit jobs to AWS Batch, which automatically dispatches them according to the specified shares, enabling workloads to run in a single, combined queue.
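    A minimal sketch using boto3 (names and weights are placeholders): create a fair-share scheduling policy, then submit a job tagged with a share identifier (the policy is attached to a job queue via its schedulingPolicyArn when the queue is created):

      import boto3

      batch = boto3.client("batch")

      batch.create_scheduling_policy(
          name="team-fair-share",
          fairsharePolicy={
              "shareDecaySeconds": 3600,  # how quickly past usage stops counting
              "computeReservation": 0,
              "shareDistribution": [
                  {"shareIdentifier": "research", "weightFactor": 1.0},
                  # a lower weight factor receives a larger share of resources
                  {"shareIdentifier": "production", "weightFactor": 0.5},
              ],
          },
      )

      batch.submit_job(
          jobName="example-job",
          jobQueue="my-fair-share-queue",  # queue created with schedulingPolicyArn
          jobDefinition="my-job-def",
          shareIdentifier="research",
      )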

    To learn more, visit our blog post.

    » Amazon SNS now supports token-based authentication for APNs mobile push notifications

    Posted On: Nov 9, 2021

    Amazon Simple Notification Service (Amazon SNS) now supports token-based authentication for sending mobile push notifications to Apple devices. When creating a new platform application in the Amazon SNS console or API, you can now choose between token-based (.p8 key file) and certificate-based (.p12 certificate) authentication.

    Token-based authentication provides stateless communication between Amazon SNS and the Apple Push Notification service (APNs). Stateless communication is faster than certificate-based communication because it doesn’t require APNs to look up the certificate. When using .p12 certificates, you had to renew the certificate and the endpoint once a year. Now, by using a .p8 key file, you can reduce your operational burden by removing the need for yearly renewals. For platform applications created using .p8 key files, Amazon SNS uses token-based authentication for delivering messages to mobile applications.
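    A minimal sketch using boto3 (the key file, key ID, team ID, and bundle ID are placeholders): create an APNs platform application that authenticates with a .p8 signing key instead of a certificate:

      import boto3

      sns = boto3.client("sns")
      with open("AuthKey_ABC123DEFG.p8") as f:
          signing_key = f.read()

      app = sns.create_platform_application(
          Name="my-ios-app",
          Platform="APNS",
          Attributes={
              "PlatformCredential": signing_key,    # contents of the .p8 key file
              "PlatformPrincipal": "ABC123DEFG",    # APNs signing key ID
              "ApplePlatformTeamID": "DEF123GHIJ",  # Apple developer team ID
              "ApplePlatformBundleID": "com.example.myapp",
          },
      )
      print(app["PlatformApplicationArn"])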

    You can use token-based authentication for APNs endpoints in the following AWS regions where Amazon SNS supports mobile push notifications: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and South America (São Paulo).

    To get started, see the following resources:

  • Apple authentication methods in the Amazon SNS Developer Guide.
  • SetPlatformApplicationAttributes in the Amazon SNS API Reference.
  • Prerequisites for Amazon SNS user notifications in the Amazon SNS Developer Guide.
  • Token-based authentication for iOS applications with Amazon SNS in the AWS Compute Blog.
    » AWS Device Farm announces support for testing web applications hosted in an Amazon VPC

    Posted On: Nov 9, 2021

    AWS Device Farm’s Desktop Browser Testing feature lets you test your web applications on different desktop versions of Chrome, Firefox, Internet Explorer, and Microsoft Edge browsers. With today’s launch, we are adding support for testing web applications that are hosted in an Amazon Virtual Private Cloud (VPC).

    You can configure your Amazon VPC from the AWS Device Farm console or the AWS CLI. When a VPC is configured, Device Farm creates a network interface within your VPC and assigns it to the specified subnets and security groups. All future Selenium sessions associated with your Device Farm project will use the configured VPC connection.
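    A minimal sketch using boto3 (the VPC, subnet, and security group IDs are placeholders; Device Farm’s API lives in us-west-2): create a desktop browser testing project attached to your VPC:

      import boto3

      df = boto3.client("devicefarm", region_name="us-west-2")
      project = df.create_test_grid_project(
          name="my-web-app-tests",
          vpcConfig={
              "vpcId": "vpc-0123456789abcdef0",
              "subnetIds": ["subnet-0123456789abcdef0"],
              "securityGroupIds": ["sg-0123456789abcdef0"],
          },
      )
      print(project["testGridProject"]["arn"])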

    To learn more about the feature and to get started, please visit our documentation. We have also published a blog post to help you get started.

    » Incident Manager from AWS Systems Manager is now available in 7 additional AWS Regions

    Posted On: Nov 9, 2021

    Today, we are excited to announce the general availability (GA) of Incident Manager from AWS Systems Manager in 7 additional AWS regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), Europe (London), Europe (Paris), South America (Sao Paulo), US West (N. California). To learn about Incident Manager, see the Incident Manager product page.

    Incident Manager helps you prepare for incidents with automated response plans that bring the right people and information together. The Incident Manager console provides a unified user interface to view operational data from multiple AWS services and track incident updates, such as alarm status changes and response plan progress. Incident Manager helps you improve service reliability by suggesting post-incident action items, such as automating a runbook step or adding a new alarm.

    With these additional regions, Incident Manager is now available in 16 AWS regions. To get started, select Incident Manager from the AWS Management Console or navigate to AWS Systems Manager to find it in the left navigation pane under Operations Management. To learn more about Incident Manager, read our blog post or documentation.

    » AWS announces a new capability to switch license types for Windows Server and SQL Server applications on Amazon EC2

    Posted On: Nov 9, 2021

    AWS now offers the ability to easily switch between AWS-provided licenses and bring-your-own licenses (BYOL) for Windows Server and SQL Server workloads using AWS License Manager. License switching can be used as your business and licensing needs evolve. Changing the license type associated with your instance retains the application, instance, and networking configuration associated with the workload, saving you time and effort. You will be billed per the new license type from the next billing second. Optionally, you can also change the tenancy from Shared to Dedicated, or vice versa.

    Changing your license type from license included to BYOL requires you to import your own Windows and SQL Server media through VM Import/Export (VMIE) to create Amazon Machine Images (AMIs). You can use these AMIs to launch Windows and SQL Server instances. You can change the license type at any point after you have launched an instance, using the License Manager console, API, or Command Line Interface (CLI). In addition, when switching from license included to BYOL, you will be required to activate Windows Server using either your own Key Management Service (KMS) activation or Multiple Activation Key (MAK) activation.

    When you switch from BYOL to license included, EC2 automatically activates Windows Server on the instance and retains your SQL Server product key on the instance. The license switching feature is offered at no additional charge and is available in Canada (Central), US East (N. Virginia), EU (Stockholm), South America (Sao Paulo), US West (Oregon), EU (Frankfurt), EU (Ireland), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), EU (Paris), US East (Ohio), Asia Pacific (Seoul), EU (London), and Asia Pacific (Tokyo) regions. Visit the license switching user guide, VMIE documentation, and API documentation to learn more.
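    A minimal sketch using boto3 (the instance ARN is a placeholder; the usage-operation codes shown follow AWS billing-code documentation for Windows BYOL and license-included): start a conversion task that switches a running instance from BYOL to license included:

      import boto3

      lm = boto3.client("license-manager")
      task = lm.create_license_conversion_task_for_resource(
          ResourceArn="arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234efgh5678",
          SourceLicenseContext={"UsageOperation": "RunInstances:0800"},       # Windows BYOL
          DestinationLicenseContext={"UsageOperation": "RunInstances:0002"},  # license included
      )
      print(task["LicenseConversionTaskId"])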

    » AWS Fault Injection Simulator now supports Amazon CloudWatch Alarms and AWS Systems Manager Automation Runbooks

    Posted On: Nov 8, 2021

    You can now create and run AWS Fault Injection Simulator (FIS) experiments that check the state of Amazon CloudWatch alarms and run AWS Systems Manager (SSM) Automations. You can also now run new FIS experiment actions that inject I/O, network black hole, and packet loss faults into your Amazon EC2 instances using pre-configured SSM Agent documents. Because it can be difficult to predict how applications will respond to stress under real-world conditions, whether in testing or production environments, integrating alarm checks and automated runbooks into your FIS experiments can help you gain more confidence when injecting disruptive events such as network problems, instance termination, API throttling, or other failure conditions.

    First, the new CloudWatch action allows you to assert the state of a CloudWatch alarm as part of your FIS experiment workflow. When the experiment runs, it verifies that the alarm is in the expected state: OK, ALARM, or INSUFFICIENT_DATA. You can use this, for example, to check whether the impact of a previous action (such as network latency injection) has taken effect before moving on to the next action in the experiment (such as an EC2 instance reboot).
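    A minimal sketch using boto3 (role and alarm ARNs are placeholders; the action and parameter names follow the FIS action catalog): an experiment template whose only step asserts that an alarm is in the OK state:

      import uuid
      import boto3

      fis = boto3.client("fis")
      template = fis.create_experiment_template(
          clientToken=str(uuid.uuid4()),
          description="Assert an alarm is OK before continuing",
          roleArn="arn:aws:iam::123456789012:role/MyFisRole",
          stopConditions=[{"source": "none"}],
          actions={
              "assert-alarm-ok": {
                  "actionId": "aws:cloudwatch:assert-alarm-state",
                  "parameters": {
                      "alarmArns": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:my-alarm",
                      "alarmStates": "OK",
                  },
              }
          },
      )
      print(template["experimentTemplate"]["id"])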

    Next, you can now execute AWS Systems Manager Automation runbooks from within an FIS experiment. AWS Systems Manager Automation allows you to build and run automations to perform a variety of common tasks, such as creating and deleting EC2 AMIs or CloudFormation templates, deleting S3 buckets, running AWS Step Functions state machines, invoking AWS Lambda functions, creating tags, launching EC2 instances, or making AWS API requests. By configuring Automation runbooks to be triggered from within FIS experiments, you can more easily, safely, and repeatably recreate complex failure conditions that more closely resemble real-world conditions.

    Finally, several new and updated SSM Agent documents are now available to run as fault injection actions, including: an I/O stress action; a network black hole action that drops inbound or outbound traffic for a given protocol and port; a network latency action that adds latency and/or jitter through a given network interface to or from sources you specify, such as IP addresses/blocks, domains, or AWS services including S3 and DynamoDB; and two network packet loss actions that can inject packet loss failures into a given interface and (optionally) source. These SSM documents are pre-configured for EC2 instances running Amazon Linux and Ubuntu.

    You can get started creating and running fault injection experiments in the AWS Management Console or using the AWS SDKs, and each of these new features is available today. AWS FIS is available in all commercial AWS Regions.

    » Amazon Translate now adds support for four more languages and variants - Irish, Marathi, Portuguese (Portugal), and Punjabi

    Posted On: Nov 8, 2021

    Amazon Translate is a fully managed neural machine translation service that delivers real-time, high-quality, affordable, and customizable language translation. Today, we are announcing that Amazon Translate now supports four more languages and variants - Irish, Marathi, Portuguese (Portugal), and Punjabi.

    Amazon Translate now supports a total of 75 languages and variants in 17 Regions. With the launch of these languages, and with continuous innovation in translation quality and performance, AWS customers can reach a wider set of users who increasingly expect to consume media and interact with people and organizations in the language of their choice.

    Amazon Translate supports translation between the following 75 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Bengali, Bosnian, Bulgarian, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dari, Dutch, English, Estonian, Finnish, French, French Canadian, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Malayalam, Maltese, Mongolian, Marathi, Norwegian, Farsi (Persian), Pashto, Polish, Portuguese, Portuguese (Portugal), Punjabi, Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Somali, Spanish, Spanish Mexican, Swahili, Swedish, Filipino Tagalog, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, and Welsh. Across these languages, the service supports 5,550 translation combinations. For an up-to-date list of supported languages, see the Amazon Translate documentation.

    Amazon Translate’s 75 languages are available in the following AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), AWS GovCloud (US-West), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), US East (Northern Virginia), US East (Ohio), US West (Northern California), and US West (Oregon). For an up-to-date list of Regions where Amazon Translate is supported, refer to the AWS Global Infrastructure page.

    Try all the supported languages in the Amazon Translate console, or see the Amazon Translate documentation for more information on Command Line Interface (CLI) and AWS SDKs.
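    For example, a minimal sketch using boto3 that translates a phrase into one of the newly supported languages (ga is the language code for Irish):

      import boto3

      translate = boto3.client("translate")
      result = translate.translate_text(
          Text="Hello, world",
          SourceLanguageCode="en",
          TargetLanguageCode="ga",  # Irish
      )
      print(result["TranslatedText"])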

    » Amazon SageMaker Pipelines now supports retry policies and resume

    Posted On: Nov 8, 2021

    Amazon SageMaker Pipelines, a purpose-built service that enables customers to define and orchestrate their model building steps, now supports resuming execution of a failed or stopped pipeline, as well as retry policies for pipeline steps.

    SageMaker Pipelines provides a variety of steps (e.g., processing, training, register model, callback, etc.). Using these steps, customers can productionize their ML model-building workflows as SageMaker Pipelines. Now, with these newly launched features, customers can exercise more operational control and flexibility when executing their SageMaker Pipelines.

    Previously, customers had to start a new execution if the pipeline failed or stopped. Now, they can resume a failed or stopped pipeline from the previously failed or stopped steps. This feature makes it easier for customers to debug their pipelines and saves them time and resources by not re-executing previously successful steps.

    Customers can also now configure retry policies for pipeline steps using the following parameters: maximum retry attempts, the time interval between retry attempts, the rate at which the retry interval grows, and the maximum time span over which to retry. These parameters can be configured at the pipeline or step level and can optionally be customized for specific error types. Using this feature, customers can operationalize their model building pipelines and incorporate fail-safe policies for transient or intermittent errors.
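    A minimal sketch using the SageMaker Python SDK (the step this attaches to is assumed to be defined elsewhere): a retry policy that retries throttling and service faults with exponential backoff:

      from sagemaker.workflow.retry import StepRetryPolicy, StepExceptionTypeEnum

      retry_policy = StepRetryPolicy(
          exception_types=[
              StepExceptionTypeEnum.SERVICE_FAULT,  # transient service errors
              StepExceptionTypeEnum.THROTTLING,
          ],
          interval_seconds=10,  # wait before the first retry
          backoff_rate=2.0,     # grow the interval on each attempt
          max_attempts=5,
      )

      # Pass retry_policies=[retry_policy] when constructing a pipeline step,
      # e.g. a TrainingStep or ProcessingStep.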

    These features are available in all AWS regions where Amazon SageMaker is available. To get started, create a new SageMaker Pipeline from the Amazon SageMaker SDK or Studio and visit our documentation pages on resume and retry policies.

    » AWS Backup adds support for Amazon DocumentDB (with MongoDB compatibility)

    Posted On: Nov 8, 2021

    AWS Backup announces support for Amazon DocumentDB (with MongoDB compatibility), allowing you to centrally manage data protection of your DocumentDB clusters along with other supported AWS services for database, storage, and compute.

    Amazon DocumentDB (with MongoDB compatibility) is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads. AWS Backup enables you to centralize and automate data protection policies across AWS services based on your organizational best practices and regulatory standards. Now, you can use a single data protection policy in AWS Backup to automate the creation of independent, immutable, and protected snapshots of DocumentDB clusters across AWS Regions or accounts, and you can restore your DocumentDB clusters from the snapshots with a single click.

    AWS Backup support for Amazon DocumentDB is available in the following AWS Regions: US-East (N. Virginia), US-East (Ohio), US-West (Oregon), Asia-Pacific (Mumbai), Asia-Pacific (Seoul), Asia-Pacific (Singapore), Asia-Pacific (Sydney), Asia-Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), South America (Sao Paulo), AWS GovCloud (US-West). To learn more about AWS Backup visit the product page, documentation, and pricing page. To get started, visit the AWS Backup console.

    » Amazon Chime SDK meetings now offer API endpoints in Oregon, Frankfurt and Singapore

    Posted On: Nov 8, 2021

    The Amazon Chime SDK now has meeting API endpoints in the US West (Oregon), Europe (Frankfurt), and Asia Pacific (Singapore) AWS Regions, giving customers a choice of which AWS Region they use to create and manage meetings, which can be hosted in any of the 18 Amazon Chime media regions.

    The Amazon Chime SDK lets developers add real-time audio, video, screen share, and messaging to their web applications. Customers can reduce API latency by using the endpoint nearest to their application, and implement high availability architectures by using multiple endpoints. Customers who require endpoints with FIPS 140-2 validated cryptographic modules now have a choice of US East (Northern Virginia) and US West (Oregon) AWS Regions.

    The AWS SDK includes a new namespace for the new meeting API endpoints. To learn more about Amazon Chime SDK meetings, how they use AWS Regions, and how to migrate to the new namespace, review the following resources (a short SDK sketch follows the list):

  • Amazon Chime SDK
  • Meeting Regions in the Amazon Chime SDK Developer Guide
  • Migrating to the Amazon Chime SDK Meetings namespace in the Amazon Chime SDK Developer Guide
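    A minimal sketch using boto3 (IDs are placeholders): create a meeting through the new chime-sdk-meetings namespace against the Europe (Frankfurt) endpoint:

      import uuid
      import boto3

      meetings = boto3.client("chime-sdk-meetings", region_name="eu-central-1")
      meeting = meetings.create_meeting(
          ClientRequestToken=str(uuid.uuid4()),
          MediaRegion="eu-central-1",  # where meeting media is hosted
          ExternalMeetingId="my-meeting-001",
      )
      print(meeting["Meeting"]["MeetingId"])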
    » AWS Backup adds support for Amazon Neptune

    Posted On: Nov 8, 2021

    AWS Backup announces the addition of Amazon Neptune to its portfolio of supported services. This new functionality in AWS Backup allows you to create automated, periodic snapshots of Amazon Neptune clusters using your centralized data protection policy, alongside the other supported AWS services for database, storage, and compute.

    Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications. You can now use a single data protection policy in AWS Backup to automate the creation of independent, immutable, and protected snapshots of your Neptune clusters across AWS Regions or accounts, and you can restore your Neptune clusters from the snapshots with a single click.

    AWS Backup support for Amazon Neptune is available in the following AWS Regions: US-East (N. Virginia), US-East (Ohio), US-West (N. California), US-West (Oregon), Asia-Pacific (Hong Kong), Asia-Pacific (Mumbai), Asia-Pacific (Seoul), Asia-Pacific (Singapore), Asia-Pacific (Sydney), Asia-Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), AWS GovCloud (US-East), AWS GovCloud (US-West). To learn more about AWS Backup visit the product page, documentation, and pricing page. To get started, visit the AWS Backup console.

    » Amazon Polly now offers Neural Text-to-Speech voices in Spanish and Italian

    Posted On: Nov 8, 2021

    Amazon Polly, a service that turns text into speech (TTS), launches two new neural TTS voices. You can now use Lucia for Castilian Spanish and Bianca for Italian. With this launch, we now offer 22 neural TTS voices across 12 languages. With these voices, you can create applications that talk and build entirely new categories of speech-enabled products.
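    A minimal sketch using boto3: synthesize Castilian Spanish speech with the new neural voice Lucia and write it to an MP3 file:

      import boto3

      polly = boto3.client("polly")
      resp = polly.synthesize_speech(
          Engine="neural",
          VoiceId="Lucia",
          LanguageCode="es-ES",
          OutputFormat="mp3",
          Text="Hola, ¿cómo estás?",
      )
      with open("hola.mp3", "wb") as f:
          f.write(resp["AudioStream"].read())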

    To get started, log into the Amazon Polly console and give Lucia and Bianca a try. For more details, please visit the Amazon Polly documentation and review our full list of text-to-speech voices.

    » Amazon Translate now supports AWS KMS Encryption

    Posted On: Nov 5, 2021

    Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. Starting today, you can use your own encryption keys from AWS Key Management Service (KMS) to encrypt data placed in your S3 bucket. Until now, Amazon Translate used Amazon S3-SSE to encrypt your data. AWS KMS makes it easy for you to create and manage keys, while controlling the use of encryption across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses FIPS 140-2 validated hardware security modules to protect your keys. AWS KMS is integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. The feature can be configured via the AWS Management Console or the SDK and supports Amazon Translate’s asynchronous batch translation jobs.
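    A minimal sketch using boto3 (bucket names, role ARN, and key ARN are placeholders): start a batch translation job whose output is encrypted with a customer managed KMS key:

      import boto3

      translate = boto3.client("translate")
      job = translate.start_text_translation_job(
          JobName="my-batch-job",
          InputDataConfig={
              "S3Uri": "s3://my-input-bucket/docs/",
              "ContentType": "text/plain",
          },
          OutputDataConfig={
              "S3Uri": "s3://my-output-bucket/translated/",
              "EncryptionKey": {
                  "Type": "KMS",
                  "Id": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
              },
          },
          DataAccessRoleArn="arn:aws:iam::123456789012:role/MyTranslateRole",
          SourceLanguageCode="en",
          TargetLanguageCodes=["es"],
      )
      print(job["JobId"])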

    Batch translation is now available in seven AWS Regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), and Asia Pacific (Seoul).

    For step-by-step instructions on running a batch translation job, read our blog - "Translating documents, spreadsheets, and presentations in Office Open XML format using Amazon Translate." For more information, visit the Amazon Translate documentation.

    » AWS Toolkits for Cloud9, JetBrains and VS Code now support interaction with over 200 new resource types

    Posted On: Nov 5, 2021

    AWS Toolkits for JetBrains, VS Code, and Cloud9 now provide customers with the ability to select and view resources from a list of 245 resource types across 94 services without leaving their IDEs. With this release, in addition to accessing AWS services that are listed by default in the AWS Explorer pane, customers can choose from hundreds of resources to interact with. This feature uses the AWS Cloud Control API, enabling the Toolkits to continually and rapidly add new resource types in the future.

    To view the expanded list of resource types, users of JetBrains IDEs, VS Code, and Cloud9 can simply click the ‘Resources’ option in the AWS Explorer pane and select the resources to add to the Explorer. They will then see the new selections appear in a refreshed Explorer view in their IDEs. Some examples of commonly requested service resources that are now available in the Explorer are Amazon AppFlow, Amazon Kinesis Data Streams, and Amazon CloudFront distributions. After adding new resources to the Explorer pane, customers can interact with them by previewing a read-only version of the JSON file that describes a resource, copying the resource identifier, and viewing both the AWS documentation that explains the purpose of the resource type and the schema (in JSON format) for modeling the selected resource.

    Resource modification, such as editing, creating, or deleting resources, is disabled by default; customers can enable it through the Experimental features option in the settings for each IDE. Check out the user guides for the AWS Toolkits for Cloud9, JetBrains, and VS Code to learn more.

    Cloud9 customers will see the new features built into the Explorer pane. Users of JetBrains and VS Code IDEs can install the AWS Toolkit for JetBrains or VS Code, or update to the latest version, to use this feature, and can submit issues or feature requests to the open source GitHub repos for the Toolkit for JetBrains and the Toolkit for VS Code.

    » Amazon EC2 Fleet and Spot Fleet now support automatic instance termination with Capacity Rebalancing

    Posted On: Nov 5, 2021

    Starting today, you can configure EC2 Fleet and Spot Fleet to automatically terminate a Spot Instance when using Capacity Rebalancing. With Capacity Rebalancing, EC2 Fleet and Spot Fleet attempt to replace a Spot Instance when it is at an elevated risk of interruption as indicated by the EC2 Instance rebalance recommendation signal. Until now, EC2 Fleet or Spot Fleet launched a replacement Spot Instance without terminating the Spot Instance that received a rebalance recommendation, meaning you needed to either manually terminate the instance once workload rebalancing was completed, or let the instance run until it was interrupted by EC2. Now, you can set up EC2 Fleet or Spot Fleet to automatically terminate the instance that receives a rebalance recommendation with a specified termination delay.

    Automatic instance termination is useful for workloads where you can estimate the amount of time it takes to complete the job and set it as a termination delay, e.g., batch jobs with the same processing time. Please note that Amazon EC2 can still interrupt your Spot Instance with a standard two-minute notification before the termination delay ends. With automatic instance termination you do not need to build custom instance termination logic. You can also optimize your costs by having EC2 Fleet or Spot Fleet automatically terminate the instances once your workload rebalancing is completed.
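    A minimal sketch using boto3 (the launch template ID is a placeholder): an EC2 Fleet request that launches a replacement Spot Instance on a rebalance recommendation, then terminates the old instance after a five-minute delay:

      import boto3

      ec2 = boto3.client("ec2")
      resp = ec2.create_fleet(
          Type="maintain",
          TargetCapacitySpecification={
              "TotalTargetCapacity": 4,
              "DefaultTargetCapacityType": "spot",
          },
          LaunchTemplateConfigs=[{
              "LaunchTemplateSpecification": {
                  "LaunchTemplateId": "lt-0123456789abcdef0",
                  "Version": "$Latest",
              },
          }],
          SpotOptions={
              "MaintenanceStrategies": {
                  "CapacityRebalance": {
                      "ReplacementStrategy": "launch-before-terminate",
                      "TerminationDelay": 300,  # seconds before the old instance is terminated
                  }
              }
          },
      )
      print(resp["FleetId"])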

    EC2 Fleet and Spot Fleet simplify the provisioning of EC2 capacity across different EC2 instance types, Availability Zones, and purchase models (On-Demand, Reserved Instances, Savings Plans, and Spot) to optimize your application’s scalability, performance, and cost. To learn more about automatic instance termination with Capacity Rebalancing, please see documentation for EC2 Fleet, documentation for Spot Fleet, and AWS Compute Blog post.

    » Amazon Athena adds cost details to query execution plans

    Posted On: Nov 5, 2021

    Amazon Athena now displays the computational cost of your queries alongside their execution plans. With the release of the EXPLAIN ANALYZE statement, Athena can execute your query and return a detailed breakdown of its execution plan, along with the CPU usage of each stage and the number of rows processed.
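    A minimal sketch using boto3 (database, table, and output location are placeholders): run EXPLAIN ANALYZE and print the execution-plan report line by line:

      import time
      import boto3

      athena = boto3.client("athena")
      q = athena.start_query_execution(
          QueryString="EXPLAIN ANALYZE SELECT region, count(*) FROM sales GROUP BY region",
          QueryExecutionContext={"Database": "my_database"},
          ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
      )
      qid = q["QueryExecutionId"]
      while True:
          state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
          if state not in ("QUEUED", "RUNNING"):
              break
          time.sleep(1)
      for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
          print(row["Data"][0].get("VarCharValue", ""))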

    In addition to understanding a query’s execution plan, you can now see the time spent within each operator to better assess the performance profiles of query clauses and their chosen ordering. With row input and output counts, you can also validate the impact of query predicates, especially over large datasets. Administrators will also find the scanned data counts useful in planning the financial impact of their users’ workloads and identifying queries that could benefit from further optimization or that should be governed to control costs using Athena’s data usage controls.

    The EXPLAIN ANALYZE statement executes your queries to produce its results, so you will incur charges for use. To learn more, see Amazon Athena pricing. For more information on using execution plans and their detailed data, see Using Explain Plan in Athena and Understanding Athena Explain Plan Results.

    » Amazon DevOps Guru now Supports Multi-Account Insight Aggregation with AWS Organizations

    Posted On: Nov 5, 2021

    We are pleased to announce that you can now view the insights generated across all the accounts in your organization from a single delegated administrator account. Insights are alerts generated when Amazon DevOps Guru detects operational issues while monitoring your applications. These insights identify active or impending application issues, point to the likely cause of the issue, and recommend remedial steps to help you prevent customer-impacting events.

    Previously, an account could only view insights generated for that account in Amazon DevOps Guru. Today, in line with AWS best practices for well-architected multi-account environments, DevOps Guru has enabled multi-account support with AWS Organizations. With multi-account support enabled, you can designate a member account to manage insights across your entire organization. This delegated administrator can then view, sort, and filter insights from all accounts within your organization to develop an organization-wide view of the health of all monitored applications, without the need for any additional customization.
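    A minimal sketch using boto3 (the account ID is a placeholder; the service principal shown follows the usual AWS naming convention): register a member account as the delegated administrator from the organization's management account:

      import boto3

      orgs = boto3.client("organizations")
      orgs.register_delegated_administrator(
          AccountId="111122223333",
          ServicePrincipal="devops-guru.amazonaws.com",
      )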

    Amazon DevOps Guru is a Machine Learning (ML) powered service that makes it easy to improve an application’s operational performance and availability. By analyzing application metrics, logs, events, and traces, DevOps Guru identifies behaviors that deviate from normal operating patterns and creates an insight that alerts developers with issue details. When possible, DevOps Guru also provides proposed remedial steps via Amazon Simple Notification Service (SNS) and partner integrations like Atlassian Opsgenie and PagerDuty. To learn more, visit the DevOps Guru product and documentation pages, or post a question to the Amazon DevOps Guru forum.

    AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new accounts and allocate resources, simplify billing by setting up a single payment method for all of your accounts, create groups of accounts to organize your workflows, and apply policies to these groups for governance. In addition, AWS Organizations is integrated with other AWS services so you can define central configurations, security mechanisms, and resource sharing across accounts in your organization. To learn more about AWS Organizations, please visit the AWS Organizations product page.

    » Amazon Lex launches support for Austrian German

    Posted On: Nov 5, 2021

    Today, Amazon Lex announces language support for Austrian German. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides deep learning powered automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text so you can build applications with highly engaging user experiences and lifelike conversational interactions. Now you can deliver a robust and localized conversational experience that understands Austrian German.

    Amazon Lex can be applied to a diverse set of use cases such as virtual agents, interactive voice response systems, self-service chatbots, or application bots. Language support for Austrian German is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation page.

    » AWS IoT Core for LoRaWAN supports managed Firmware Over-the-Air Update

    Posted On: Nov 5, 2021

    AWS IoT Core for LoRaWAN is a fully managed LoRaWAN Network Server (LNS) capability of AWS IoT Core that lets wireless devices using low-power long-range wide area network (LoRaWAN) technology connect to the AWS cloud. Now, AWS IoT Core for LoRaWAN supports Firmware Update Over-the-Air (FUOTA), which allows customers to deliver secure and reliable firmware updates to devices in the field using the LoRaWAN multicast and fragmentation mechanisms defined by the LoRa Alliance. These mechanisms aim to minimize the device’s battery consumption and handle large file transfers (a few hundred KB).

    Customers can provide a firmware image, set up a multicast group with a target list of devices, and initiate a FUOTA task for the group of devices. With the FUOTA feature, AWS IoT Core for LoRaWAN customers can remotely update devices deployed in the field for bug fixes, security patches, and new features. It eliminates the need to replace devices or perform software updates by visiting sites in person, thus reducing maintenance and operational costs. This also allows customers to focus on building applications rather than doing the undifferentiated heavy lifting of developing and maintaining their own device management solution.
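    A minimal sketch using boto3 (ARNs, IDs, and the S3 URI are placeholders; the calls follow the AWS IoT Wireless FUOTA APIs, so check the current documentation for exact parameters): create a FUOTA task for a firmware image, associate a device, and start the task:

      import boto3

      iotw = boto3.client("iotwireless")
      task = iotw.create_fuota_task(
          Name="sensor-fw-1-2-3",
          FirmwareUpdateImage="s3://my-firmware-bucket/sensor-v1.2.3.bin",
          FirmwareUpdateRole="arn:aws:iam::123456789012:role/MyFuotaRole",
          LoRaWAN={"RfRegion": "US915"},
      )
      iotw.associate_wireless_device_with_fuota_task(
          Id=task["Id"],
          WirelessDeviceId="0123456789abcdef0123456789abcdef",  # placeholder device ID
      )
      iotw.start_fuota_task(Id=task["Id"])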

    The FUOTA feature is supported in all AWS Regions where AWS IoT Core for LoRaWAN is available. To learn more about FUOTA, please see the AWS IoT Core for LoRaWAN product page.

    » AWS Lambda now supports cross-account container image pulling from Amazon Elastic Container Registry

    Posted On: Nov 4, 2021

    AWS Lambda now allows you to create or update your functions with container images stored in an Amazon ECR repository in a different AWS account than that of your AWS Lambda function. Previously, you could only access container images stored in an Amazon ECR repository in the same AWS account as your AWS Lambda functions. If you used a centralized account for your Amazon ECR repositories, you needed to copy your container images into an Amazon ECR repository in the same account as your Lambda function. You can now simplify this workflow by accessing the container image stored in an Amazon ECR repository in a different account. 

    To access container images in an Amazon ECR repository in a different account, you can grant permissions to the AWS Lambda resource and the AWS Lambda service principal. To learn more about how to provide the required permissions for cross-account access, see the AWS Lambda documentation.
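    A minimal sketch using boto3 (account IDs and names are placeholders), run in the account that owns the repository: a repository policy that lets both the function's account and the Lambda service principal pull the image:

      import json
      import boto3

      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "CrossAccountPermission",
                  "Effect": "Allow",
                  "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
                  "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
              },
              {
                  "Sid": "LambdaECRImageCrossAccountRetrieval",
                  "Effect": "Allow",
                  "Principal": {"Service": "lambda.amazonaws.com"},
                  "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
                  "Condition": {
                      "StringLike": {
                          "aws:sourceArn": "arn:aws:lambda:us-east-1:444455556666:function:*"
                      }
                  },
              },
          ],
      }

      ecr = boto3.client("ecr")
      ecr.set_repository_policy(
          repositoryName="my-shared-images",
          policyText=json.dumps(policy),
      )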

    This feature is available in the following regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

    » Simplify CI/CD Configuration for AWS Serverless Applications and your favorite CI/CD system – General Availability

    Posted On: Nov 4, 2021

    You can now create secure continuous integration and deployment (CI/CD) pipelines that follow your organization’s best practices with a new pipeline configuration capability for serverless applications. AWS Serverless Application Model Pipelines (AWS SAM Pipelines) is a new feature of the AWS SAM CLI that gives you access to the benefits of CI/CD in minutes, such as accelerating deployment frequency, shortening lead time for changes, and reducing deployment errors. AWS SAM Pipelines comes with a set of default pipeline templates for popular CI/CD systems such as CloudBees CI/Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, and AWS CodeBuild/CodePipeline that follow AWS deployment best practices. The AWS SAM CLI is a developer tool that makes it easier to build, locally test, package, and deploy serverless applications.

    Creating pipelines that can deploy software safely and follow an organization’s governance requirements is a complex and time-consuming task that must be performed for each new application. For example, pipelines have to distribute deployment artifacts across multiple accounts and regions, ensure that deployments cannot make unsafe infrastructure changes, prevent unauthorized sources from injecting code in the deployment process, and incorporate approval stages for production releases. To minimize the amount of time development teams spend creating pipelines, large organizations invest in tools that automate these tasks – a significant upfront investment that takes many iterations to refine.

    AWS SAM Pipelines helps organizations create pipelines for their preferred CI/CD systems in minutes so that they can realize the benefits of CI/CD on day one of their projects. AWS SAM Pipelines comes with a set of default pipeline templates that encapsulate AWS’ deployment best practices, supports AWS CodeBuild/CodePipeline and third-party offerings, and uses standard JSON/YAML pipeline formats. The built-in best practices help perform multi-account and multi-region deployments and verify that pipelines cannot make unintended changes to infrastructure. Organizations can also supply their custom pipeline templates via Git repositories to standardize custom pipelines across hundreds of application development teams.

    AWS SAM Pipelines is available immediately. To learn more about AWS SAM Pipelines, see the tutorial on the AWS Compute Blog and instructional videos for CI/CD systems on ServerlessLand.com. You can install the AWS SAM CLI by following the instructions in the documentation.

    » AWS DataSync can now copy data between Hadoop Distributed File Systems (HDFS) and AWS Storage services

    Posted On: Nov 4, 2021

    AWS DataSync now supports transferring data between Hadoop Distributed File Systems (HDFS) and Amazon S3, Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server. Using DataSync, you can quickly, easily, and securely migrate files and folders from HDFS on your Hadoop cluster to AWS Storage. You can also use DataSync to replicate data on your Hadoop cluster to AWS for business continuity, copy data to AWS to populate your data lakes, or transfer data between your cluster and AWS for analysis and processing.

    AWS DataSync is an online data transfer service that provides you with a simple way to automate and accelerate copying data over the internet or with AWS Direct Connect. DataSync is feature-rich, with built-in scheduling, monitoring, encryption, and data integrity validation. DataSync simplifies and automates the process of copying your data to and from AWS, all with pay-as-you-go pricing. In addition to support for HDFS, DataSync also supports copying data between Network File System (NFS) shares, Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems. DataSync agents run external to your Hadoop cluster, so you can accelerate your migrations and simplify data transfers between your cluster and AWS without consuming compute and memory resources on the cluster or impacting your business processes.
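    A minimal sketch using boto3 (agent ARN, NameNode address, bucket, and role are placeholders): create an HDFS source location and an S3 destination, then a task that copies between them:

      import boto3

      ds = boto3.client("datasync")

      hdfs = ds.create_location_hdfs(
          AgentArns=["arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"],
          NameNodes=[{"Hostname": "namenode.example.com", "Port": 8020}],
          AuthenticationType="SIMPLE",
          SimpleUser="hdfs-user",
          Subdirectory="/data/exports",
      )

      s3 = ds.create_location_s3(
          S3BucketArn="arn:aws:s3:::my-data-lake",
          S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/MyDataSyncRole"},
      )

      task = ds.create_task(
          SourceLocationArn=hdfs["LocationArn"],
          DestinationLocationArn=s3["LocationArn"],
          Name="hdfs-to-s3",
      )
      print(task["TaskArn"])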

    AWS DataSync is available in 23 AWS Regions. You can learn more about the service in the DataSync documentation, or you can log in to the AWS DataSync console to get started.

    » AWS Amplify launches further data management capabilities in the Admin UI

    Posted On: Nov 4, 2021

    AWS Amplify Admin UI now allows you to generate seed data with Faker and download data to a CSV file. This simplifies creating and managing your data in Amplify, and allows for more realistic demo data that is quickly shareable.

    Using the Amplify Admin UI, developers and non-developers alike never have to touch the AWS console to view their app’s data. They can simply create a data model directly from the Admin UI, and manage their data in the same place. Modifying your data or your data model is just a few clicks away, and with our new features it’s even simpler than before to manage your app’s data.

    To seed data that looks realistic and matches your needs, navigate to the Content tab in your Admin UI, select “Seed data” from the Actions dropdown, and choose your constraints. To download this data, you can either select your desired rows or choose “Download results” from the Actions dropdown to download all rows. After selecting this, a CSV download begins.

    These two new data management capabilities for the Amplify Admin UI are fully hosted and available in 17 AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), EU (Stockholm), South America (São Paulo), and Middle East (Bahrain).

    Get started by reading the blog post, the documentation, or by trying it out yourself in the “Content” tab of your Amplify Admin UI.

    » AWS Snowcone SSD is now available in the US East (Ohio), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions

    Posted On: Nov 4, 2021

    The AWS Snowcone solid state drive (SSD) is now available in the US East (Ohio), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions, adding to the growing list of Regions already offering Snowcone SSD: EU (Frankfurt), EU (Ireland), US East (N. Virginia), and US West (Oregon). AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices. Snowcone is available in both hard disk drive (HDD) and solid state drive (SSD) models. Both device models are portable, rugged, and secure – small and light enough to fit in a backpack, and able to withstand harsh environments. Customers use Snowcones to deploy applications at the edge, and to collect data, process it locally, and move it to AWS either offline by shipping the device to AWS, or online by using AWS DataSync on Snowcone to send the data to AWS over the network.

    Edge locations often lack the space, power, and cooling needed for data center IT equipment to run applications and data migrations. With 2 CPUs, 4 GB of memory, 8 TB of usable storage on Snowcone HDD, 14 TB of usable storage on Snowcone SSD, and wired networking, Snowcone runs edge computing workloads with select Amazon EC2 instances or AWS IoT Greengrass to migrate data to AWS. Snowcone is small, approximately 9 inches x 6 inches x 3 inches, weighs 4.5 pounds (lbs.), and supports operation via battery for mobility.

    To set up and manage Snowcone devices, you can use AWS OpsHub, a graphical user interface that enables you to rapidly deploy edge computing workloads and simplify data migration to the cloud. AWS DataSync comes pre-installed on the device to move data online to and from Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server, as well as between AWS Storage services.

    Access the AWS Region services list to see where AWS Snowcone is available. To learn more, visit the AWS Snowcone documentation and AWS Snowcone product page. To get started, order Snowcone in the AWS Snow Family console.

    » Amazon SageMaker now supports inference testing with custom domains and headers from SageMaker Studio

    Posted On: Nov 4, 2021

    Amazon SageMaker Studio now enables customers to make test inference requests to endpoints with a custom URL and endpoints that require specific headers. Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps.

    Customers can use Amazon API Gateway or other services in front of the SageMaker endpoints to provide additional capabilities such as custom authorization, and custom domain names for the endpoint. For example, these capabilities can be used to create a publicly facing inference endpoint that’s protected with JSON Web Token (JWT) and branded with a custom domain. In these cases, customers need the flexibility to add headers needed by their custom authorizer and provide a custom URL for the inference request. Previously, customers could only make inference requests from SageMaker Studio to the default SageMaker real-time endpoint URL but could not customize headers or change the endpoint URL. Now, customers can specify headers and a custom endpoint URL for the test inference request. In addition, customers can generate the equivalent curl command from SageMaker Studio with the click of a button once the inference request has finished. This is useful for sharing with others who may not have access to the UI and for fine-tuning other properties of the inference request.
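
    As a sketch of what such a test request looks like outside Studio, the snippet below posts a payload to a hypothetical custom domain fronted by Amazon API Gateway with a JWT authorizer; the URL, token, and payload are placeholders.

        import requests

        # Custom inference URL and headers; the JWT is consumed by a custom
        # authorizer in front of the SageMaker endpoint.
        url = "https://inference.example.com/prod/classify"
        headers = {
            "Authorization": "Bearer <jwt-token>",
            "Content-Type": "text/csv",
        }
        payload = "5.1,3.5,1.4,0.2"

        response = requests.post(url, headers=headers, data=payload)
        print(response.status_code, response.text)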

    This feature is generally available in all regions where SageMaker and SageMaker Studio are available. To see where SageMaker is available, review the AWS region table. To learn more about this feature, please see our documentation. To learn more about SageMaker, visit our product page.

    » AWS Backup Vault Lock is now available in the AWS China (Beijing) Region and AWS China (Ningxia) Region

    Posted On: Nov 3, 2021

    AWS Backup Vault Lock is now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and Amazon Web Services China (Ningxia) Region, operated by NWCD. AWS Backup enables customers to centralize and automate data protection across AWS services through a fully managed and cost-effective solution.

    To learn more about AWS Backup, visit the product page and documentation. To get started, visit the AWS Backup console.

    » Amazon Aurora Global Database Expands Availability to AWS GovCloud (US) Regions

    Posted On: Nov 3, 2021

    Amazon Aurora Global Database is a feature of Amazon Aurora designed for applications with a global footprint. It allows a single Aurora database to span multiple AWS Regions, with fast replication to enable low-latency global reads and disaster recovery from Region-wide outages. With today’s launch, Amazon Aurora Global Database is available in the AWS GovCloud (US-East and US-West) Regions. Amazon Aurora Global Database customers will now be able to replicate across the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

    Aurora Global Database replicates writes from the primary to the secondary AWS Regions with a typical latency of less than one second, enabling both fast failover with minimal data loss and low latency global reads. In unplanned disaster recovery situations, you can promote any secondary AWS Region to take full read-write responsibilities in under a minute. Aurora Global Database is available for both the MySQL-compatible and PostgreSQL-compatible editions of Aurora in 20 AWS Regions. See Amazon Aurora Pricing for a complete list.

    You can create an Aurora Global Database with just a few clicks in the Amazon RDS Management Console, or by using the latest AWS SDK or CLI. To learn more, read the Aurora Global Database documentation.
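
    For example, here is a minimal boto3 sketch that promotes an existing GovCloud cluster to a global database and adds a secondary Region; all identifiers and ARNs are placeholders.

        import boto3

        # Promote an existing cluster in us-gov-west-1 to a global database.
        primary = boto3.client("rds", region_name="us-gov-west-1")
        primary.create_global_cluster(
            GlobalClusterIdentifier="my-global-db",
            SourceDBClusterIdentifier="arn:aws-us-gov:rds:us-gov-west-1:123456789012:cluster:my-primary",
        )

        # Add a read-only secondary cluster in us-gov-east-1.
        secondary = boto3.client("rds", region_name="us-gov-east-1")
        secondary.create_db_cluster(
            DBClusterIdentifier="my-secondary",
            GlobalClusterIdentifier="my-global-db",
            Engine="aurora-postgresql",
        )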

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. To get started with Aurora, take a look at our getting started page.

    » AWS Lake Formation now supports AWS PrivateLink

    Posted On: Nov 3, 2021

    AWS Lake Formation now supports managed VPC endpoints (powered by AWS PrivateLink) to access a data lake in a Virtual Private Cloud (VPC). With AWS Lake Formation-managed endpoints, you can now authorize access to the data lake for client applications and services inside of your VPC and on-premises using private IP connectivity. You can also configure VPC endpoint policies to have finer-grained control over how services access AWS Lake Formation.
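
    As a sketch, an interface endpoint for Lake Formation can be created with boto3 as below; the VPC, subnet, and security group IDs are placeholders, and the service name is assumed to follow the usual com.amazonaws.<region>.<service> pattern.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Create an interface endpoint so Lake Formation API calls stay on
        # private IPs inside the VPC.
        response = ec2.create_vpc_endpoint(
            VpcEndpointType="Interface",
            VpcId="vpc-0abc1234",
            ServiceName="com.amazonaws.us-east-1.lakeformation",
            SubnetIds=["subnet-0abc1234"],
            SecurityGroupIds=["sg-0abc1234"],
            PrivateDnsEnabled=True,
        )
        print(response["VpcEndpoint"]["VpcEndpointId"])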

    AWS Lake Formation is a service that makes it easy to set up a secure Amazon S3 data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better decisions.

    AWS PrivateLink support for AWS Lake Formation is available in the same AWS regions as AWS Lake Formation. To learn more, see Data Lakes and Analytics on AWS and visit the AWS Lake Formation Developer Guide.

    » AWS Security Hub adds support for AWS PrivateLink for private access to Security Hub APIs

    Posted On: Nov 3, 2021

    AWS Security Hub now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink so that you can securely initiate API calls to Security Hub from within your VPC without requiring those calls to traverse the Internet. AWS PrivateLink support for Security Hub is now available in all AWS Regions where Security Hub is available. To try the new feature, you can go to the VPC console, API, or SDK to create a VPC endpoint for Security Hub in your VPC. This creates an elastic network interface in your specified subnets. The interface has a private IP address that serves as an entry point for traffic that is destined for Security Hub. You can read more about Security Hub’s integration with PrivateLink here.
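
    Once the endpoint exists, calls can be directed at it explicitly. A minimal boto3 sketch with a placeholder endpoint hostname: with private DNS enabled, the default Security Hub hostname already resolves to the endpoint, so endpoint_url is only needed when private DNS is disabled.

        import boto3

        # The vpce hostname below is a placeholder copied from the endpoint's
        # DNS entries in the VPC console.
        securityhub = boto3.client(
            "securityhub",
            region_name="us-east-1",
            endpoint_url="https://vpce-0abc1234-abcdefgh.securityhub.us-east-1.vpce.amazonaws.com",
        )
        findings = securityhub.get_findings(MaxResults=10)
        print(len(findings["Findings"]))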

    AWS Security Hub is available globally and is designed to give you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Systems Manager Patch Manager, AWS Chatbot, AWS Config, and AWS IAM Access Analyzer. You can also receive and manage findings from over 60 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks that are based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard.

    You can take action on findings by investigating them in Amazon Detective or sending them to AWS Audit Manager via Security Hub’s automated integrations with those services. You can also use Amazon EventBridge rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), response and remediation workflows, and incident management tools.

    You can enable your 30-day free trial of AWS Security Hub with a single click in the AWS Management console. To learn more about AWS Security Hub capabilities, see the AWS Security Hub documentation. To start your 30-day free trial, see the AWS Security Hub free trial page.

    » Amazon SageMaker launches fully-managed RStudio Workbench

    Posted On: Nov 3, 2021

    Today we are excited to announce the launch of RStudio on Amazon SageMaker, the industry’s first fully managed RStudio integrated development environment (IDE). You can easily bring your current RStudio license and migrate your self-managed RStudio environments to Amazon SageMaker in a few simple steps.

    RStudio is the most popular IDE among R developers for data science, statistical analysis, and machine learning. However, deploying, securing, scaling, and maintaining RStudio yourself can be tedious and cumbersome. With RStudio on SageMaker, you can quickly and easily migrate your self-managed RStudio environments. You can bring your RStudio license to SageMaker through AWS License Manager, which makes it easy to manage your licenses at no additional charge. In addition, you can easily secure your RStudio environment using several built-in SageMaker capabilities that let you apply fine-grained access controls using AWS Identity and Access Management (IAM) policies, restrict network traffic to your Amazon Virtual Private Cloud (VPC), and automatically encrypt data at rest. Once setup is complete, developers can launch the RStudio IDE in a single click and start authoring code in the familiar RStudio interface with access to on-demand compute. The fully elastic compute can be dialed up and down without leaving the interface, significantly improving developer productivity.
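
    As a rough sketch of the setup, RStudio is enabled through the domain settings when creating a SageMaker domain; the role ARNs, VPC, and subnet IDs below are placeholders, and the license must already be registered with AWS License Manager.

        import boto3

        sm = boto3.client("sagemaker", region_name="us-east-1")

        # Create a Studio domain with RStudio Workbench enabled.
        sm.create_domain(
            DomainName="rstudio-domain",
            AuthMode="IAM",
            DefaultUserSettings={
                "ExecutionRole": "arn:aws:iam::123456789012:role/SageMakerExecutionRole"
            },
            DomainSettings={
                "RStudioServerProDomainSettings": {
                    "DomainExecutionRoleArn": "arn:aws:iam::123456789012:role/RStudioDomainRole"
                }
            },
            SubnetIds=["subnet-0abc1234"],
            VpcId="vpc-0abc1234",
        )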

    Amazon SageMaker already comes with SageMaker Studio Notebooks for preparing data, and building and deploying machine learning models. With the addition of RStudio on SageMaker, you can now unify your Python and R data science teams in a single place, enabling closer collaboration and efficient administration of your data science organization. Moreover, developers proficient in both R and Python can freely switch between RStudio and SageMaker Studio Notebooks. All of your work, including code, datasets, repositories, and other artifacts, is synchronized between the two environments through the common underlying Amazon EFS storage.

    RStudio on SageMaker is now generally available in sixteen AWS Regions. For more details, please visit our documentation.

    » Amazon RDS now supports cross account KMS keys for exporting RDS Snapshots

    Posted On: Nov 3, 2021

    Amazon Relational Database Service (Amazon RDS) now offers the ability to specify an AWS Key Management Service (KMS) customer managed key (CMK) from a different account when exporting an Amazon RDS Snapshot to Amazon S3. This option helps customers organize and consolidate their KMS keys by eliminating the need to create keys in each account that has snapshots.

    Snapshot export extracts data from snapshots and stores it in an Amazon S3 bucket in Apache Parquet format. Exported data can be analyzed using tools such as Amazon Athena. RDS secures the exported data by encrypting it with a KMS key while exporting to S3. Now, when you set up the task for exporting the snapshot data, you can specify a KMS key that is shared with the account where the snapshot currently resides. This can help you organize KMS keys in a centralized account. For more details, refer to the documentation.
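
    A minimal boto3 sketch of such an export, where the KMS key lives in a different account (111111111111) and must be shared with the snapshot’s account through its key policy; all ARNs and names are placeholders.

        import boto3

        rds = boto3.client("rds", region_name="us-east-1")

        # Export a snapshot to S3, encrypting with a cross-account KMS key.
        rds.start_export_task(
            ExportTaskIdentifier="orders-snapshot-export",
            SourceArn="arn:aws:rds:us-east-1:222222222222:snapshot:orders-snapshot",
            S3BucketName="orders-snapshot-exports",
            IamRoleArn="arn:aws:iam::222222222222:role/RdsSnapshotExportRole",
            KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        )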

    Cross-account KMS keys for snapshot exports are available in all AWS Regions where snapshot export is generally available. To learn more about these keys and how to configure them, see the Allowing users in other accounts to use a CMK topic in the AWS Key Management Service Developer Guide.

    » Introducing ability to connect to EMR clusters in different subnets in EMR Studio

    Posted On: Nov 3, 2021

    Amazon EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in R, Python, Scala, and PySpark. Today, we are excited to announce that EMR Studio Workspaces now support connecting to EMR clusters in different subnets that are associated with EMR Studio.

    Previously, you had to select a subnet while creating a Workspace. This meant that the Workspace could access EMR clusters only within that subnet. With this feature, you no longer have to select a subnet for your Workspace. This gives you the flexibility to attach your Workspace to EMR clusters in any of the subnets specified for your Studio. We recommend that during EMR Studio creation, you provide multiple subnets in different Availability Zones to give EMR Studio users access to clusters in different Availability Zones.
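
    As a sketch, subnets in several Availability Zones are passed at Studio creation; the IDs, roles, and bucket below are placeholders.

        import boto3

        emr = boto3.client("emr", region_name="us-east-1")

        # Subnets spanning multiple AZs let Workspaces attach to clusters
        # in any of them.
        emr.create_studio(
            Name="analytics-studio",
            AuthMode="IAM",
            VpcId="vpc-0abc1234",
            SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"],
            ServiceRole="arn:aws:iam::123456789012:role/EMRStudioServiceRole",
            WorkspaceSecurityGroupId="sg-0aaa1111",
            EngineSecurityGroupId="sg-0bbb2222",
            DefaultS3Location="s3://my-emr-studio-bucket/",
        )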

    EMR Studio is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions.

    To learn more about best practices on setting up subnets for EMR Studio, see our documentation here. To learn more about EMR Studio Workspaces, see our documentation here.

    » Database Activity Streams now supports Graviton2-based instances

    Posted On: Nov 3, 2021

    Database Activity Streams now supports Graviton2-based instances for Amazon Aurora PostgreSQL-Compatible Edition and Amazon Aurora MySQL-Compatible Edition. Database Activity Streams for Amazon Aurora provides a near real-time stream of database activities in your relational database for auditing and compliance purposes. When integrated with third party database activity monitoring tools, Database Activity Streams can monitor and audit database activity to provide safeguards for your database and help you meet compliance and regulatory requirements.

    AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores and deliver several performance optimizations over first-generation AWS Graviton processors. This includes 7x the performance, 4x the number of compute cores, 2x larger private caches per core, 5x faster memory, and 2x faster floating-point performance per core. Additionally, the AWS Graviton2 processors feature always-on fully encrypted DDR4 memory and 50% faster per core encryption performance. These performance improvements make Graviton2 database instances a great choice for database workloads.

    Solutions built on top of Database Activity Streams can help protect your database from internal and external threats. The collection, transmission, storage, and processing of database activity is managed outside your database, providing access control independent of your database users and admins. Your database activity is encrypted and asynchronously pushed to an Amazon Kinesis data stream provisioned on behalf of your Aurora cluster.
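
    A minimal boto3 sketch of starting a stream on an Aurora cluster running a Graviton2 instance class (for example, db.r6g); the cluster ARN and key ID are placeholders.

        import boto3

        rds = boto3.client("rds", region_name="us-east-1")

        # Start an asynchronous activity stream; events are encrypted with
        # the given KMS key and pushed to a managed Kinesis data stream.
        response = rds.start_activity_stream(
            ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
            Mode="async",
            KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            ApplyImmediately=True,
        )
        print(response["KinesisStreamName"])  # the stream partner tools consume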

    Partner applications such as IBM Security Guardium and Imperva integrate seamlessly with the Kinesis stream and can generate alerts and audit all activity in your Aurora database.

    Click here to learn more about Database Activity Streams.

    » Amazon EC2 now supports access to Red Hat Knowledgebase

    Posted On: Nov 3, 2021

    Starting today, customers running subscription-included Red Hat Enterprise Linux on Amazon EC2 can seamlessly access Red Hat Knowledgebase at no additional cost. The Knowledgebase is a library of articles, frequently asked questions (FAQs), and best-practice guides to help customers solve technical issues.

    Previously, subscription-included RHEL customers on AWS had to contact AWS Premium Support in order to access Red Hat Knowledgebase. Now, AWS has partnered with Red Hat to provide one-click access to Knowledgebase for all subscription-included RHEL customers. Customers can access Knowledgebase content in one of three ways: by clicking a link inside the Fleet Manager functionality in AWS Systems Manager, by using the sign-in with AWS option on the Red Hat Customer Portal, or via a link provided by AWS Support.

    This Red Hat Knowledgebase feature on Amazon EC2 is available in all commercial AWS Regions today except the two regions in China. Customers can learn more about purchase options and pricing here. To learn more about Red Hat Enterprise Linux on EC2, check out the frequently asked questions page.

    » Amazon CloudFront now supports configurable CORS, security, and custom HTTP response headers

    Posted On: Nov 2, 2021

    Today, Amazon CloudFront is launching support for response headers policies. You can now add cross-origin resource sharing (CORS), security, and custom headers to HTTP responses returned by your CloudFront distributions. You no longer need to configure your origins or use custom Lambda@Edge or CloudFront functions to insert these headers. 

    You can use CloudFront response headers policies to secure your application’s communications and customize its behavior. With CORS headers, you can specify which origins a web application is allowed to access resources from. You can insert any of the following security headers to exchange security-related information between web applications and servers: HTTP Strict Transport Security (HSTS), X-XSS-Protection, X-Content-Type-Options, X-Frame-Options, Referrer-Policy, and Content-Security-Policy. For example, HSTS enforces the use of encrypted HTTPS connections instead of plain-text HTTP. You can also add customizable key-value pairs to response headers using response headers policies to modify a web application’s behavior. Response headers you insert are also accessible to Lambda@Edge functions and CloudFront functions, enabling more advanced custom logic at the edge.

    With this release, CloudFront is also providing several pre-configured response headers policies. These include policies for default security headers, a CORS policy allowing resource sharing from any origin, a pre-flight CORS policy allowing all HTTP methods, and policies combining default security headers with CORS or pre-flight CORS. You can also create your own custom policies for various content and application profiles and apply them to any CloudFront distribution’s cache behavior that may have similar characteristics.
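
    A minimal boto3 sketch of a custom policy that enforces HSTS and adds one custom header; the names and values are illustrative only. The returned policy Id is then referenced from a distribution’s cache behavior as its response headers policy.

        import boto3

        cf = boto3.client("cloudfront")

        # Create a policy combining an HSTS security header with a custom
        # key-value response header.
        cf.create_response_headers_policy(
            ResponseHeadersPolicyConfig={
                "Name": "hsts-plus-custom-header",
                "SecurityHeadersConfig": {
                    "StrictTransportSecurity": {
                        "Override": True,
                        "IncludeSubdomains": True,
                        "Preload": False,
                        "AccessControlMaxAgeSec": 31536000,
                    }
                },
                "CustomHeadersConfig": {
                    "Quantity": 1,
                    "Items": [
                        {"Header": "X-App-Version", "Value": "1.2.3", "Override": True}
                    ],
                },
            }
        )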

    CloudFront response headers policies are available for immediate use via the CloudFront Console, the AWS SDKs, and the AWS CLI. For more information, refer to the CloudFront Developer Guide. There is no additional fee for using the CloudFront response headers policies.

    » Amazon Pinpoint launches in-app messaging as a new communications channel

    Posted On: Nov 2, 2021

    In-app messaging enables customers to display targeted messages in mobile or web applications, and provide a personalized user experience. When an end user is engaged with a mobile or web application, customers can use in-app messaging to show relevant content to drive high-value user actions such as repeat purchases, key content promotion, and user onboarding. After initial implementation, these messages can be created and launched through the Pinpoint console, without the need to make code changes.

    In-app messaging allows customers to design messages with a visual preview using pre-built templates, and options to A/B test up to five messages in one campaign. Amazon Pinpoint segmentation is used to select target users for the campaign based on attributes such as historical spend or last appointment visit. Customers can pick where to display the message within the application, along with how many times it should be viewed. Finally, the results of the campaign can be shown in Amazon Pinpoint analytics to optimize future campaigns, and allow messaging or creative to be fine-tuned.

    In-app messaging is generally available in the following regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and AWS GovCloud (US-West).

    For more information about in-app messaging, see In-app messaging channel in the Amazon Pinpoint User Guide.

    » Amazon Redshift announces native support for SQLAlchemy and Apache Airflow open-source frameworks

    Posted On: Nov 2, 2021

    Native support for the open source SQLAlchemy (sqlalchemy-redshift) and Apache Airflow frameworks is now available for Amazon Redshift. The updated Amazon Redshift dialect for SQLAlchemy supports the Amazon Redshift open source Python driver. With this release you can use single sign-on with your Identity Provider (IdP) to connect to Redshift clusters and avoid the pain of credential management. You can also use new Amazon Redshift features such as the TIMESTAMPTZ and TIMETZ datatypes when you migrate to the latest Redshift dialect for SQLAlchemy and Apache Airflow. These features are available in sqlalchemy-redshift version 0.8.6 and later.
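
    For instance, a SQLAlchemy engine can be pointed at the open source driver through the dialect prefix; this sketch assumes sqlalchemy-redshift 0.8.6+ and redshift_connector are installed, and the cluster hostname and credentials are placeholders.

        import sqlalchemy as sa

        # The redshift+redshift_connector prefix selects the Amazon Redshift
        # open source Python driver.
        engine = sa.create_engine(
            "redshift+redshift_connector://awsuser:secret@"
            "examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/dev"
        )

        with engine.connect() as conn:
            for row in conn.execute(sa.text("SELECT current_date")):
                print(row)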

    Apache Airflow has added RedshiftSQLHook and RedshiftSQLOperator, which allow Airflow users to execute Amazon Redshift operations. RedshiftSQLHook leverages the Amazon Redshift open source Python driver (redshift_connector), which supports authenticating via IAM or your Identity Provider as supported in SQLAlchemy. The integration of Apache Airflow with SQLAlchemy leverages the updated sqlalchemy-redshift.

    The GitHub repositories for these projects can be found at:

  •   https://github.com/sqlalchemy-redshift/sqlalchemy-redshift
  •   https://github.com/apache/airflow

    If you use SQLAlchemy or Apache Airflow, we recommend that you update to the latest version so that you can benefit from the latest features in Amazon Redshift. You can read the Redshift cluster management guide to learn more about the Amazon Redshift Python driver.

    » Amazon Corretto 17 Support Roadmap Announced

    Posted On: Nov 2, 2021

    On September 16th we announced GA of Corretto 17. Today, we are pleased to announce that we will be providing Long-Term Support (LTS) for Corretto 17 until September 2028. We will also be moving to a new 2-year cadence for Corretto LTS releases, along with the rest of the OpenJDK community, as of this release. Please read our Corretto 17 Announcement post on the AWS Open Source blog for more details. Corretto 17 is available from our downloads page.

    Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Corretto is distributed by Amazon under an open source license.

    » Amazon DevOps Guru increases coverage of Amazon EKS metrics and adds metric view by cluster

    Posted On: Nov 2, 2021

    Amazon DevOps Guru now supports additional metrics at the node and pod-level for clusters managed by Amazon Elastic Kubernetes Service (EKS).

    Amazon DevOps Guru is a Machine Learning (ML) powered service that makes it easy to improve an application’s operational performance and availability. When Amazon DevOps Guru detects anomalous behavior in these metrics, it creates an insight that contains recommendations, along with lists of related metrics and events, to help you diagnose and address the issue.

    These node-level metrics help pinpoint specific nodes that may have high memory, CPU, or filesystem utilization, instead of relying on cluster-level aggregates. Pod-level metrics, which include pod_cpu_utilization_over_pod_limit and pod_memory_utilization_over_pod_limit, help identify which pods are going over soft limits and are therefore at risk of hitting hard resource constraints and producing errors due to resource exhaustion. Amazon DevOps Guru now also tracks container restarts and notifies you of issues with pulling images or with application startup. We will continue to expand Amazon DevOps Guru support for containers.

    We are also introducing a new console view that shows Amazon EKS insights grouped together by metric at the cluster level in the Amazon DevOps Guru console. This view provides more visibility into where a potential problem lies within the EKS cluster. For example, if a node is having network connectivity issues or is experiencing disk pressure, you will see the node and namespace anomalies grouped together under that metric by cluster, which will help you identify the specific node or namespace with the issue.

    To use these new features, you will need to enable Container Insights on Amazon EKS.

    You can get started with Amazon DevOps Guru by selecting coverage from your CloudFormation stacks or your AWS account. To learn more, visit the DevOps Guru product page and the documentation pages, or post a question to the Amazon DevOps Guru forum.

    » Amazon Time Sync Service now makes it easier to generate and compare timestamps

    Posted On: Nov 2, 2021

    Amazon Time Sync Service now allows you to easily generate and compare timestamps from Amazon EC2 instances with ClockBound, an open source daemon and library. This information is valuable for determining order and consistency of events and transactions across EC2 instances, independent of the instances’ respective geographic locations. ClockBound calculates your Amazon EC2 instance’s clock error bound to measure its clock accuracy and allows you to check whether a given timestamp is in the past or future with respect to your instance’s current clock. On every call, ClockBound simultaneously returns two pieces of information: the current time and the associated absolute error range. This means that the actual time of a ClockBound timestamp is guaranteed to fall within a known range.
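
    The following is a conceptual Python illustration of that guarantee, not the ClockBound client API: each reading is an interval [earliest, latest] containing true time, and two events can only be ordered with certainty when their intervals do not overlap.

        from dataclasses import dataclass

        @dataclass
        class BoundedTimestamp:
            earliest: float  # seconds since epoch
            latest: float

        def definitely_before(a: BoundedTimestamp, b: BoundedTimestamp) -> bool:
            # a happened before b only if a's interval ends before b's begins.
            return a.latest < b.earliest

        tx1 = BoundedTimestamp(earliest=100.000, latest=100.002)
        tx2 = BoundedTimestamp(earliest=100.005, latest=100.007)
        print(definitely_before(tx1, tx2))  # True: the intervals do not overlap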

    To get started, first make sure you are using Chrony. Then install the ClockBound daemon and library, or build your own library to integrate ClockBound into your application. For the best clock accuracy, we also recommend using the Amazon Time Sync Service. The Amazon Time Sync Service and Chrony are configured by default on Amazon Linux 2 instances.

    To learn more about ClockBound, including installation instructions, see ClockBound on GitHub.

    To learn more about the Amazon Time Sync Service, see setting your time in the EC2 user guide.

    To read more about clock accuracy and clock error bound, refer to this blog post.

    » AWS Secrets Manager increases secrets limit to 500K per account

    Posted On: Nov 2, 2021

    AWS Secrets Manager now supports a limit of up to 500,000 secrets per account per region, up from the previous limit of 40,000 secrets. This simplifies secrets management for software as a service (SaaS) or platform as a service (PaaS) applications that rely on unique secrets for large numbers of end customers.

    For existing customers, this increased secrets-per-account limit will be reflected in your accounts automatically. No action is required on your end to benefit from this improvement. This increased secrets limit is available in all regions where the service operates. For a list of regions where Secrets Manager is available, see the AWS Regions Table. To learn more about AWS Secrets Manager limits, visit the AWS Secrets Manager Developer Guide.

    » AWS Graviton2 based T4g instances are now available in AWS GovCloud (US-West) Region

    Posted On: Nov 2, 2021

    Starting today, Amazon EC2 T4g instances are available in the AWS GovCloud (US-West) Region. T4g instances are powered by Arm-based AWS Graviton2 processors and deliver up to 40% better price performance over T3 instances. These instances provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. They offer a balance of compute, memory, and network resources for a broad spectrum of general purpose workloads, including large scale micro-services, caching servers, search engine indexing, e-commerce platforms, small and medium databases, virtual desktops, and business-critical applications.

    AWS Graviton2 processors are custom-built by AWS using 64-bit Arm Neoverse N1 cores to enable the best price performance for cloud workloads running in Amazon EC2. They deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors.

    Amazon EC2 T4g instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. T4g instances provide up to 5 Gbps of networking bandwidth and 2,780 Mbps of Elastic Block Store (EBS) bandwidth. They are available in 7 sizes, providing up to 8 vCPUs, and are purchasable On-Demand, as part of Savings Plans, as Reserved Instances, or as Spot Instances. They are supported by a broad and growing ecosystem of operating systems and services from Independent Software Vendors (ISVs) as well as AWS.

    With this regional expansion, Amazon EC2 T4g instances are now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Paris), and Europe (Milan) Regions, the China (Ningxia) and China (Beijing) Regions, and the AWS GovCloud (US-West) Region.

    To get started with AWS Graviton2-based Amazon EC2 T4g instances, visit the AWS Management Console or use the AWS Command Line Interface (CLI) or AWS SDKs. To learn more, visit the AWS Graviton page or the Getting Started GitHub page.

    » Amazon RDS on AWS Outposts now supports exporting database logs to Amazon CloudWatch

    Posted On: Nov 2, 2021

    Amazon Relational Database Service (Amazon RDS) on AWS Outposts can now export database logs to Amazon CloudWatch. You can now monitor all of your Amazon RDS on AWS Outposts database instances from the same single pane of glass as your Amazon RDS database instances in our AWS Regions.

    Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, and IT managers. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, traces, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can use CloudWatch to help detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.

    Amazon RDS on AWS Outposts logs incur the same charges in your monthly bill as RDS database logs in CloudWatch in an AWS Region.

    For more information, see the Amazon CloudWatch Pricing page and our RDS on Outposts documentation. Get started exporting your Amazon RDS on AWS Outposts database logs by following this how-to.
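
    As a sketch, log export is enabled on an existing instance through its log export configuration; the instance identifier is a placeholder and valid log types depend on the database engine (the ones below are MySQL’s).

        import boto3

        rds = boto3.client("rds", region_name="us-east-1")

        # Enable CloudWatch Logs export on an RDS on Outposts instance.
        rds.modify_db_instance(
            DBInstanceIdentifier="my-outposts-mysql",
            CloudwatchLogsExportConfiguration={
                "EnableLogTypes": ["error", "general", "slowquery"]
            },
            ApplyImmediately=True,
        )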

    » AWS DeepRacer introduces multi-user account management

    Posted On: Nov 1, 2021

    With an AWS DeepRacer multi-user account set up, organizers (aka DeepRacer Account Administrators) can now provide racers access to the AWS DeepRacer service under their AWS account ID, monitor spending on training and storage, enable or disable training, and view and manage models for every user of their AWS account, all from the AWS DeepRacer console.

    Until now, organizations hosting an AWS DeepRacer event for their employees, students, or customers had to create and manage an AWS account for each of their users. This was cumbersome and time-consuming, often limiting the number of users able to engage with the AWS DeepRacer console. With each individually managed AWS account, organizers had little control of or visibility into usage, leading to companies spending more money on training their employees than expected. Now, anyone hosting an AWS DeepRacer event can activate the AWS DeepRacer multi-user account feature in the AWS DeepRacer console, apply policies that enable thousands of DeepRacer-only user profiles to be created under their AWS account ID, monitor usage of all user profiles under their AWS account, and stop or resume training charges at any time from a single dashboard in the AWS DeepRacer console.

    Learn more in our documentation here.

    » Amazon MemoryDB for Redis now supports AWS CloudFormation

    Posted On: Nov 1, 2021

    Amazon MemoryDB for Redis now supports AWS CloudFormation, enabling you to manage MemoryDB resources using CloudFormation templates. Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. CloudFormation makes it easier for you to create and manage MemoryDB resources without having to configure MemoryDB separately through the console. For example, you can create MemoryDB clusters, subnet groups, parameter groups, and users using CloudFormation templates. 
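
    A minimal sketch of such a template, created here with boto3; the stack name, cluster name, node type, and ACL are placeholders (“open-access” is the default ACL MemoryDB provides).

        import json
        import boto3

        # A template declaring a single MemoryDB cluster resource.
        template = {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Resources": {
                "SampleCluster": {
                    "Type": "AWS::MemoryDB::Cluster",
                    "Properties": {
                        "ClusterName": "sample-cluster",
                        "NodeType": "db.t4g.small",
                        "ACLName": "open-access",
                        "NumShards": 1,
                    },
                }
            },
        }

        cfn = boto3.client("cloudformation", region_name="us-east-1")
        cfn.create_stack(StackName="memorydb-sample", TemplateBody=json.dumps(template))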

    AWS CloudFormation support for Amazon MemoryDB is available in all AWS regions where Amazon MemoryDB is available. To get started with AWS CloudFormation for Amazon MemoryDB, see the AWS CloudFormation user guide. To learn more about Amazon MemoryDB, visit the Amazon MemoryDB product page or documentation. Have questions or feature requests? Email us at: memorydb-help@amazon.com.

    » AWS Transit Gateway Network Manager launches new APIs to simplify network and route analysis in your global network

    Posted On: Nov 1, 2021

    Today, AWS Transit Gateway Network Manager launched new APIs that enable you to perform automated analysis of your global network and allow you to build your own topological views for visualization purposes. You can get an aggregated view of your global network resources, analyze routes, and retrieve telemetry data across AWS regions using the following APIs:

  • Describe the network resources for the global network (GetNetworkResources)
  • Get the network health information of the global network (GetNetworkTelemetry)
  • Get the network routes of a specific route table (GetNetworkRoutes)
  • Get the network resource relationships of a specific resource (GetNetworkResourceRelationships)
  • Get the count of network resources for the global network (GetNetworkResourceCounts)

    In addition, you can now perform route analysis in your transit gateway route tables by calling the following APIs:

  • Start analyzing the routing path between the source and destination (StartRouteAnalysis)
  • Get route analysis results (GetRouteAnalysis)
  • Update the resource metadata for the global network (UpdateNetworkResourceMetadata)

    Using these APIs, you can programmatically verify that the transit gateway route table configuration will work as expected before you start sending live traffic, validate your existing route configuration, and diagnose route-related issues that are causing traffic disruption in your global network.
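
    A minimal boto3 sketch of the route analysis flow; the global network ID, attachment ARNs, and IP addresses are placeholders.

        import boto3

        nm = boto3.client("networkmanager", region_name="us-west-2")

        # Start analyzing the path between two transit gateway attachments.
        analysis = nm.start_route_analysis(
            GlobalNetworkId="global-network-0abc1234",
            Source={
                "TransitGatewayAttachmentArn": "arn:aws:ec2:us-west-2:123456789012:transit-gateway-attachment/tgw-attach-0aaa1111",
                "IpAddress": "10.0.0.10",
            },
            Destination={
                "TransitGatewayAttachmentArn": "arn:aws:ec2:us-east-1:123456789012:transit-gateway-attachment/tgw-attach-0bbb2222",
                "IpAddress": "10.1.0.10",
            },
        )

        # Retrieve the result of the analysis.
        result = nm.get_route_analysis(
            GlobalNetworkId="global-network-0abc1234",
            RouteAnalysisId=analysis["RouteAnalysis"]["RouteAnalysisId"],
        )
        print(result["RouteAnalysis"]["Status"])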

    To get started using the AWS Transit Gateway Network Manager APIs, please refer to the API reference documentation.

    » Amazon Simple Email Service now offers a new console experience

    Posted On: Nov 1, 2021

    Amazon Simple Email Service (Amazon SES) is pleased to announce the launch of the newly redesigned service console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer.

    Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console, such as the account-level suppression list, a default configuration set, 2048-bit DKIM keys, and the account dashboard.

    With this launch, the new Amazon SES console will be the default service console in the AWS regions where Amazon SES is available, other than AWS GovCloud (US-West). For a complete list of all of the regional endpoints for Amazon SES, see AWS Service Endpoints in the AWS General Reference.

    Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page.


    Page 1|Page 2|Page 3|Page 4