The contents of this page are copied directly from AWS blog sites to make them Kindle friendly. Some styles and sections from these pages have been removed so they render properly in the 'Article Mode' of the Kindle e-Reader browser. All content on this page is the property of AWS.


AWS announces phone number enrichments for Amazon Fraud Detector Models

Posted On: Dec 29, 2021

We are excited to announce the launch of phone number enrichments for Amazon Fraud Detector machine learning (ML) models. Amazon Fraud Detector (AFD) is a fully managed service that makes it easy to identify potentially fraudulent online activities, such as the creation of fake accounts or online payment fraud. Using ML under the hood and based on over 20 years of fraud detection expertise from Amazon, AFD automatically identifies potentially fraudulent activity in milliseconds—with no ML expertise required.

As part of the model training process, Amazon Fraud Detector enriches raw data elements such as IP addresses and the Bank Identification Number (BIN) of payment instruments with data such as the geolocation of the IP address or the issuing bank for a credit card. Augmenting users’ data with such enrichments ensures best-in-class performance from AFD models. Starting today, Amazon Fraud Detector also enriches phone number data with additional information such as geolocation and the original carrier. This new enrichment boosts performance for models that use phone numbers, enabling these models to capture up to 16% more fraud at a 4% false positive rate.

Phone number enrichments are automatically enabled for AFD’s Online Fraud Insights (OFI) and Transaction Fraud Insights (TFI) model types in all regions where AFD is available. AFD customers can make use of this new enrichment by retraining their AFD models that use phone number as one of the event variables. For additional details, see our documentation page.

» AWS Secrets Manager now automatically enables SSL connections when rotating database secrets

Posted On: Dec 22, 2021

AWS Secrets Manager now transparently supports SSL connections when rotating database secrets for Amazon RDS MySQL, MariaDB, SQL Server, PostgreSQL, and MongoDB. You can now enforce SSL to be always enabled for these databases, without first modifying AWS Lambda resources provided by AWS Secrets Manager. 

Secrets Manager has always supported SSL connections to databases, but customers were responsible for updating their rotation Lambda code to include necessary certificates for Amazon RDS. Customers were also responsible for updating rotation code when RDS certificates rotated. With this launch, rotation Lambda code for all RDS databases (except Oracle) now connects to the database using SSL by default for new rotations. All necessary certificates are built-in and automatically updated. 

For new secret rotations, no additional action is needed to benefit from this feature. Simply set up the rotation as explained in the Secrets Manager user guide. For existing rotations, you must upgrade your rotation Lambdas to the latest version. For more details on how to upgrade, see Enabling SSL for Existing Rotations.  
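As a sketch of what "setting up the rotation" involves, the dictionary below shows the shape of a RotateSecret request (for example, passed to boto3's Secrets Manager client as `client.rotate_secret(**params)`). The secret name, Lambda ARN, and schedule are hypothetical placeholders, not values from this announcement.

```python
# Hypothetical RotateSecret request parameters; with SSL now handled by the
# provided rotation Lambda, no certificate wiring is needed in this call.
params = {
    "SecretId": "my-rds-mysql-secret",  # hypothetical secret name
    "RotationLambdaARN": (
        "arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation"
    ),
    "RotationRules": {"AutomaticallyAfterDays": 30},  # rotate every 30 days
}

def rotation_interval_days(p):
    """Read back the configured rotation interval from the request."""
    return p["RotationRules"]["AutomaticallyAfterDays"]
```

In a real setup you would call `boto3.client("secretsmanager").rotate_secret(**params)` with valid credentials; the dictionary alone carries the configuration.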

» Amazon S3 on Outposts launches in AWS GovCloud (US) Regions

Posted On: Dec 22, 2021

Amazon S3 on Outposts is now available in both AWS GovCloud (US) Regions. The expansion into the AWS GovCloud (US) Regions allows U.S. government agencies and contractors to move sensitive workloads onto their AWS Outposts by addressing their specific regulatory and compliance requirements for object storage.

Amazon S3 on Outposts helps you meet your low latency, local data processing, and data residency needs by delivering object storage to your Outpost on premises. Using the S3 APIs and features, S3 on Outposts makes it easier to store, secure, tag, retrieve, report on, and control access to the data on your Outposts. AWS Outposts is a fully managed service that extends AWS infrastructure, services, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience.

For a list of all regions where S3 on Outposts is available, see the Outposts Region table. To learn more about S3 on Outposts, visit our documentation.

» Amazon Connect Chat user interface now supports browser notifications for your customers

Posted On: Dec 22, 2021

The Amazon Connect Chat user interface now supports browser notifications through your customers’ web browser, improving customer satisfaction by letting the customer know when the agent has responded to their message. When your customer receives a new chat message while in another application or browser window, they will receive a notification through their web browser that they can easily click to view the message in the Chat user interface. This feature is supported out of the box without the need for manual configuration.

Amazon Connect Chat browser notifications are available in all AWS regions where the Amazon Connect Chat user interface is offered. To learn more and get started, visit the help documentation or the Amazon Connect website.

» Amazon Connect Customer Profiles now supports pre-configured connectors from Segment and Shopify

Posted On: Dec 21, 2021

Amazon Connect Customer Profiles now supports near real-time data ingestion from Segment and Shopify. Use existing Segment data sources to ingest customer information such as name, email address, and phone number into Amazon Connect. Once your connectors are configured, customer profiles are created and updated in near real time for new customer actions such as new customer registrations, changes in contact information, and new order transactions. When a customer calls in, Customer Profiles identifies the customer and presents the agent with the latest customer information to help them serve the customer.

With the Shopify connector, as soon as a customer places an order in the Shopify storefront, Amazon Connect Customer Profiles is designed to update the profile information with the latest order details. When the customer calls in to inquire about a recent order they have placed, the agent can review order information such as order name, price, and order time to help respond to pre-purchase and post-purchase questions without navigating between multiple applications, helping to increase agent productivity.

Get started with the Segment and Shopify connectors by following the simple steps in our documentation. Amazon Connect Customer Profiles has pre-built connectors for Salesforce, ServiceNow, Zendesk, Marketo, Amazon S3, Segment, and Shopify in Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Canada (Central), US West (Oregon), and US East (N. Virginia). To learn more about Amazon Connect Customer Profiles, please visit the Customer Profiles website.

» AWS Lambda now supports Internet Protocol Version 6 (IPv6) endpoints for inbound connections

Posted On: Dec 21, 2021

AWS Lambda now supports IPv6 endpoints for inbound connections, allowing customers to invoke Lambda functions over IPv6. This helps customers to meet IPv6 compliance requirements, and removes the need for expensive networking equipment to handle address translation between IPv4 and IPv6.

To use this new capability, configure your applications to use AWS Lambda’s new dual-stack endpoints which support both IPv4 and IPv6. AWS Lambda’s new dual-stack endpoints have the format lambda.region.api.aws. For example, the dual-stack endpoint in US East (N. Virginia) is lambda.us-east-1.api.aws. When you make a request to a dual-stack Lambda endpoint, the endpoint resolves to an IPv6 or an IPv4 address, depending on the protocol used by your network and client.
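The endpoint naming convention above can be captured in a small helper. The boto3 wiring in the comment is an assumption about how you would point a client at the dual-stack endpoint; only the endpoint format itself comes from the announcement.

```python
def lambda_dualstack_endpoint(region: str) -> str:
    """Build the dual-stack (IPv4 + IPv6) AWS Lambda endpoint for a region,
    following the lambda.<region>.api.aws format described above."""
    return f"lambda.{region}.api.aws"

# Hypothetical boto3 wiring (requires boto3 and AWS credentials):
# client = boto3.client(
#     "lambda",
#     region_name="us-east-1",
#     endpoint_url="https://" + lambda_dualstack_endpoint("us-east-1"),
# )
```

Whether the resolved address is IPv4 or IPv6 is decided by DNS and your network stack, not by anything in the client code.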

To connect programmatically to an AWS service, you can specify an endpoint in AWS CLI or AWS SDK. For more information on service endpoints, see AWS Service endpoints. To learn more about Lambda’s service endpoints, see AWS Lambda service endpoints in the AWS documentation.

You can use AWS Lambda’s new dual-stack endpoints for inbound connections at no additional cost. The new dual-stack endpoints are generally available in US East (N. Virginia), US West (N. California), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), Canada (Central), Middle East (Bahrain), South America (São Paulo), and Africa (Cape Town). For more information on availability, please see the AWS Region table.

» Amazon Chime SDK now supports stereo audio

Posted On: Dec 21, 2021

Amazon Chime SDK lets developers add real-time audio, video, and screen share to their web and mobile applications. Amazon Chime SDK meetings now support stereo audio, with 48kHz sampling and 128kbps encoding. These capabilities enable developers to capture live instruments or stereo microphone audio in their applications, as well as share pre-recorded music or other stereo content with stereo playback for users.

The Amazon Chime SDK service supports 48kHz stereo audio, and the capabilities of the individual client libraries are as follows:

  • JavaScript: 48kHz audio, stereo capture at 128kbps, stereo playback
  • iOS and Android: 48kHz audio, mono capture at 64kbps, stereo playback
  • Some browsers, operating systems, or devices may limit the audio capture sample rate or be unable to play stereo audio. When limited, audio will be captured at the highest available sample rate, and a down-mixed mono audio will be provided for playback.
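The "down-mixed mono" fallback in the last bullet can be illustrated with a minimal sketch: averaging the two channels sample by sample is one common down-mix. This is an illustration only; the Chime SDK's actual down-mix algorithm is not described in the announcement.

```python
def downmix_stereo_to_mono(left, right):
    """Average the two channels sample-by-sample: a simple illustration of
    the kind of mono down-mix applied when a device cannot play stereo."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]
```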

To learn more about Amazon Chime SDK meetings and stereo audio, refer to the following resources:

  • Amazon Chime SDK
  • Amazon Chime SDK Developer Guide

» Amazon MSK adds support for Apache Kafka version 2.7.2

Posted On: Dec 21, 2021

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.7.2 for new and existing clusters. Apache Kafka 2.7.2 includes security improvements and bug fixes. To learn more about these fixes, you can review the Apache Kafka release notes for 2.7.2.
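For existing clusters, the upgrade goes through the MSK UpdateClusterKafkaVersion API. The dictionary below sketches the request shape (for example, boto3's `kafka` client `update_cluster_kafka_version(**upgrade_params)`); the cluster ARN and current-version token are hypothetical placeholders.

```python
# Hypothetical UpdateClusterKafkaVersion request for an in-place upgrade
# of an existing MSK cluster to Apache Kafka 2.7.2.
upgrade_params = {
    "ClusterArn": "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abcd1234",
    "CurrentVersion": "K3AEGXETSR30VB",  # placeholder cluster metadata version
    "TargetKafkaVersion": "2.7.2",
}
```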

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easy for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on applications and less time managing clusters. To learn how to get started, see the Amazon MSK Developer Guide.

Support for Apache Kafka version 2.7.2 is offered in all AWS regions where Amazon MSK is available.

» NICE DCV releases version 2021.3 with DCV Connection Gateway

Posted On: Dec 21, 2021

NICE DCV version 2021.3 introduces multiple new features such as DCV Connection Gateway and a refreshed DCV Web Client user interface. NICE DCV is a high-performance remote display protocol that helps customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.

NICE DCV version 2021.3 contains the following features and improvements:

  • DCV Connection Gateway - The DCV Connection Gateway is an optional component that enables customers to securely access their DCV sessions through a single IP/hostname, without exposing the entire fleet of DCV servers to the public internet. The DCV Connection Gateway is available at no additional cost to customers and supports both WebSocket (TCP) and QUIC (UDP) connection methods.
  • Web client user interface updates - The new web client user interface introduces a redesigned DCV menu bar with new icons, improved notification messaging, and fixes for many usability paper cuts that affected the user experience.
  • Support for new Amazon EC2 graphics-optimized instances - NICE DCV now supports Amazon EC2 G5 and G5g instances. G5 instances are the latest generation of NVIDIA GPU-based instances and deliver up to 3x better performance for graphics-intensive applications compared to G4dn instances. G5g instances are powered by AWS Graviton2 processors, and are designed to provide the best price performance in Amazon EC2 for graphics workloads such as Android game streaming.
  • Support for Windows 11 and Windows Server 2022 - Customers can now use the NICE DCV Windows native client on Windows 11 and NICE DCV server software on Windows Server 2022.

NICE DCV is the remote display protocol used by Amazon AppStream 2.0, AWS RoboMaker, and Amazon Nimble Studio. For more information, please see the NICE DCV 2021.3 release notes or visit the NICE DCV webpage to download and get started with DCV.

» Amazon MSK adds support for Apache Kafka version 2.6.3

Posted On: Dec 21, 2021

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.6.3 for new and existing clusters. Apache Kafka 2.6.3 includes security improvements and bug fixes. To learn more about these fixes, you can review the Apache Kafka release notes for 2.6.3.

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easy for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on applications and less time managing clusters. To learn how to get started, see the Amazon MSK Developer Guide.

Support for Apache Kafka version 2.6.3 is offered in all AWS regions where Amazon MSK is available.

» Amazon Connect Customer Profiles is now PCI compliant and in scope for SOC 1 and SOC 2

Posted On: Dec 21, 2021

Amazon Connect Customer Profiles is now Payment Card Industry Data Security Standard (PCI DSS) compliant and in scope for System and Organization Controls (SOC 1 and SOC 2). Customer Profiles is designed to automatically bring together customer information from multiple applications and surface it to a contact center agent at the moment they begin interacting with a customer.

Amazon Connect Customer Profiles is available in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Tokyo). For more information on how to add new data sources with Amazon S3, visit the documentation pages. To learn more, see the API reference guide or help documentation, visit our webpage, or read the blog post.

» Porting Assistant for .NET adds support for .NET 6 and conversion of ASP.NET Web Forms applications to ASP.NET Core Blazor

Posted On: Dec 21, 2021

Porting Assistant for .NET now supports .NET 6 as a target framework and porting of certain ASP.NET Web Forms application configurations to the ASP.NET Core Blazor framework. With this release, Porting Assistant will translate select Web Forms view/UI layer files, page life-cycle events, and application life-cycle events to the cross-platform Blazor framework. Developers can use the Porting Assistant for .NET standalone tool or the Porting Assistant for .NET Visual Studio IDE extension to modernize their Web Forms applications. This release also enables developers to port their applications to the latest Long Term Support release, .NET 6, in addition to the .NET Core 3.1 and .NET 5 target versions.

Porting Assistant for .NET is an open source analysis tool that reduces the manual effort and guesswork involved in porting .NET Framework applications to .NET Core 3.1, .NET 5, or .NET 6, helping customers move to Linux faster. It identifies incompatibilities, generates an assessment report with known replacement suggestions, and assists with porting. By modernizing .NET applications to Linux, customers can take advantage of the improved performance, increased security, reduced cost, and robust ecosystem of Linux.

Learn more about Porting Assistant for .NET in our documentation.

» Amazon RDS for Oracle now supports Oracle Connection Manager (CMAN)

Posted On: Dec 21, 2021

Amazon Relational Database Service (Amazon RDS) for Oracle now supports additional use cases for Oracle Connection Manager (CMAN). CMAN is a proxy server that forwards connection requests to database servers or other proxy servers.

The primary functions of Oracle Connection Manager are access control and session multiplexing. CMAN allows rule-based access control configuration to filter out user-specified client requests, which lets CMAN reject connections from unknown clients. CMAN can also be configured for session multiplexing by funneling multiple client sessions through a single network connection to a shared server destination. Session multiplexing in CMAN reduces operating system and network resource requirements by minimizing the number of network connections made to a server.
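As a rough illustration of the access-control rules described above, the fragment below sketches a cman.ora rule list, assuming standard Oracle CMAN configuration syntax. The alias, hostname, and CIDR range are hypothetical, and this is not an RDS-specific configuration.

```
cman_example =
  (configuration=
    (address=(protocol=tcp)(host=cman-host.example.com)(port=1521))
    (rule_list=
      (rule=(src=192.0.2.0/27)(dst=mydb.example.com)(srv=*)(act=accept))
      (rule=(src=*)(dst=*)(srv=*)(act=reject))))
```

The first rule admits clients from one subnet to one destination; the final catch-all rule rejects everything else, which is the "reject connections from unknown clients" behavior.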

CMAN is available for all supported versions of Oracle Database Enterprise Edition (EE). To learn more about how to configure Oracle Connection Manager (CMAN), please refer to the Configuring CMAN documentation.

Amazon RDS for Oracle makes it easy to set up, operate, and scale Oracle Database deployments in the cloud. See Amazon RDS for Oracle Database Pricing for regional availability.

» NICE DCV releases web client SDK 1.0.4 with multiple connection support

Posted On: Dec 21, 2021

NICE DCV web client software development kit (SDK) version 1.0.4 introduces support for multiple concurrent connections. Developers can now use the web client SDK to build applications with multiple side-by-side streaming views that show different DCV sessions in a single webpage.

NICE DCV is a high-performance remote display protocol that helps users securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs. The NICE DCV web client SDK is an optional JavaScript SDK component that enables developers and independent software vendors (ISVs) to integrate a customized NICE DCV web client into their web applications. Customers can build custom NICE DCV web clients using custom user interface components and the core NICE DCV streaming features in the SDK, delivering unique experiences tailored to their own use cases.

The NICE DCV web client SDK is designed to be used in conjunction with NICE DCV software. Visit the NICE DCV documentation page to learn more and the NICE DCV download page to get started.

» AWS Transfer Family is now FedRAMP compliant

Posted On: Dec 21, 2021

AWS Transfer Family is now authorized as FedRAMP Moderate in US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon), and as FedRAMP High in AWS GovCloud (US-West) and AWS GovCloud (US-East).

AWS Transfer Family provides fully managed file transfers over SFTP, FTPS, and FTP for Amazon S3 and Amazon EFS. The Federal Risk and Authorization Management Program (FedRAMP) is a US government-wide program that delivers a standard approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP uses the National Institute of Standards and Technology (NIST) Special Publication 800 series and requires cloud service providers to receive an independent security assessment conducted by a third-party assessment organization (3PAO) to ensure that authorizations are compliant with the Federal Information Security Management Act (FISMA). US Federal agencies and commercial customers working with the US Federal government can now utilize AWS Transfer Family to run sensitive and highly regulated file transfer workloads.

For more information about FedRAMP compliance for AWS Transfer Family, visit the compliance page. To learn more about AWS Transfer Family’s security and compliance, visit the documentation.

» EC2 Image Builder adds console support for custom image creation from on-premises images

Posted On: Dec 21, 2021

Customers can now use EC2 Image Builder to build custom images from their on-premises images in a single console-based experience. This capability makes it easier for customers to incorporate on-premises images stored in S3 (in OVA, VHD, VHDX, VMDK, and raw formats) into EC2 Image Builder pipelines. Those customers can now leverage existing EC2 Image Builder capabilities, such as process automation, build security, and image distribution, to build images via an intuitive console.

You can easily bring in on-premises images in S3 as the base image in your image build pipeline and use EC2 Image Builder workflows to create and distribute compliant output images. You can also quickly evaluate image import errors in the EC2 Image Builder console and reduce debugging effort in the image build process.

Get started from the EC2 Image Builder console, CLI, API, AWS CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation. You can find information about on-premises image support in EC2 Image Builder on the feature documentation page.

You can also learn about upcoming EC2 Image Builder features on the public roadmap.

» Support for Fujitsu QoS protocol now available in AWS Elemental MediaConnect

Posted On: Dec 21, 2021

Starting today, AWS Elemental MediaConnect supports the Fujitsu Quality of Service (QoS) protocol. Fujitsu QoS is one of several transport protocols supported in MediaConnect, a list that includes Zixi, Reliable Internet Stream Transport (RIST), Secure Reliable Transport (SRT), and Real-Time Transport Protocol (RTP). Because MediaConnect translates between protocols, you can build a variety of reliable live video transport applications running inside and outside of AWS.

AWS Elemental MediaConnect is a reliable, secure, and flexible transport service for live video that enables broadcasters and content owners to build live video workflows and securely share live content with partners and customers. MediaConnect helps customers who run 24x7 TV channels or stream live events transport high-bitrate live video streams into, through, and out of the AWS Cloud in a fraction of the time and cost of satellite or fiber services. MediaConnect can function as a standalone service or within a larger video workflow that includes other AWS services, such as AWS Elemental Media Services.

Visit the AWS Region Table for a full list of AWS Regions where MediaConnect is available. To learn more, visit the MediaConnect overview page.

» AWS Cost Management now supports hourly granularity in Savings Plans Utilization and Coverage reports

Posted On: Dec 21, 2021

Starting today, customers can track Savings Plans Utilization and Coverage with hourly granularity in the AWS Cost Management Console. Savings Plans is a flexible pricing model that offers savings of up to 72% on your Amazon EC2, AWS Lambda, and Amazon ECS with AWS Fargate usage, in exchange for a commitment to a consistent amount of compute usage (measured in $/hour) for a 1- or 3-year term. Savings Plans discounts are calculated and applied on an hourly basis. Hourly granularity in the Savings Plans Utilization report helps you visualize and understand how well you are using your existing Savings Plans subscriptions each hour. Similarly, hourly granularity in the Savings Plans Coverage report helps you visualize and understand how much of your total Savings Plans-eligible spend is covered by Savings Plans each hour.

With hourly granularity in Savings Plans Utilization and Coverage reports, you can determine whether you are utilizing more resources than your Savings Plans commitments in one or many hours, resulting in charges at the on-demand rate and low Savings Plans coverage. Similarly, you can determine whether you are underutilizing your Savings Plans commitments in one or many hours, resulting in low Savings Plans utilization. With hourly utilization and coverage metrics, you can pinpoint exactly when your usage was more or less than your Savings Plans commitment. This helps you understand your Savings Plans usage pattern, why you are being charged on-demand rates, and when you are not fully utilizing your Savings Plans commitment.
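The two metrics described above reduce to simple ratios; the sketch below states them explicitly. The function names and example figures are illustrative, not part of the Cost Explorer API.

```python
def utilization_pct(used_commitment, total_commitment):
    """Share of the hourly Savings Plans commitment actually used."""
    return 100.0 * used_commitment / total_commitment

def coverage_pct(covered_spend, eligible_spend):
    """Share of Savings Plans-eligible spend covered by Savings Plans."""
    return 100.0 * covered_spend / eligible_spend

# e.g. a $1.00/hour commitment of which $0.50 was used in a given hour:
# utilization_pct(0.50, 1.00) -> 50.0
```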

Hourly granularity in Savings Plans Utilization and Coverage is available to all customers who have enabled hourly granularity through AWS Cost Explorer. Hourly granularity is a paid feature in Cost Explorer. If you have enabled hourly granularity in Cost Explorer, you won’t be charged more to use hourly granularity in Savings Plans Utilization and Coverage. Once hourly granularity is enabled, you can view hourly data for the past 14 days through the Savings Plans Utilization and Savings Plans Coverage reports in the Cost Management Console or through the Cost Explorer API.

To learn more about Savings Plans Utilization and Coverage reports, please refer to the Savings Plans user guide.

» Amazon Translate announces profanity masking

Posted On: Dec 20, 2021

Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. Starting today, you have the ability to mask commonly understood profane terms and prevent them from appearing in your translations. By default, Amazon Translate chooses clean words for your translation output. In cases where profane words appear in the translated output, you can now choose to mask the profane words and phrases with the grawlix string “?$#@$”. This 5-character sequence is used for each profane word or phrase, regardless of its length or number of characters.
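The masking behavior can be illustrated locally. The sketch below is my own re-implementation of the described substitution, not Amazon Translate's code; the `Settings={"Profanity": "MASK"}` parameter mentioned in the comment reflects my understanding of how the option is passed to the TranslateText API.

```python
import re

GRAWLIX = "?$#@$"  # the 5-character mask described above

def mask_profanity(text, profane_terms):
    """Local illustration of profanity masking: each listed term is replaced
    by the grawlix string regardless of the term's length. Amazon Translate
    applies this server-side when the request includes
    Settings={"Profanity": "MASK"}."""
    for term in profane_terms:
        text = re.sub(re.escape(term), GRAWLIX, text, flags=re.IGNORECASE)
    return text
```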

Profanity masking can be applied to both real-time and asynchronous batch translation jobs in all commercial AWS regions where Amazon Translate is available. To learn more, please read the Amazon Translate documentation on profanity masking.

» Amazon Connect launches AWS CloudFormation support for contact flow and contact flow module resources

Posted On: Dec 20, 2021

Amazon Connect now supports AWS CloudFormation for two new resource types: contact flow and contact flow module. You can now use AWS CloudFormation templates to help you deploy these Amazon Connect resources—along with the rest of your AWS infrastructure—in a secure, efficient, and repeatable way. Additionally, you can use these templates to maintain consistency across Amazon Connect instances. For more information, see Amazon Connect Resource Type Reference in the AWS CloudFormation User Guide.
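A minimal template sketch follows, assuming the `AWS::Connect::ContactFlow` resource type with `InstanceArn`, `Name`, `Type`, and `Content` properties; the instance ARN and the flow JSON are placeholders, and the real flow definition would be exported from the Connect console.

```yaml
Resources:
  SampleContactFlow:
    Type: AWS::Connect::ContactFlow
    Properties:
      # Placeholder ARN of the target Amazon Connect instance
      InstanceArn: arn:aws:connect:us-east-1:123456789012:instance/EXAMPLE-ID
      Name: sample-inbound-flow
      Type: CONTACT_FLOW
      # Content holds the flow's JSON definition (placeholder shown)
      Content: '{"Version": "2019-10-30", "StartAction": "...", "Actions": []}'
```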

AWS CloudFormation support for Amazon Connect contact flow and contact flow module resources is available in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact center as a service solution in the cloud, please visit the Amazon Connect website.

» AWS Trusted Advisor adds three optimization checks for Microsoft SQL Server on Amazon EC2

Posted On: Dec 20, 2021

AWS Trusted Advisor now supports new recommendations that help you simplify your SQL Server optimization on Amazon EC2. The checks inspect your SQL Server workloads and automatically list the SQL Server instances that need optimization. You can then take the recommended actions to reduce costs and improve security. You can find the details of the three checks below.

1. Amazon EC2 Instances with Microsoft SQL Server End of Support - Checks the SQL Server versions for Amazon EC2 instances and alerts you if the versions are near or have reached the end of support. For example, SQL Server 2012 Extended Support will end on July 12, 2022. You can find the flexible migration and upgrade options on AWS through the check’s recommendations.
2. Amazon EC2 Instances Over-provisioned for Microsoft SQL Server - Checks your Amazon EC2 instances that are running SQL Server and alerts you if an instance exceeds the SQL Server software vCPU limit. For example, an instance with SQL Server Standard edition can use up to 48 vCPUs, and an instance with SQL Server Web edition can use up to 32 vCPUs.
3. Amazon EC2 Instances Consolidation for Microsoft SQL Server - Checks your Amazon EC2 instances and alerts you if your instance has less than the minimum number of SQL Server licenses. You can consolidate smaller SQL Server instances to help lower costs.
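The over-provisioning test in check 2 can be sketched as a few lines of local logic. The vCPU limits come from the examples in the announcement; the function itself is an illustration, not Trusted Advisor's actual implementation.

```python
# Per-edition SQL Server software vCPU limits cited in the announcement.
SQL_SERVER_VCPU_LIMITS = {"Standard": 48, "Web": 32}

def is_overprovisioned(edition, instance_vcpus):
    """Return True when an instance has more vCPUs than its SQL Server
    edition can use (the condition check 2 alerts on)."""
    limit = SQL_SERVER_VCPU_LIMITS.get(edition)
    return limit is not None and instance_vcpus > limit
```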

AWS Trusted Advisor provides recommendations that help you follow AWS best practices. The new SQL Server checks are available with the AWS Business Support and AWS Enterprise Support plans. Customers can view them in the AWS Trusted Advisor Console or access them through the AWS Support API. To learn more about setting up alarms using Amazon CloudWatch, see Creating Trusted Advisor alarms using CloudWatch. For more information, visit the AWS Trusted Advisor webpage and check reference documentation.

» AWS Well-Architected Tool adds four new Trusted Advisor checks

Posted On: Dec 20, 2021

AWS Well-Architected now supports four new AWS Trusted Advisor checks, providing a unified home of best practice recommendations to identify the most impactful risks and take action to mitigate them. The new checks are:

1. AWS Well-Architected high risk issues for cost optimization
2. AWS Well-Architected high risk issues for performance efficiency
3. AWS Well-Architected high risk issues for security
4. AWS Well-Architected high risk issues for reliability

Customers use Well-Architected to review their workloads against AWS best practices. As an output of the review, Well-Architected provides customers with a list of high and medium risk issues based on best practices defined by the Well-Architected Framework for each pillar. The Trusted Advisor checks sourced from Well-Architected contain an aggregated count of the High Risk Issues (HRIs) discovered per pillar per workload, based on the self-assessment that customers performed during the Well-Architected review. AWS customers can use these checks to identify and prioritize workloads with high HRI counts, select the workloads that are critical to them, and dig deeper into specific HRIs. The checks include timestamps for the review start date and most recent review date, as well as the workload type, to distinguish production from non-production workloads.

The new checks are available to view in the AWS Trusted Advisor Console and accessible via the AWS Support API. Customers can set up alerts based on the results of Trusted Advisor checks. To learn more about setting up alarms using Amazon CloudWatch, see Creating Trusted Advisor alarms using CloudWatch. For a full set of Trusted Advisor best practice checks, see the AWS Trusted Advisor best practice checklist. See the AWS Well-Architected Framework to learn more.

» AWS DataSync can now copy data to and from Amazon FSx for Lustre

Posted On: Dec 20, 2021

AWS DataSync now supports copying data to and from Amazon FSx for Lustre, a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Using DataSync, you can quickly and securely perform data movement tasks such as moving data from one FSx for Lustre file system to another, migrating your on-premises data to your FSx for Lustre file system, or copying data between your FSx for Lustre file system and other AWS Storage services such as Amazon S3, Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server. You can also use DataSync for ongoing data transfers between on-premises storage and AWS for processing.
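To use an FSx for Lustre file system as a DataSync source or destination, you register it as a location. The dictionary below sketches the shape of a CreateLocationFsxLustre request (for example, boto3's `datasync` client `create_location_fsx_lustre(**params)`); all ARNs and the subdirectory are hypothetical placeholders.

```python
# Hypothetical CreateLocationFsxLustre request parameters.
params = {
    "FsxFilesystemArn": (
        "arn:aws:fsx:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
    ),
    "SecurityGroupArns": [
        "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0"
    ],
    "Subdirectory": "/dataset",  # optional path within the file system
}
```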

    DataSync uses a purpose-built network protocol and scale-out architecture to accelerate data transfer between storage systems and AWS services. It handles moving files and objects, scheduling data transfers, monitoring the progress of transfers, encryption of data, verification of data transfers, and notification of issues. DataSync integrates with Amazon CloudWatch to provide performance metrics, logging, and events. As a fully-managed service, DataSync securely and seamlessly connects to your Amazon FSx for Lustre file system, making it easy for you to move millions of files and petabytes of data without the need for deploying or managing infrastructure in the cloud. In addition, DataSync can help speed up critical hybrid cloud storage workflows in industries that need to move active files into AWS quickly. This includes machine learning in life sciences, video production in media and entertainment, big data analytics in financial services, and seismic research in oil and gas.

    This new capability can be used in all regions where AWS DataSync and Amazon FSx for Lustre are available. Learn more by reading the DataSync User Guide and DataSync website. Sign in to the AWS DataSync console to get started.
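    As a minimal sketch of the workflow described above, the following builds the request payloads for registering an FSx for Lustre file system as a DataSync location and creating a transfer task. All ARNs, names, and the subdirectory are placeholders; with boto3 each dict would go to the corresponding DataSync call.

```python
# Sketch: DataSync copy between two FSx for Lustre file systems.
# Register each file system as a location (datasync.create_location_fsx_lustre),
# then create a task between the returned location ARNs (datasync.create_task).
# All ARNs below are placeholders.

source_location = {
    "FsxFilesystemArn": "arn:aws:fsx:us-east-1:111122223333:file-system/fs-00000000000000001",
    "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-00000001"],
    "Subdirectory": "/scratch",  # copy only this path of the file system
}

task_params = {
    # Location ARNs are returned by the two CreateLocationFsxLustre calls.
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-00000000000000001",
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111122223333:location/loc-00000000000000002",
    "Name": "lustre-to-lustre-copy",                       # hypothetical task name
    "Options": {"VerifyMode": "ONLY_FILES_TRANSFERRED"},   # verify data after transfer
}
```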

    » Amazon Chime SDK media capture pipelines supports Amazon S3 server side encryption with AWS Key Management Service

    Posted On: Dec 20, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, screen share, and messaging capabilities to their web or mobile applications. With media capture pipelines, developers can capture the contents of their Amazon Chime SDK meetings to the Amazon Simple Storage Service (Amazon S3) bucket of their choice. Developers can now use media capture pipelines with Amazon S3 buckets that use Server-Side Encryption with AWS Key Management Service (SSE-KMS) and customer managed keys, helping them meet their encryption requirements.

    Using SSE-KMS to protect your media capture artifacts requires no code or application changes to encrypt your data. On the new or existing Amazon S3 bucket used to store media capture artifacts, simply configure bucket-level server-side encryption with a customer managed key (CMK) and grant the Amazon Chime SDK service the proper permissions in the CMK’s key policy.

    To learn more about the Amazon Chime SDK and media capture pipeline with SSE-KMS, review the following resources:

    * Amazon Chime SDK
    * Using SSE-KMS to protect your media capture artifacts in the Amazon Chime SDK Developer Guide
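    As a sketch of the key policy permissions described above, the following builds a CMK key policy statement for the Amazon Chime SDK. The service principal, Sid, and action list are assumptions for illustration; confirm the exact values in the Developer Guide page linked above.

```python
import json

# Sketch: a KMS key policy statement granting the Amazon Chime SDK service
# use of a customer managed key for media capture artifacts. The service
# principal and actions are assumptions -- confirm them in the Amazon Chime
# SDK Developer Guide before use.

chime_statement = {
    "Sid": "AllowChimeMediaCapture",                   # hypothetical Sid
    "Effect": "Allow",
    "Principal": {"Service": "chime.amazonaws.com"},   # assumed service principal
    "Action": ["kms:GenerateDataKey", "kms:Decrypt"],  # assumed minimum actions
    "Resource": "*",
}

policy = {"Version": "2012-10-17", "Statement": [chime_statement]}
policy_json = json.dumps(policy, indent=2)  # attach via the KMS key policy
```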

    » Amazon Detective simplifies account management with the support for AWS Organizations

    Posted On: Dec 20, 2021

    Amazon Detective has added support for AWS Organizations to simplify account management for security operations and investigations across all existing and future accounts in an organization. With this launch, new and existing Detective customers can onboard and centrally manage the Detective graph database for up to 1,200 AWS accounts. This support is available today in all Detective supported AWS Regions. To learn more, see the Amazon Detective Administration Guide.

    To get started, the organization management account can designate any member account as the Detective administrator. Detective recognizes when you’ve designated an account to administer other AWS security services such as Amazon GuardDuty or AWS Security Hub, and recommends that you choose that account as the administrator account for Detective. The administrator account enables organization accounts as member accounts in Detective. The administrator account can then centrally conduct security investigations across the organization. Existing Detective customers can also transition to this feature without disrupting their security operations. See the Detective Administration Guide for instructions.

    AWS Organizations helps you to centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new accounts and allocate resources, simplify billing by setting up a single payment method for all of your accounts, create groups of accounts to organize your workflows, and apply policies to these groups for governance. Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues. It automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.

    You can enable your 30-day free trial of Amazon Detective with a single click in the AWS Management console. See the AWS Regions page for all the Regions where Detective is available. To learn more, see the Detective documentation. To start your 30-day free trial, see Amazon Detective Free Trial.
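    The delegation flow described above can be sketched as two API payloads: the management account designates a Detective administrator, and that administrator then enables member accounts. Account IDs, the graph ARN, and the email address below are placeholders.

```python
# Sketch: Detective + AWS Organizations delegation. From the organization
# management account: detective.enable_organization_admin_account(**delegate_params).
# From the delegated administrator: detective.create_members(**member_params).
# All identifiers are placeholders.

delegate_params = {"AccountId": "111122223333"}  # member account to become Detective admin

member_params = {
    "GraphArn": "arn:aws:detective:us-east-1:111122223333:graph:abcd1234",  # placeholder
    "Accounts": [
        {"AccountId": "444455556666", "EmailAddress": "security@example.com"},  # placeholder
    ],
}
```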

    » AWS Ground Station Launches a New Antenna Location in Punta Arenas in Preview

    Posted On: Dec 16, 2021

    Amazon Web Services (AWS) announces expansion of AWS Ground Station to the South America (São Paulo) Region with an AWS Ground Station antenna location in Punta Arenas, Chile in Preview. This is the 10th AWS Ground Station antenna location connected to the AWS Global Network. AWS Ground Station is a fully managed service that lets you control satellite communications, process satellite data, and scale your satellite operations. Global expansion to Punta Arenas now enables satellite owners and operators to connect with their satellites and process their space workloads more frequently. An additional AWS Ground Station location in the Southern Hemisphere reduces the time between contacts for Low-Earth Orbit satellites. Customers who operate from this region now have access to even lower latency processing capabilities. Governments, businesses, and universities can benefit from this more timely satellite data to make precise, data-driven decisions.

    With AWS Ground Station, you pay only for the antenna time that you use. Cross-region data delivery is included in our pricing, enabling customers to either stream satellite data from any of our antennas to Amazon EC2 for real-time processing or instead directly store data in Amazon S3. Additionally, customers can easily integrate their space workloads with other AWS services in near real-time using Amazon’s low-latency, high-bandwidth global network. For example, customers who downlink terabytes of data daily can easily access AWS Services such as Amazon SageMaker to quickly derive useful information. 

    Customers can transmit and receive data using AWS Ground Station antennas in the following locations: US (Oregon), US (Ohio), Middle East (Bahrain), Europe (Stockholm), Asia Pacific (Sydney), Europe (Ireland), Africa (Cape Town), US (Hawaii), Asia Pacific (Seoul), and South America (Punta Arenas). Customers can deliver data and configure their contacts with the AWS Ground Station console in the following regions: US West (Oregon), US East (Ohio), Middle East (Bahrain), Europe (Stockholm), Asia Pacific (Sydney), Europe (Ireland), Africa (Cape Town), US East (N. Virginia), Europe (Frankfurt), Asia Pacific (Seoul), and South America (São Paulo). More regions and antenna locations coming soon!

    To learn more about AWS Ground Station, visit here. To get started with AWS Ground Station, visit the AWS Management console here.

    » Amazon Lex launches support for Portuguese, Brazilian Portuguese, and Mandarin Chinese

    Posted On: Dec 16, 2021

    Today, Amazon Lex announces language support for Portuguese, Brazilian Portuguese, and Mandarin Chinese. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides deep learning powered automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions. With these new languages, you can build and expand your conversational experiences to better understand and engage your customer base.

    Amazon Lex can be applied to a diverse set of use cases such as virtual agents, interactive voice response systems, self-service chatbots, or application bots. Language support for Portuguese, Brazilian Portuguese, and Mandarin Chinese is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation page.

    » Amazon Nimble Studio adds new features to support Linux, Usage Based Licensing, and Los Angeles Local Zone

    Posted On: Dec 14, 2021

    Starting today, Amazon Nimble Studio has added new features for customers deploying or updating their cloud-based studios. With additional support for Usage Based Licensing (UBL) from AWS Thinkbox Deadline, deeper Linux integration, and the Los Angeles Local Zone, Amazon Nimble Studio provides customers added functionality when deploying their cloud-based content creation studio.

    These updates provide customers the ability to deploy infrastructure necessary to use UBL from the AWS Thinkbox Marketplace directly from StudioBuilder. Customers will be able to purchase render licenses for popular digital content creation (DCC) software applications and consume them for Deadline-based render tasks. Support for multiple instance types used for rendering on the farm can now be set in StudioBuilder.

    Additionally, Linux support has been added to StudioBuilder, enabling customers to deploy the necessary infrastructure to utilize a Linux operating system for content creation. This Linux support makes it easier for customers to deploy FSx for Lustre and the infrastructure needed to support home directories and user profiles when using Linux workstations. These updates can now be deployed into the Los Angeles Local Zone, giving users lower latency when creating content on virtual workstations and storage.

    The UBL and Linux support updates are now available in all regions where Nimble Studio is available, which includes US East (N. Virginia), US West (Oregon), Canada (Central), Europe (London), Asia Pacific (Sydney), and Los Angeles (Local Zone). To learn more, visit the Nimble Studio documentation page.

    » AWS Direct Connect announces two new locations in Indonesia

    Posted On: Dec 14, 2021

    Today, AWS announced the opening of two new Direct Connect locations in Jakarta, Indonesia. AWS customers in Indonesia can now establish dedicated network connections from their Indonesia premises to AWS to gain high-performance, secure access to all other AWS Regions (except Regions in China). With the announcement of Direct Connect locations in Indonesia, the Direct Connect Management Console and related documentation have been localized into Bahasa Indonesia for Indonesian customers.

    AWS Direct Connect enables you to create private connections between AWS and your data center, office, or colocation environment. These private connections can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than connections over the public internet. 

    Please see the AWS Direct Connect website for a list of current Direct Connect locations, associated AWS Regions, and pricing. AWS Direct Connect can be used to access all global AWS Regions (except China), as shown in our Region table. Customers in Indonesia can get dedicated 1 Gbps and 10 Gbps connections or work with an AWS Direct Connect delivery partner for hosted connections with bandwidth ranging from 50 Mbps to 10 Gbps.

    To get started with planning your connectivity to AWS, visit our Getting Started page. Sign in to the AWS Management Console to order AWS Direct Connect today!

    » Amazon FinSpace now provides Quick Setup with pre-configured data catalog, sample data, and improved data loading

    Posted On: Dec 13, 2021

    Amazon FinSpace now makes it even easier to start analyzing data with newly included financial services sample data, catalog configurations with taxonomy and metadata, capabilities to load your data into Amazon FinSpace, and the ability to run multiple Spark jobs in parallel.

    A preinstalled capital markets data bundle includes a sampling of data from Exchange Data International (EDI), the U.S. Department of the Treasury, and Algoseek. Also included is a six-month sample of equity trade and quote (TAQ) data for 14 symbols, which you can use to evaluate the FinSpace time series framework or use with your own custom analytics. The bundle is automatically included for all new FinSpace environments.

    Loading your own data is now easier with the FinSpace web application’s improved data format detection for comma-separated value (.csv) files, which are commonly used to store financial industry data sets. Once the data is loaded, a new option on the home page lets you view recently loaded datasets with a single click. After loading your data, you can now run multiple Notebooks on a single Spark cluster at the same time, allowing you to analyze more datasets in parallel.

    Amazon FinSpace is a fully managed analytics service for financial services customers that makes it easy for analysts to access and analyze data from multiple locations such as internal data stores like portfolio, actuarial, and risk management systems as well as petabytes of data from third-party data sources, such as historical securities prices from stock exchanges. With Amazon FinSpace, customers can store, catalog, and prepare data at scale, reducing the time it takes to gain insights from months to minutes. To get started with Amazon FinSpace, please see the Amazon FinSpace product page and service documentation.

    » Amazon EC2 C6i instances are now available in 10 additional regions

    Posted On: Dec 13, 2021

    Starting today, Amazon EC2 C6i instances are available in these additional AWS Regions: US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (London), Europe (Paris), and South America (São Paulo). C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz. They offer up to 15% better compute price performance over C5 instances for a wide variety of workloads, as well as always-on memory encryption using Intel Total Memory Encryption (TME). Designed for compute-intensive workloads, C6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are an ideal fit for compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

    To meet customer demands for increased scalability, C6i instances provide two new sizes (c6i.32xlarge and metal) with 128 vCPUs and 256 GiB of memory, 33% more than the largest C5 instance. They also provide up to 9% higher memory bandwidth per vCPU compared to C5 instances. C6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS), twice that of C5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge size, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for C6i, see this article.

    These instances are generally available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). C6i instances are available in 10 sizes with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs, including a bare metal option. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the C6i instances page.
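    As a sketch of launching the new largest size with Elastic Fabric Adapter, the following builds RunInstances parameters. The AMI and subnet IDs are placeholders; with boto3 this dict would be passed to ec2.run_instances.

```python
# Sketch: RunInstances parameters for the new c6i.32xlarge size with an EFA
# network interface, as described above. IDs are placeholders.

run_params = {
    "InstanceType": "c6i.32xlarge",        # new size: 128 vCPUs, 256 GiB memory
    "ImageId": "ami-00000000000000001",    # placeholder AMI
    "MinCount": 1,
    "MaxCount": 1,
    "NetworkInterfaces": [
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",        # Elastic Fabric Adapter (32xlarge only)
            "SubnetId": "subnet-00000001", # placeholder subnet
        }
    ],
}
```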

    » Amazon Lookout for Vision now supports visual inspection of product defects at the edge

    Posted On: Dec 13, 2021

    Amazon Lookout for Vision is excited to preview support for anomaly detection at the edge. Starting today, you can use your trained Amazon Lookout for Vision models on the edge by deploying these models to a hardware device of your choice. Your trained models can be deployed on any NVIDIA Jetson edge appliance or x86 compute platform running Linux with an NVIDIA GPU accelerator. You can use AWS IoT Greengrass to deploy and manage your edge compatible customized models on your fleet of devices. AWS IoT Greengrass is an open-source edge runtime and cloud service for building, deploying, and managing device software.

    Earlier in the year, AWS launched Amazon Lookout for Vision, a machine learning (ML) service that spots defects and anomalies in visual representations of manufactured products using computer vision (CV), allowing you to automate quality inspection. You can easily create an ML model to spot anomalies from your live production line with as few as 30 images for the process you want to visually inspect - with no machine learning experience required. You can use Amazon Lookout for Vision’s cloud APIs to quickly and accurately detect anomalies like dents, cracks, and scratches.

    Now, in addition to detecting anomalies in the cloud, you can also use your trained Amazon Lookout for Vision models on the edge to detect anomalies. You deploy the same Amazon Lookout for Vision models that you've trained in the cloud onto AWS IoT Greengrass V2 compatible edge devices. You then use your deployed model to perform anomaly detection on premises without having to stream data continuously to the cloud. This allows you to minimize bandwidth costs and detect anomalies locally with real time image analysis.

    With Amazon Lookout for Vision and AWS IoT Greengrass, you can automate visual inspection with CV for processes like quality control and defect assessment - all on the edge and in real time. You can proactively identify problems such as part damage (like dents, scratches, or poor welding), missing product components, or defects with repeating patterns, on the production line itself - saving you time and money! Customers like Tyson Foods and Baxter International Inc. are discovering the power of Amazon Lookout for Vision to increase quality and reduce operational costs by automating visual inspection.

    Amazon Lookout for Vision is available directly via the AWS console as well as through supporting partners to help customers embed computer vision into existing operating systems within their facilities. Amazon Lookout for Vision on Edge is available in preview today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Seoul), with availability in additional regions in the coming months. To learn more and get started, visit the Amazon Lookout for Vision product page.
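    For reference, the cloud-side DetectAnomalies request mentioned above looks like the sketch below (on the edge, the same trained model is deployed via AWS IoT Greengrass V2 and invoked locally instead). The project name and model version are placeholders; with boto3 the image bytes would be passed as the Body argument to lookoutvision.detect_anomalies.

```python
# Sketch: parameters for the Lookout for Vision DetectAnomalies cloud API.
# The project/model identifiers are placeholders; the image bytes themselves
# go in a "Body" argument alongside these fields.

detect_params = {
    "ProjectName": "widget-inspection",  # hypothetical project name
    "ModelVersion": "1",
    "ContentType": "image/jpeg",         # format of the image in Body
}
```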

    » Amazon EC2 M6i instances are now available in 2 additional regions

    Posted On: Dec 13, 2021

    Starting today, Amazon EC2 M6i instances are available in additional AWS Regions Canada (Central) and Europe (London). Designed to provide a balance of compute, memory, storage and network resources, M6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are SAP-Certified and are ideal for workloads such as web and application servers, back-end servers supporting enterprise applications (e.g. Microsoft Exchange Server and SharePoint Server, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), gaming servers, caching fleets, as well as for application development environments.

    M6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz. They offer up to 15% better price performance over M5 instances, as well as always-on memory encryption using Intel Total Memory Encryption (TME). To meet customer demands for increased scalability, M6i instances provide two new instance sizes (32xlarge and metal) with 128 vCPUs and 512 GiB of memory, 33% more than the largest M5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to M5 instances. M6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS), 2x that of M5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge and metal sizes, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for M6i, see this article.

    With this regional expansion, M6i instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). M6i instances are available in 10 sizes with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs including the bare metal option. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances.

    To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M6i instances page.

    » Amazon EMR now supports using multiple custom AMIs when you mix AWS Graviton2-based instances with non-Graviton2 instances in a single EMR cluster

    Posted On: Dec 13, 2021

    Amazon EMR now supports using multiple custom Amazon Machine Images (AMIs) when you mix Arm-based AWS Graviton2 instances with non-Graviton2 instances in a single cluster. This capability allows you to diversify across more instance types when using custom AMIs, helping improve your access to EC2 capacity for large clusters. Prior to this release, you could still mix multiple instance types within a cluster, but could not do so when using custom AMIs. Custom AMIs enable you to preload additional software and libraries required by your applications, customize cluster and node configurations, and encrypt the EBS root device volumes of the EC2 instances in your cluster.

    Previously, to use a custom AMI you needed to ensure that all the EC2 instances within the cluster had the same underlying architecture. Now, you can specify custom AMIs for each instance type in an EMR instance group or EMR instance fleet in your cluster. Therefore, you can mix EC2 instances with different architectures by using multiple custom AMIs in the same cluster.

    This feature is supported on Amazon EMR release 5.7.0 and later, and in all regions where Amazon EMR is available. To learn more, read Using a custom AMI in Amazon EMR documentation.
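    As a sketch of the per-instance-type AMI configuration described above, the following builds an instance fleet mixing Graviton2 (Arm) and x86 instance types, each with its own custom AMI. The AMI IDs and fleet name are placeholders; with boto3 this would go in the Instances.InstanceFleets field of emr.run_job_flow.

```python
# Sketch: an EMR instance fleet mixing architectures, with a custom AMI
# specified per instance type config. AMI IDs are placeholders.

core_fleet = {
    "Name": "core-fleet",                    # hypothetical fleet name
    "InstanceFleetType": "CORE",
    "TargetOnDemandCapacity": 4,
    "InstanceTypeConfigs": [
        {"InstanceType": "m6g.2xlarge", "CustomAmiId": "ami-11111111111111111"},  # Graviton2 (Arm)
        {"InstanceType": "m5.2xlarge",  "CustomAmiId": "ami-22222222222222222"},  # x86
    ],
}
```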

    » Amazon EC2 R6i instances are now available in 8 additional regions

    Posted On: Dec 13, 2021

    Starting today, Amazon EC2 R6i instances are available in additional AWS Regions: US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (London), and Europe (Paris). Designed for memory-intensive workloads, R6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. R6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz. They offer up to 15% better compute price performance over R5 instances, as well as always-on memory encryption using Intel Total Memory Encryption (TME). These instances are SAP-Certified and are ideal for workloads such as SQL and NoSQL databases, distributed web-scale in-memory caches like Memcached and Redis, in-memory databases like SAP HANA, and real-time big data analytics like Hadoop and Spark clusters.

    To meet customer demands for increased scalability, R6i instances provide two new sizes (32xlarge and metal) with 128 vCPUs and 1,024 GiB of memory, 33% more than the largest R5 instance. They also provide up to 20% higher memory bandwidth per vCPU compared to R5 instances. R6i instances give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS), 2x that of R5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge and metal sizes, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for R6i, see this article.

    With this regional expansion, R6i instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (London), and Europe (Paris). R6i instances are available in 10 sizes with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs including the bare metal option. Customers can purchase the new instances via Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the R6i instances page.

    » Amazon FSx for NetApp ONTAP now supports AWS CloudFormation

    Posted On: Dec 10, 2021

    You can now use AWS CloudFormation templates to quickly deploy solutions that use Amazon FSx for NetApp ONTAP file systems.

    FSx for ONTAP is a storage service that provides the familiar features, performance, and APIs of on-premises NetApp file systems with the agility, scalability, and simplicity of a fully managed AWS service. With a few clicks, you can now use a CloudFormation template to pre-configure and deploy FSx for ONTAP resources like file systems, storage virtual machines, and volumes in a standardized and repeatable way across multiple regions and accounts.

    AWS CloudFormation support for FSx for ONTAP is available now, in all regions where FSx for ONTAP is available. See the Amazon FSx documentation for more information on how to manage your FSx for ONTAP file systems with AWS CloudFormation.
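    As a minimal sketch of what such a template might contain, the following builds a CloudFormation template fragment for an FSx for ONTAP file system as a Python dict. The subnet IDs and capacity values are placeholders, and the property set shown is partial; check the AWS::FSx::FileSystem entry in the CloudFormation resource reference for the full list.

```python
import json

# Sketch: a minimal CloudFormation template for an FSx for ONTAP file system.
# Subnet IDs and capacities are placeholders; see the AWS::FSx::FileSystem
# CloudFormation reference for all supported properties.

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "OntapFileSystem": {
            "Type": "AWS::FSx::FileSystem",
            "Properties": {
                "FileSystemType": "ONTAP",
                "StorageCapacity": 1024,                              # GiB (placeholder)
                "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
                "OntapConfiguration": {
                    "DeploymentType": "MULTI_AZ_1",
                    "ThroughputCapacity": 512,                        # MB/s (placeholder)
                    "PreferredSubnetId": "subnet-aaaa1111",
                },
            },
        }
    },
}
template_body = json.dumps(template, indent=2)  # pass as the stack's TemplateBody
```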

    » Amazon Lex launches support for AWS CloudFormation

    Posted On: Dec 10, 2021

    Amazon Lex now supports AWS CloudFormation, allowing you to create bots and organize Amazon Lex resources using CloudFormation stack templates. Amazon Lex is a service for building conversational interfaces into any application using voice and text. With CloudFormation support, you can easily model resources on Lex V2 APIs - namely Bot, BotVersion, BotAlias, and ResourcePolicy - to provision resources quickly and consistently, and manage them through their lifecycles.

    AWS CloudFormation support for Amazon Lex is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London). To get started with CloudFormation for Amazon Lex, see the CloudFormation user guide. For more information on Amazon Lex, visit Amazon Lex V2 documentation.

    » Amazon FinSpace is now in scope for SOC 1, SOC 2, and SOC 3 compliance

    Posted On: Dec 10, 2021

    You can now use Amazon FinSpace in applications that are subject to System and Organization Controls (SOC) compliance. AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. You can download the AWS SOC reports in AWS Artifact. To learn more, visit AWS Compliance Programs, or you can go to the AWS Services in Scope by Compliance Program webpage to see a full list of services covered by each compliance program.

    Amazon FinSpace is a fully managed analytics service for financial services customers that makes it easy for analysts to access and analyze data from multiple locations such as internal data stores like portfolio, actuarial, and risk management systems as well as petabytes of data from third-party data sources, such as historical securities prices from stock exchanges. With Amazon FinSpace, customers can store, catalog, and prepare data at scale, reducing the time it takes to gain insights from months to minutes. To get started with Amazon FinSpace, please see the Amazon FinSpace product page and service documentation.

    » Amazon Route 53 updates API actions

    Posted On: Dec 9, 2021

    Amazon Route 53 is adding domain-specific API actions: DeleteDomain and ListPrices. Sorting and filtering functions are also being added to the ListDomains API action. The DeleteDomain API action is a function that was previously available only in the AWS Console.

    API actions help streamline operations for customers who programmatically manage their own domains, such as customers who manage multiple domains and occasionally need to delete one. Manually looking these domains up and deleting or adding them through the console can be inefficient at scale. With the addition of DeleteDomain, ListPrices, and expanded ListDomains functionality to the Route 53 API, these changes can be made in seconds. The sorting and filtering support added to ListDomains matches functionality previously available only in the console. By using these sorting and filtering actions, customers don’t need to complicate their code with their own sorting and filtering logic, and the amount of data returned by the API is reduced.

    This functionality is generally available in all AWS regions that support Amazon Route 53.

    To learn more, please see the Amazon Route 53 API Reference, API Actions By Function documentation.
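    As a sketch of the new sorting and filtering support, the following builds parameters for the expanded ListDomains action. The field names follow the API reference; the specific filter and sort values shown are illustrative. With boto3 this dict would be passed to route53domains.list_domains.

```python
# Sketch: ListDomains with the new filtering and sorting parameters.
# The filter value "example" is illustrative.

list_params = {
    "FilterConditions": [
        {"Name": "DomainName", "Operator": "BEGINS_WITH", "Values": ["example"]},
    ],
    "SortCondition": {"Name": "Expiry", "SortOrder": "ASC"},  # soonest-expiring first
    "MaxItems": 20,
}
```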

    » AWS Network Firewall now supports AWS Managed Rules

    Posted On: Dec 9, 2021

    AWS Network Firewall now supports AWS Managed Rules, which are groups of rules based on threat intelligence data, to enable you to stay up to date on the latest security threats without writing and maintaining your own rules.

    AWS Network Firewall features a flexible rules engine enabling you to define firewall rules that give you fine-grained control over network traffic. Starting today, you can enable managed domain list rules to block HTTP/HTTPS traffic to domains identified as low-reputation or that are known or suspected to be associated with malware or botnets. You can select one or more rule groups to use in your AWS Network Firewall policies. For stateful rules, you can choose to block all requests that match managed domain list rules or use the alert action to see which requests match the rules. Each set of managed rule groups counts as a single rule group toward the maximum number of stateful rule groups per firewall policy.

    There is no additional charge for using AWS managed rules for domain lists. You can access the new Managed Rules for AWS Network Firewall using the Amazon VPC Console or the Network Firewall API. This feature is available in all commercial AWS Regions except the AWS GovCloud (US) Regions. AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. AWS Network Firewall is integrated with AWS Firewall Manager to provide you with central visibility and control of your firewall policies across multiple AWS accounts. To get started with AWS Network Firewall, please see the AWS Network Firewall product page and service documentation.
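    As a sketch of attaching a managed rule group to a firewall policy, the following builds a CreateFirewallPolicy payload referencing an AWS managed stateful rule group by ARN. The rule group name in the ARN and the policy name are illustrative, not real identifiers; list the actual managed rule groups with the ListRuleGroups API before referencing one.

```python
# Sketch: a Network Firewall policy referencing an AWS managed domain list
# rule group. The rule group name in the ARN is illustrative -- discover
# real ARNs via the ListRuleGroups API. With boto3:
# network_firewall.create_firewall_policy(**policy_params).

managed_rule_group_arn = (
    "arn:aws:network-firewall:us-east-1:aws-managed:"
    "stateful-rulegroup/ExampleMalwareDomains"  # illustrative name
)

policy_params = {
    "FirewallPolicyName": "managed-domain-blocking",  # hypothetical policy name
    "FirewallPolicy": {
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": managed_rule_group_arn},  # counts as one stateful rule group
        ],
    },
}
```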

    » EBS CSI driver now available in EKS add-ons in preview

    Posted On: Dec 9, 2021

    The Amazon Elastic Block Store (EBS) Container Storage Interface (CSI) driver is now available in Amazon Elastic Kubernetes Service (Amazon EKS) add-ons in preview, enabling you to use the Amazon EKS console, CLI, and API to install and manage the add-on. This release is in addition to existing support for the Amazon VPC CNI networking plugin, CoreDNS and kube-proxy, and makes it easier to define consistent Kubernetes clusters and keep them up to date using Amazon EKS.

    The EBS CSI driver provides a CSI interface used by container orchestrators to manage the lifecycle of Amazon EBS volumes. Availability in EKS add-ons in preview enables a simple experience for attaching persistent storage to an EKS cluster. The EBS CSI driver can now be installed, managed, and updated directly through the EKS console, CLI, and API. You can see available add-ons and compatible versions in the EKS API, select the version of the add-on you want to run on your cluster, and configure key settings such as the IAM role used by the add-on when it runs. Using EKS add-ons you can go from cluster creation to running applications in a single command and easily keep tooling in your cluster up to date.
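    The paragraph above describes selecting a compatible add-on version from those the EKS API reports. As a rough sketch (the dictionary shape below is a simplified illustration, not the actual EKS API response), version selection might look like this:

```python
# Illustrative sketch: pick the newest add-on version compatible with a
# cluster's Kubernetes version. The data shape below is hypothetical and
# simplified relative to the real EKS DescribeAddonVersions output.

def latest_compatible(versions, cluster_version):
    """Return the highest add-on version whose compatibility list
    includes the cluster's Kubernetes version, or None."""
    def key(v):
        # Sort "v1.4.0"-style strings numerically rather than lexically.
        return tuple(int(p) for p in v["addonVersion"].lstrip("v").split("."))
    candidates = [v for v in versions
                  if cluster_version in v["compatibleClusterVersions"]]
    return max(candidates, key=key)["addonVersion"] if candidates else None

versions = [
    {"addonVersion": "v1.2.0", "compatibleClusterVersions": ["1.18", "1.19"]},
    {"addonVersion": "v1.4.0", "compatibleClusterVersions": ["1.20", "1.21"]},
    {"addonVersion": "v1.3.1", "compatibleClusterVersions": ["1.19", "1.20"]},
]
print(latest_compatible(versions, "1.20"))  # v1.4.0
```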

    Amazon EKS supports managing the installation and version of the EBS CSI Driver (preview), CoreDNS, kube-proxy, and the Amazon VPC CNI on clusters running Kubernetes version 1.18 and above. To learn more and get started, visit the Amazon EKS documentation.

    » You can now enable data compression for capacity pool storage in Amazon FSx for NetApp ONTAP file systems

    Posted On: Dec 9, 2021

    Amazon FSx for NetApp ONTAP now supports data compression for data stored within a file system’s capacity pool storage. Combined with FSx for ONTAP’s existing support for data deduplication and compaction, data compression enables you to reduce your storage costs for a wide spectrum of data sets — for example, you can reduce your costs for general-purpose file shares by 65%.

    Until today, data compression was supported for SSD storage and for file system backups, but not for capacity pool storage (capacity pool storage is a fully elastic storage tier that’s cost-optimized for infrequently-accessed data). Starting today, you can enable compression for capacity pool storage—meaning that you can now enable compression for all your data in FSx for ONTAP.

    Support for data compression for capacity pool storage is now available at no additional cost for all new file systems in all regions where Amazon FSx for NetApp ONTAP is available. Customers with existing file systems will get this support during an upcoming weekly maintenance window. For more information, please visit the FSx for ONTAP documentation and the FSx for ONTAP product page.

    » AWS App2Container (A2C) now supports containerization of .NET running on Linux

    Posted On: Dec 9, 2021

    AWS App2Container (A2C) now supports containerization and deployment of .NET applications running on Linux. With this release, customers can use A2C to detect the .NET Core runtime version (.NET Core 3.1, .NET 5, .NET 6) and containerize the application using the corresponding runtime base images. Customers can take advantage of the cost and performance benefits offered by Linux containers. Customers can continue to use A2C to deploy these containerized applications to their choice of container platforms: Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS App Runner.

    AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on-premises or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions.

    To learn more, refer to App2Container technical documentation on supported applications.

    » Amazon Kinesis Data Analytics is now available in the Asia Pacific (Osaka) and Africa (Cape Town) regions.

    Posted On: Dec 9, 2021

    Amazon Kinesis Data Analytics is now available in the Asia Pacific (Osaka) and Africa (Cape Town) regions.

    Amazon Kinesis Data Analytics makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Kinesis Data Analytics reduces the complexity of building and managing Apache Flink applications. Amazon Kinesis Data Analytics for Apache Flink integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors. You can learn more about Amazon Kinesis Data Analytics for Apache Flink and for SQL applications in the service documentation.

    For a list of where Amazon Kinesis Data Analytics is available, please see the AWS Region Table.

    » Amazon FSx for NetApp ONTAP reduces minimum file system throughput capacity to 128 MB/s

    Posted On: Dec 9, 2021

    Amazon FSx for NetApp ONTAP has now reduced the minimum file system throughput capacity from 512 MB/s to 128 MB/s, decreasing the minimum cost of an FSx for ONTAP file system by over 50%.

    An FSx for ONTAP file system’s throughput capacity determines the level of network I/O performance that is supported by its file servers. Lower-throughput file systems enable you to use FSx for ONTAP at a lower cost for workloads that don’t need the highest level of performance, such as general-purpose file shares and development & testing.

    You can create new file systems with a lower throughput capacity in all regions where Amazon FSx for NetApp ONTAP is available. For more information, please visit the FSx for ONTAP documentation and the FSx for ONTAP product page.

    » AWS Launch Wizard now provides guided deployment of Amazon EKS

    Posted On: Dec 8, 2021

    You can now use AWS Launch Wizard to lead you through a best practices deployment of Amazon Elastic Kubernetes Service (Amazon EKS). Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS Availability Zones to help eliminate a single point of failure. AWS Launch Wizard uses AWS Well-Architected Quick Start architectures to guide you through the sizing, configuration, and deployment of an Amazon EKS control plane, connecting worker nodes to the cluster, and configuring a bastion host for cluster admin operations. Additionally, the Launch Wizard deployment provides custom resources that enable you to deploy and manage your Kubernetes applications using AWS CloudFormation by declaring Kubernetes manifests or Helm charts directly in CloudFormation templates.

    AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications. With this launch, it now supports guided, best practices deployments of Microsoft SQL Server, Microsoft Active Directory, Remote Desktop Gateway, Amazon EKS and SAP-based workloads. AWS Launch Wizard is available at no additional charge. You only pay for the AWS resources that are provisioned for running your workload.

    To learn more about using AWS Launch Wizard to accelerate your EKS and other deployments, visit the overview documentation.

    » Amazon Location Service adds metadata to help customers reduce costs

    Posted On: Dec 8, 2021

    Today, Amazon Location Service added metadata for tracking position updates to help developers reduce cost, improve accuracy, and simplify the development of tracking applications. Amazon Location Service Trackers already make it easy for developers to build highly scalable device-tracking applications by enabling them to retrieve the current and historical location of their tracked devices, and automatically evaluate device positions relative to linked areas of interest (geofences). With the new metadata feature, developers can enrich these applications with additional information about each device’s position, for example the speed, direction, or engine temperature of vehicles, by including three user-defined key-value pairs with each position update. They can retrieve this information for a device’s current or historical position directly from the Amazon Location Service Tracker, for example to analyze engine performance, without building additional systems and code to track this data. Developers can also receive this metadata in the Amazon EventBridge Entry and Exit events when tracked devices cross a geofence.

    The new metadata feature also enables developers to reduce the cost of their tracking solutions by including position accuracy with each update, which can then be combined with the Tracker's accuracy-based filtering to help assess whether an update reflects real device movement rather than false position changes caused by imperfect measurement systems. The accuracy of position measurements varies based on, for example, the quality of the GPS fix or the limitations of different measurement schemes such as WiFi or Bluetooth. Developers can now enable accuracy-based filtering so that new position updates are not stored or evaluated against geofences unless the change in position exceeds the limits of the measurement accuracy. For example, if the first position measurement has an accuracy of 5m, and the second measurement has an accuracy of 10m, then the second position update is considered potentially unreliable and filtered out if its position has moved less than 15m. This reduces the cost of implementing a tracking application by reducing noise, spurious storage, and geofence evaluation of unreliable data points. This feature also reduces the effect of jitter caused by inaccurate positioning systems (such as mobile phones in urban canyons), reducing false geofence entry and exit events, and improves the fidelity of map visualization of position updates.
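    The filtering rule in the worked example above (5m and 10m accuracies combining to a 15m threshold) can be sketched as follows; this is an illustration of the described behavior, not Amazon Location's implementation:

```python
# Accuracy-based filtering sketch: a new position update is kept only if the
# device moved at least as far as the combined accuracy of the previous and
# current measurements; smaller movements are treated as measurement noise.

def keep_update(distance_moved_m, prev_accuracy_m, curr_accuracy_m):
    """Return True if the movement exceeds the combined measurement accuracy."""
    return distance_moved_m >= prev_accuracy_m + curr_accuracy_m

# First fix accurate to 5 m, second to 10 m: moves under 15 m are filtered.
print(keep_update(12, 5, 10))  # False (filtered out as noise)
print(keep_update(20, 5, 10))  # True (stored and evaluated against geofences)
```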

    Amazon Location Service is a fully managed service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Customers using the Amazon Location Place API can search for addresses and points of interest data from our high-quality data providers Esri and HERE.

    Amazon Location Service is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney) Region, and Asia Pacific (Tokyo).

    To learn more about Amazon Location Service Trackers, visit the developer guide.

    » AWS IoT Core now supports caching of responses returned by customer’s Custom Authorizer Lambdas when using HTTP connections

    Posted On: Dec 8, 2021

    You can now cache responses returned by your custom authorizer Lambda functions when using the AWS IoT Core Custom Authentication workflow for HTTP connections. Customers can now define a caching duration (refreshAfterInSecs) for responses returned by their custom authorizer Lambda functions when using long-lived HTTP connections. Customers can set refreshAfterInSecs to a value between 5 minutes and 24 hours to reduce custom authorizer Lambda invocations. This enhancement helps customers reduce their custom authorizer Lambda costs and makes the HTTP behavior match that of the other protocols supported by AWS IoT Core.
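    As a rough sketch of how an authorizer Lambda might set the new caching duration, clamped to the allowed 5-minute to 24-hour range (field names follow the announcement; consult the AWS IoT Custom Authentication documentation for the exact response shape):

```python
# Illustrative authorizer-response builder. The response fields here are a
# sketch based on the announcement, not a definitive AWS IoT contract.

MIN_REFRESH = 5 * 60        # 5 minutes, in seconds
MAX_REFRESH = 24 * 60 * 60  # 24 hours, in seconds

def build_response(principal_id, policy_documents, refresh_after_s):
    """Build an authorizer response, clamping the cache duration to the
    documented 5-minute .. 24-hour range."""
    return {
        "isAuthenticated": True,
        "principalId": principal_id,
        "policyDocuments": policy_documents,
        "refreshAfterInSecs": max(MIN_REFRESH, min(MAX_REFRESH, refresh_after_s)),
    }

resp = build_response("device-123", [], 60)  # too short: clamped up
print(resp["refreshAfterInSecs"])  # 300
```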

    AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. With Custom Authentication workflow, AWS IoT Core lets you define custom authorizers so that you can manage your own client authentication and authorization. This is useful when you need to use authentication mechanisms other than the ones that AWS IoT Core natively supports. For more information about the natively supported mechanisms, see Client authentication documentation.

    You can visit the AWS IoT Core Custom Authentication documentation to learn more.

    » Amazon RDS for MariaDB supports new minor versions 10.5.13, 10.4.22, 10.3.32, 10.2.41

    Posted On: Dec 8, 2021

    Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.5.13, 10.4.22, 10.3.32, and 10.2.41. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MariaDB community.

    You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including automatic minor version upgrades, in the Amazon RDS User Guide.

    Amazon RDS for MariaDB makes it easy to set up, operate, and scale MariaDB deployments in the cloud. See Amazon RDS for MariaDB for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

    » AWS Systems Manager announces new features for Session Manager to support maximum session timeout and annotate reason for starting the session

    Posted On: Dec 8, 2021

    Today, AWS Systems Manager announces new features for Session Manager: support for a maximum session timeout and the ability to annotate the reason for starting a session. AWS Systems Manager is the operational hub for AWS, providing a unified user interface to track and resolve operational issues across AWS applications from a central place. AWS Systems Manager Session Manager allows you to manage your EC2 instances, edge devices, and on-premises servers and virtual machines (VMs) using either an interactive browser-based shell or the command line.

    AWS Systems Manager Session Manager now supports specifying a maximum session duration. This helps IT administrators restrict session length, reducing the use of system resources by unattended open sessions. When a session reaches the maximum allotted time, it automatically terminates. When using the console, operators can also see a countdown timer informing them when the session will automatically close due to the maximum session timeout. Session Manager now also allows an operator to provide a reason for starting a session (free-form text up to 256 characters long), which helps IT administrators track the purpose of each session during audits. For more information about these new features, visit the AWS Systems Manager product page and documentation.

    » AWS Launch Wizard now provides guided deployment of Remote Desktop Gateway

    Posted On: Dec 8, 2021

    You can now use AWS Launch Wizard to lead you through a best practices deployment of self-managed Remote Desktop Gateway (RD Gateway) on Amazon EC2. AWS Launch Wizard uses the AWS Well-Architected Framework to guide you through the sizing, configuration, and deployment of RD Gateway on the AWS Cloud, without the need to manually identify and provision individual AWS resources. RD Gateway employs Remote Desktop Protocol (RDP) over HTTPS, which helps establish a secure, encrypted connection between remote users and Amazon EC2 instances running Windows, without needing to configure a virtual private network (VPN). This helps reduce the attack surface on your Windows-based instances while providing a remote administration solution for administrators.

    AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications. With this launch, it now supports Microsoft SQL Server, Active Directory, Remote Desktop Gateway, Amazon Elastic Kubernetes Service (EKS) and SAP-based workloads. AWS Launch Wizard is available at no additional charge. You only pay for the AWS resources that are provisioned for running your workload.

    To learn more about using AWS Launch Wizard to accelerate your Remote Desktop Gateway deployments, visit the overview documentation.

    » Amazon EC2 C5n instances now available in Africa (Cape Town) Region

    Posted On: Dec 8, 2021

    Starting today, Amazon EC2 C5n instances are available in AWS Africa (Cape Town) region.

    Based on the next generation AWS Nitro System, these instances make 100 Gbps networking available to network-bound workloads without requiring customers to use custom drivers or recompile applications. Customers can also take advantage of this improved network performance to accelerate data transfer to and from Amazon S3, reducing data ingestion time for applications and speeding up delivery of results. Workloads on these instances will continue to take advantage of the security, scalability, and reliability of Amazon Virtual Private Cloud (VPC). A wide range of applications, such as high performance computing (HPC), analytics, machine learning, big data, and data lake applications, can benefit from these instances. To learn more, visit the Amazon EC2 C5 instance pages.

    » Amazon DevOps Guru introduces enhanced analysis for Amazon Aurora databases and support for AWS tags as an application boundary

    Posted On: Dec 8, 2021

    Amazon DevOps Guru now supports enhanced analysis for Amazon Aurora databases, a new Machine Learning (ML) powered capability, as part of Amazon DevOps Guru for RDS.

    DevOps Guru for RDS expands upon the existing capabilities of DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries. When an issue occurs, DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent recommendations to help customers quickly resolve the issue.

    DevOps Guru for RDS is offered to customers at no additional charge, as part of the existing price that DevOps Guru charges for RDS resources. DevOps Guru for RDS is available for Amazon Aurora MySQL and PostgreSQL–Compatible Editions in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

    To get started, simply go to the Amazon RDS console and turn on Amazon RDS Performance Insights. Once Performance Insights is enabled, go to the Amazon DevOps Guru console to enable DevOps Guru for your Amazon Aurora resources, other supported resources, or your entire account.

    We are also introducing support for AWS tags which provides you more control over how DevOps Guru analyzes your resources. DevOps Guru supports AWS tags (key/value pairs) when created with a specific "devops-guru" prefix as part of the key. Resources that have the same tag applied will be grouped together as part of an application that DevOps Guru analyzes. DevOps Guru will then automatically collect and analyze data from these resources in this application such as metrics, logs, and events and identify behaviors that deviate from normal operating patterns. For example, if you applied a specific tag to all your Amazon Aurora DB instances, they would then be grouped together and you would be notified when DevOps Guru sees issues such as resource over-utilization and misbehavior of SQL queries. For more information about using AWS tags with DevOps Guru, please visit the topic "Working with resource tags" in DevOps Guru documentation.
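    The grouping behavior described above can be sketched as follows; the resource ARNs and tag values are hypothetical, and the "devops-guru" key prefix is the convention named in the announcement:

```python
# Sketch: resources sharing the same tag (with a key beginning with the
# "devops-guru" prefix) form one application boundary for analysis.

from collections import defaultdict

def group_by_devops_guru_tag(resources):
    """Group resource ARNs by (key, value) for tags whose key starts
    with the 'devops-guru' prefix."""
    apps = defaultdict(list)
    for arn, tags in resources.items():
        for key, value in tags.items():
            if key.startswith("devops-guru"):
                apps[(key, value)].append(arn)
    return dict(apps)

resources = {
    "arn:aws:rds:us-east-1:123456789012:db:orders-1": {"devops-guru-app": "orders"},
    "arn:aws:rds:us-east-1:123456789012:db:orders-2": {"devops-guru-app": "orders"},
    "arn:aws:rds:us-east-1:123456789012:db:billing": {"devops-guru-app": "billing"},
}
apps = group_by_devops_guru_tag(resources)
print(len(apps[("devops-guru-app", "orders")]))  # 2
```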

    Amazon DevOps Guru is an ML powered service that makes it easy to improve an application’s operational performance and availability. To learn more, visit the DevOps Guru product and documentation pages or post a question to the Amazon DevOps Guru forum.

    To learn more about DevOps Guru for RDS and enhanced analysis for Aurora databases, see “Working with anomalies in DevOps Guru for RDS” in the Amazon DevOps Guru User Guide.

    » AWS Announces General Availability of AWS Wavelength in Germany

    Posted On: Dec 8, 2021

    Today, we are announcing the general availability of AWS Wavelength on the Vodafone 4G/5G network in Germany. Wavelength Zones are now available in Berlin, Munich, and Dortmund. Developers, enterprises, and Independent Software Vendors (ISVs) can now use the AWS Wavelength Zones in Germany to build ultra-low latency applications for mobile devices and users. AWS Wavelength Zones on Vodafone’s 4G/5G network are now available in four cities across Europe, including the previously announced Wavelength Zone in London.

    AWS Wavelength Zones embed AWS compute and storage services at the edge of communications service providers’ 5G networks while providing seamless access to cloud services running in an AWS Region. By doing so, AWS Wavelength minimizes the latency and network hops required to connect from a 5G device to an application hosted on AWS. With AWS Wavelength and Vodafone 5G, application developers can now build the ultra-low latency applications needed for use cases like smart factories, autonomous vehicles, video analytics and machine learning inference at the edge, and augmented and virtual reality-enhanced experiences.

    Get started with AWS Wavelength.

    » Right-size permissions for more roles in your account using IAM Access Analyzer to generate 50 fine-grained IAM policies per day

    Posted On: Dec 8, 2021

    In April 2021, IAM Access Analyzer added policy generation to help you create IAM policies based on access activity found in your AWS CloudTrail. IAM Access Analyzer has now increased policy generation quotas to 50 per day to help you right-size permissions for more roles in your account. As you right-size permissions across multiple workloads in your account, you can now use policy generation across your roles to grant just the required permissions. To use IAM Access Analyzer policy generation, visit your role’s detail page and select “generate policy” to get started. When you request a policy, IAM Access Analyzer reviews your CloudTrail logs to identify the actions used and creates a fine-grained policy. Read the blog to learn more.

    You can use IAM Access Analyzer in the commercial regions to generate policies in the IAM console or by using APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.

    » Amazon Comprehend Medical adds support for SNOMED CT and reduces pricing across all APIs by up to 90%

    Posted On: Dec 8, 2021

    Amazon Comprehend Medical is a HIPAA-eligible natural language processing service that uses machine learning to extract health data from unstructured medical text accurately and quickly. Much of today's health data is in free-form medical text like doctors’ notes, clinical trial reports, and patient health records. Manually extracting the data is a time-consuming process that requires broad knowledge of synonyms and nonstandard medical terms. As a result, data often remains unusable for the large-scale analytics needed to advance the healthcare and life sciences industries.

    Today, Amazon Comprehend Medical added support for SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms), an ontology that provides customers with comprehensive clinical healthcare terminology to encode medical conditions, medications, and procedures, among other medical concepts. You can now automatically map clinical data from unstructured text to the SNOMED CT, ICD-10-CM, or RxNorm ontologies with a simple API call, to help accelerate research and clinical application building. To get started with Amazon Comprehend Medical's new SNOMED CT API, you can read this blog.
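    A common post-processing step for ontology linking is to keep the highest-confidence concept per detected entity. The response shape and concept codes below are an abbreviated illustration, not the full API output:

```python
# Simplified sketch: for each detected entity, keep the best-scoring
# SNOMED CT concept. Entity text and codes here are illustrative only.

def top_concepts(entities):
    """Map each entity's text to its highest-scoring SNOMED CT code."""
    return {
        e["Text"]: max(e["SNOMEDCTConcepts"], key=lambda c: c["Score"])["Code"]
        for e in entities
        if e.get("SNOMEDCTConcepts")
    }

entities = [
    {"Text": "sleeping trouble",
     "SNOMEDCTConcepts": [
         {"Code": "301345002", "Score": 0.62},
         {"Code": "193462001", "Score": 0.81},
     ]},
]
print(top_concepts(entities))  # {'sleeping trouble': '193462001'}
```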

    We are also excited to announce that Amazon Comprehend Medical has reduced pricing by up to 90% and introduced new tiered, volume-based pricing to address large workloads. The Comprehend Medical price reductions are effective December 8, 2021 in all regions where Comprehend Medical is available and will be automatically reflected in your AWS bill. For more information on the region-specific price reductions and new tiered pricing, please visit the Amazon Comprehend Medical pricing page.
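    Tiered, volume-based pricing means each tier's rate applies only to the units that fall within that tier. The tier boundaries and rates below are made up purely for illustration; see the Comprehend Medical pricing page for actual figures:

```python
# Illustration of tiered, volume-based pricing arithmetic. Tiers and rates
# are hypothetical, not Comprehend Medical's actual prices.

def tiered_cost(units, tiers):
    """tiers: ordered list of (tier_ceiling_or_None, rate_per_unit).
    A ceiling of None marks the unbounded final tier."""
    cost, prev_ceiling = 0.0, 0
    for ceiling, rate in tiers:
        upper = units if ceiling is None else min(units, ceiling)
        if upper > prev_ceiling:
            cost += (upper - prev_ceiling) * rate
        prev_ceiling = ceiling if ceiling is not None else units
        if units <= prev_ceiling:
            break
    return cost

tiers = [(1_000_000, 0.01), (10_000_000, 0.005), (None, 0.001)]
# 1M units at 0.01 plus 1M units at 0.005:
print(tiered_cost(2_000_000, tiers))  # 15000.0
```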

    Learn more about Amazon Comprehend Medical here.

    » AWS End-of-support Migration Program (EMP) Now Supports Assisted Packaging for applications without installation media

    Posted On: Dec 8, 2021

    AWS EMP now supports packaging for end-of-support (EOS) Windows Server applications where installation media is not available, through a guided user interface (UI) experience. With today’s release of the Guided Reverse Packaging (GRP) feature, customers can input the files and folders related to an application, and the tool will automatically search for dependencies such as registry keys and related files and present them to the user for confirmation. Next, customers can simulate typical application workflows to ensure the full scope of dependencies is captured. Customers can then generate the compatibility package, which can be deployed with the running application onto a newer, supported version of Windows Server on EC2.

    Using the AWS EMP tool, customers can replatform EOS Windows applications onto newer, modern versions of Windows Server without any refactoring or code changes, increasing the security posture of applications. Whether or not installation media exists, customers can create a compatibility package for legacy Windows applications through a guided UI experience. To learn more, please refer to the EMP technical documentation or visit the EMP frequently asked questions webpage.

    » AWS Systems Manager now supports application-level cost reporting

    Posted On: Dec 8, 2021

    Application Manager, a capability of AWS Systems Manager, announces a new feature for customers to report and visualize the cost of their applications through integration with AWS Cost Explorer. Application Manager is a central hub on AWS to create, view and operate applications from a single console. With Application Manager, customers can discover and manage their applications across multiple AWS services like AWS CloudFormation, AWS Launch Wizard, AWS Service Catalog App Registry, AWS Resource Groups, Amazon Elastic Kubernetes Service (Amazon EKS), and Amazon Elastic Container Service (Amazon ECS). Using this feature, IT professionals can now view the cost of their applications and application components within the Application Manager console.

    AWS Cost Explorer allows customers to visualize, understand, and manage their application costs and usage over time. With this feature, customers who have Cost Explorer enabled can now see an overview of the costs associated with an application and its components within the Application Manager console. Customers can also click through to the Cost Explorer console to further analyze the cost data for their application. Customers who are new to cost reporting can click the “Go to Billing console” button and set up Cost Explorer to enable cost reporting for their account.

    This new feature is available in all AWS Regions where Systems Manager is offered (excluding the AWS GovCloud (US) Regions). For information about Application Manager, see our documentation. To learn more about AWS Systems Manager, visit our product page.

    » Amazon Location adds Suggestion capability

    Posted On: Dec 8, 2021

    Today, Amazon Location Service is adding Suggestions functionality.

    Amazon Location Service is a fully managed service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Customers using the Amazon Location Place API can search for addresses and points of interest data from our high-quality data providers Esri and HERE.

    With the new Suggestions feature (also known as autocomplete, autosuggest, or fuzzy search), customers can build search boxes for addresses or place names that suggest the complete search text as end-users type their text string in the search box. The suggestions from Amazon Location Service improve user experience by reducing the time and effort to complete a search and improve the accuracy of the results by reducing typing mistakes. This functionality makes it even easier to search for places using Amazon Location Service.
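    The core idea of autocomplete can be sketched in a few lines. Note this toy version only does case-insensitive prefix matching, while the actual Suggestions feature also handles fuzzy matching and ranking; the place names are invented for the example:

```python
# Toy autocomplete sketch: return place names matching the partial text
# typed so far (prefix match only, unlike the service's fuzzy search).

def suggest(partial, place_names, limit=5):
    """Return up to `limit` names starting with the typed prefix."""
    p = partial.casefold()
    return [name for name in place_names if name.casefold().startswith(p)][:limit]

places = ["Seattle, WA", "Sea-Tac Airport", "Portland, OR", "Seaside, OR"]
print(suggest("sea", places))  # ['Seattle, WA', 'Sea-Tac Airport', 'Seaside, OR']
```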

    Amazon Location Service is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney) Region, and Asia Pacific (Tokyo).

    To learn more, read this blog, and visit the Amazon Location Service developer guide.

    » AWS Toolkit for VS Code adds support for Amazon ECS Exec for troubleshooting Amazon ECS

    Posted On: Dec 8, 2021

    The AWS Toolkit for VS Code now provides developers with convenient IDE functionality to connect to Amazon ECS containers and issue commands using Amazon ECS Exec. This allows VS Code users to interact directly with containers, such as running commands in, or getting a shell to, an ECS container running on an Amazon EC2 instance or on AWS Fargate, without leaving their IDE. ECS Exec uses AWS Systems Manager (SSM) Session Manager under the hood to establish a connection with the running container.

    The AWS Toolkit makes it easy to access the ECS service from the AWS Explorer UI. Then, running a command is as easy as choosing a remote container or service and selecting ‘Run command in container’. Review the Toolkit User Guide for details.

    Install the AWS Toolkit for VS Code to start using the ECS Exec features from VS Code. To submit feature requests and report issues, visit our GitHub repo.

    » AWS Glue streaming ETL now integrates with the AWS Glue Schema Registry

    Posted On: Dec 7, 2021

    AWS Glue streaming extract, transform, and load (ETL) jobs can now read from AWS Glue Data Catalog tables created using the AWS Glue Schema Registry. With streaming ETL in AWS Glue, you can set up continuous ingestion pipelines to prepare streaming data on the fly and make it available for analysis in seconds. The AWS Glue Schema Registry allows you to centrally discover, control, and evolve data stream schemas. This integration streamlines the job setup process and simplifies schema enforcement.

    Customers can use this feature to manage and enforce the schema of streaming event data like IoT event streams, clickstreams, and network logs, then quickly set up ETL jobs to process the streams. They can clean and transform those data streams in-flight, and continuously load the results into data stores for analytics.

    This feature is available in the same AWS Regions as AWS Glue.

    To learn more, visit our documentation.

    » NICE EnginFrame adds AWS HPC cluster management with AWS ParallelCluster

    Posted On: Dec 6, 2021

    Today we are announcing general availability of NICE EnginFrame 2021.0. NICE EnginFrame is an easy-to-use, web front-end that makes HPC job submission and management easier for customers. With this latest release, customers are able to use NICE EnginFrame across both on-premises and AWS environments using its new AWS HPC Connector feature. Where customers may have previously used NICE EnginFrame for these tasks on-premises and separately managed AWS resources for HPC using the AWS CLI or AWS Management Console, NICE EnginFrame customers can now manage all of these HPC workflows across both their on-premises and AWS environments using a single, unified interface.

    The new AWS HPC Connector in NICE EnginFrame makes it possible for customers to configure, deploy, and administer managed HPC clusters on AWS. This new functionality complements the existing support for on-premises systems. AWS HPC Connector is built on top of AWS ParallelCluster, an AWS-supported, open source HPC cluster management tool. Using the AWS HPC Connector customers can now submit and manage workloads across both their on-premises and AWS environments from a single, unified interface. By unifying the resources and management of on-premises and AWS environments, teams can access all of their compute from a single place, helping them to save time and simplify workflows. NICE EnginFrame can help customers increase productivity if they need additional capacity for faster time-to-results, access to specialized resources (such as GPUs or FPGAs) to run jobs with more precision, or a way to manage workloads while migrating their HPC workloads from on-premises to AWS.

    For more information, see the NICE EnginFrame 2021.0 release notes or visit the NICE EnginFrame page at https://aws.amazon.com/hpc/enginframe to get started with NICE EnginFrame with AWS HPC Connector today.

    » Amazon Redshift launches single-node RA3.xlplus cluster

    Posted On: Dec 6, 2021

    Amazon Redshift has launched the ability to run a single-node RA3.xlplus cluster. Amazon Redshift RA3 clusters support many important features including Amazon Redshift Managed Storage (RMS), data sharing, and AQUA. Single-node RA3.xlplus clusters allow you to take advantage of the most advanced Redshift features at a lower cost. You can migrate single-node DS2.xlarge or single-node DC2.large clusters to single-node RA3.xlplus clusters using the cross-instance Classic Resize function. You can also use cross-instance Classic Resize as part of the Reserved Instance (RI) Migration feature in the Amazon Redshift Console, CLI, or API to migrate your single-node DS2.xlarge RI clusters to RA3.xlplus RI clusters without changes to the RI contract’s start or end dates and without incurring additional charges.
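
    In API terms, the Classic Resize migration described above can be sketched with boto3. This is a hedged sketch: the cluster identifier is a placeholder, and the API call is wrapped in a function so the module loads without AWS credentials.

```python
# Sketch: migrating a single-node DS2/DC2 cluster to a single-node RA3.xlplus
# cluster with a classic resize. The cluster identifier is hypothetical.

def build_resize_request(cluster_id):
    """Parameters for redshift.resize_cluster targeting single-node ra3.xlplus."""
    return {
        "ClusterIdentifier": cluster_id,
        "NodeType": "ra3.xlplus",
        "NumberOfNodes": 1,  # single-node RA3.xlplus, newly supported
        "Classic": True,     # cross-instance migrations use classic resize
    }

def migrate_to_ra3(cluster_id):
    import boto3  # deferred import: module loads without boto3/credentials
    boto3.client("redshift").resize_cluster(**build_resize_request(cluster_id))
```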

    Single-node RA3.xlplus clusters are now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. To learn more about RA3 nodes, see the Amazon Redshift RA3 feature page. To learn more about DS2 to RA3 upgrades, see the Upgrading to RA3 node types section of the Amazon Redshift documentation. You can find more information on pricing by visiting the Amazon Redshift pricing page.

    » AWS AppSync now supports custom domain names for AppSync GraphQL endpoints

    Posted On: Dec 6, 2021

    Today, we are releasing a new feature in AWS AppSync that allows customers to use custom domain names with their AWS AppSync GraphQL APIs.

    AWS AppSync is a managed GraphQL service that simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources with fewer network calls. With AWS AppSync, you create GraphQL and real-time APIs that your applications interact with over the internet to access data and receive real-time updates. AWS AppSync makes it easy for different types of applications to connect to a GraphQL API by supporting multiple modes of authorization simultaneously on the API.

    We are now introducing a way for you to use simple and memorable endpoint URLs with domain names of your choosing, by creating custom domain names that you can associate with AppSync APIs in your account. With custom domain names, you can use a single custom domain that works for both your GraphQL API and your Realtime API. To create a custom domain name in AppSync, you simply specify a domain name you own and provide a valid AWS Certificate Manager certificate that covers your domain. AppSync will provide you with a new AppSync domain name. Once your custom domain name is created, you can associate it with any available AppSync API in your account. After you’ve updated your DNS record to route to the AppSync domain name, you can configure your applications to use the new GraphQL endpoint. You can change the API association on your custom domain name at any time (e.g., to execute blue/green deployments) without having to update your applications. When AppSync receives a request on the custom domain endpoint, the request is handled by the associated API.
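
    The create-then-associate flow above can be sketched with boto3; the domain name, ACM certificate ARN, and API ID below are hypothetical placeholders.

```python
# Sketch: creating an AppSync custom domain name and associating an API with it.
# The domain, certificate ARN, and API ID are hypothetical placeholders.

def build_domain_request(domain, certificate_arn):
    """Parameters for appsync.create_domain_name."""
    return {"domainName": domain, "certificateArn": certificate_arn}

def build_association_request(domain, api_id):
    """Parameters for appsync.associate_api. Re-running with a different
    api_id switches the association (e.g. a blue/green cutover) without
    any client-side changes."""
    return {"domainName": domain, "apiId": api_id}

def set_up_custom_domain(domain, certificate_arn, api_id):
    import boto3  # deferred import: module loads without boto3/credentials
    appsync = boto3.client("appsync")
    appsync.create_domain_name(**build_domain_request(domain, certificate_arn))
    appsync.associate_api(**build_association_request(domain, api_id))
```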

    Custom Domain Names are available in all AWS regions where AppSync is available. For more details, refer to our blog post and the AppSync documentation.

    » Amazon S3 File Gateway now supports NFS file share auditing

    Posted On: Dec 6, 2021

    AWS Storage Gateway now supports NFS file share auditing, logging end-user access to files, folders, and file shares on Amazon S3 File Gateway. Amazon S3 File Gateway provides on-premises applications with file-based, cached access to virtually unlimited cloud storage using SMB and NFS protocols. This feature is intended for IT administrators and compliance managers who need audit logs about user access to files and folders for security and compliance requirements.

    With this launch, NFS client operations for files and folders are logged to provide key operations for files and folders including create, delete, read, write, rename, and change of permissions. You can publish logs to Amazon CloudWatch Logs or stream logs to Amazon Kinesis Data Firehose, enabling you to query, process, store, and archive logs and trigger actions.
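
    Assuming the NFS audit destination is configured the same way as the existing SMB file share audit setting (an AuditDestinationARN on the file share — verify the parameter name against the Storage Gateway API reference), enabling it might look like this; both ARNs are placeholders.

```python
# Sketch: pointing an NFS file share's audit logs at a CloudWatch Logs group.
# AuditDestinationARN on update_nfs_file_share is an assumption modeled on the
# SMB file share equivalent; the ARNs below are hypothetical.

def build_audit_update(file_share_arn, destination_arn):
    return {
        "FileShareARN": file_share_arn,
        # a CloudWatch Logs log group ARN or a Kinesis Data Firehose ARN
        "AuditDestinationARN": destination_arn,
    }

def enable_nfs_audit(file_share_arn, destination_arn):
    import boto3  # deferred import: module loads without boto3/credentials
    boto3.client("storagegateway").update_nfs_file_share(
        **build_audit_update(file_share_arn, destination_arn)
    )
```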

    This capability is now available on new gateways in all commercial regions. For existing gateways, this new capability will be made available during the next scheduled software update. Visit the AWS Storage Gateway product page to learn more and access the AWS Storage Gateway console to get started.

    » Amazon S3 File Gateway enables administrators to force the closing of locked files

    Posted On: Dec 6, 2021

    Amazon S3 File Gateway now enables you to force-close locked files on SMB file shares on Amazon S3 File Gateway by granting access through local gateway groups. Amazon S3 File Gateway provides on-premises applications with file-based, cached access to virtually unlimited cloud storage using SMB and NFS protocols. End users and applications working with files on SMB shares may stop using those files without closing them, which leaves the files in an open, or locked, state. Until now, gateway administrators did not have permission to close these files.

    You can now seamlessly assign force-closing permissions to users and groups from the connected Active Directory by adding them to the GatewayAdmin local group using the AWS Storage Gateway console, API, or CLI. The GatewayAdmin local group provides the permissions needed to force the closing of locked files on a given gateway. Those users and groups can then use the same Windows tools they use today to unlock files on SMB file shares on the same gateway.

    This capability is now available on new gateways in all commercial regions. For existing gateways, this new capability will be made available during the next scheduled software update. Visit the AWS Storage Gateway product page to learn more and access the AWS Storage Gateway console to get started.

    » AWS Systems Manager Fleet Manager now offers console based viewing and management of instance processes

    Posted On: Dec 6, 2021

    Fleet Manager, a feature in AWS Systems Manager (SSM) that helps IT Admins streamline and scale their remote server management tasks, now offers an easy console-based experience for customers to view and manage processes on their instances. This new feature provides customers a consolidated view of the processes running on an instance coupled with the ability to assess their resource consumption in real-time and optimize operations through start/stop actions.

    The new Fleet Manager feature displays a list of all the processes currently running on the server. Customers are able to see granular details such as process name, process details, and utilization metrics for every active process. They can query processes by name and easily sort the columns based on any of the process parameters. In addition to this detailed reporting, the process management capability in Fleet Manager shows the aggregate counts of CPU, memory, handles, and threads being consumed in real-time. Customers can even terminate unwanted processes directly from the Fleet Manager console or start a new process by providing the process name or full path of the executable they want to run. Moreover, customers can choose to start a new process in the current directory or specify a different path in the console.

    Fleet Manager is a console based experience in Systems Manager that provides you with visual tools to manage your Windows, Linux, and macOS servers. With it, you can easily perform common administrative tasks such as file system exploration, log management, performance counters, and user management from a single console. Fleet Manager manages instances running on AWS and on-premises, without needing to remotely connect to the servers. Fleet Manager is available in all AWS Regions where Systems Manager is offered (excluding AWS China Regions). To learn more about Fleet Manager, visit our web-page, read our blog post, or see our documentation and AWS Systems Manager FAQs. To get started, choose Fleet Manager from the Systems Manager left navigation pane.

    » Amazon Polly introduces Takumi, a new neural Japanese male voice

    Posted On: Dec 6, 2021

    Amazon Polly is a service that turns text into lifelike speech. Today, we are excited to announce the general availability of a neural version of Takumi, Polly’s Japanese male text to speech (TTS) voice. Takumi neural TTS sounds natural, friendly and smooth. With this launch, you can now select from three unique Japanese TTS voices: Mizuki Standard, Takumi Standard and Takumi Neural.
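
    Selecting the new voice from code is a one-parameter change; a minimal boto3 sketch (the output path is arbitrary, and the client call is wrapped in a function so the module loads without AWS credentials):

```python
# Sketch: synthesizing Japanese speech with the neural Takumi voice.

def build_speech_request(text):
    """Parameters for polly.synthesize_speech using the neural engine."""
    return {
        "Text": text,
        "OutputFormat": "mp3",
        "VoiceId": "Takumi",
        "Engine": "neural",  # use "standard" for the Takumi Standard voice
    }

def synthesize(text, out_path="takumi.mp3"):
    import boto3  # deferred import: module loads without boto3/credentials
    response = boto3.client("polly").synthesize_speech(**build_speech_request(text))
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())
```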

    You can use Amazon Polly to enhance the user experience and improve the accessibility of your text content with the power of voice. Common use cases include interactive voice response (IVR) systems, audiobooks, newsreaders, eLearning content, and virtual assistants. Amazon Polly neural voices are now available in 14 languages. Standard voices are supported in 31 languages.

    To get started with Takumi, please log in to the Amazon Polly console and review the documentation. For more details, review our full list of Amazon Polly text-to-speech voices, Neural TTS pricing, regional availability, service limits, and FAQs.

    » Amazon Aurora R6g instances, powered by AWS Graviton2 processors, are now available in Europe (Milan), Europe (Paris), and Europe (Stockholm) Regions

    Posted On: Dec 6, 2021

    AWS Graviton2-based R6g database instances are now available in Europe (Milan), Europe (Paris), and Europe (Stockholm) regions for Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition.

    These instances are powered by the AWS Graviton2 processors that are custom designed by AWS using 64-bit Arm Neoverse cores. AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors. These performance improvements make Graviton2 database instances a great choice for database workloads.

    You can launch R6g instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton2 requires a simple instance type modification, using the same steps as any other instance modification.  For more details, refer to the documentation.
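
    The instance modification mentioned above can be sketched with boto3; the instance identifier and target class below are illustrative.

```python
# Sketch: moving an existing Aurora instance to a Graviton2 (db.r6g) class.
# The instance identifier is hypothetical.

def build_modify_request(instance_id, instance_class="db.r6g.large"):
    """Parameters for rds.modify_db_instance; the same flow as any other
    instance class change."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        "ApplyImmediately": True,  # False defers to the maintenance window
    }

def upgrade_to_graviton2(instance_id):
    import boto3  # deferred import: module loads without boto3/credentials
    boto3.client("rds").modify_db_instance(**build_modify_request(instance_id))
```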

    For complete information on pricing and regional availability, please refer to the Amazon Aurora pricing page. Review our technical documentation for more details.

    » AWS WAF adds support for CloudWatch Log and logging directly to S3 bucket

    Posted On: Dec 6, 2021

    You can now send AWS WAF logs directly to a CloudWatch Logs log group or to an Amazon S3 bucket. With this launch, we’re adding two new optional destinations for WAF logs in addition to Amazon Kinesis Data Firehose, which was already supported. When you use CloudWatch Logs as your WAF log destination, you can search and analyze WAF logs directly in the WAF console using CloudWatch Logs Insights. Using CloudWatch Logs Insights, you can view individual logs, compile aggregated reports, create visualizations, and construct dashboards.

    To send WAF logs directly to a CloudWatch Logs log group or an S3 bucket, log into the AWS WAF Console, select a web access control list (web ACL), and access the logging and metrics section to add or change the logging destination. To search and analyze WAF logs you must select CloudWatch Logs as the logging destination. Once enabled, navigate to the AWS WAF Console and select the CloudWatch Logs Insights tab.
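
    The console steps above correspond to the wafv2 PutLoggingConfiguration API; a hedged boto3 sketch with placeholder ARNs:

```python
# Sketch: sending WAF logs for a web ACL directly to a CloudWatch Logs group.
# Both ARNs are hypothetical placeholders; an S3 bucket ARN (or the
# pre-existing Kinesis Data Firehose ARN) works as a destination too.

def build_logging_config(web_acl_arn, destination_arn):
    """LoggingConfiguration for wafv2.put_logging_configuration."""
    return {
        "ResourceArn": web_acl_arn,
        "LogDestinationConfigs": [destination_arn],
    }

def enable_logging(web_acl_arn, destination_arn):
    import boto3  # deferred import: module loads without boto3/credentials
    boto3.client("wafv2").put_logging_configuration(
        LoggingConfiguration=build_logging_config(web_acl_arn, destination_arn)
    )
```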

    There is no additional AWS WAF cost to enable logging to these new destinations but standard service charges for AWS WAF, CloudWatch Logs, and S3 will still apply. Logging is available in all AWS WAF regions and for each supported service, including Amazon CloudFront, Application Load Balancer, Amazon API Gateway, and AWS AppSync. To learn more, see the AWS WAF developer guide.

    » Amazon Pinpoint now includes a one-time password (OTP) management feature

    Posted On: Dec 6, 2021

    Amazon Pinpoint now includes a one-time password (OTP) management feature. An OTP is an automatically generated string of characters that authenticates a user for a single login attempt or transaction. The OTP feature makes it easier to add OTP workflows to your application, site, or service. You can use this feature to generate new OTP codes and send them to your recipients as SMS text messages. Your applications can then call the Amazon Pinpoint API to validate that the OTP code the recipient entered is valid. 

    When you use the OTP management feature, messages are sent with a pre-defined template. You can customize the brand name that appears in the message, the origination identity (such as a phone number or Sender ID) that is used to send the message, the length of the OTP code, the amount of time the code remains valid, and the number of allowed validation attempts. You can also select one of 12 different languages for the message body. To learn more about the OTP management feature, see Sending and validating one-time passwords (OTPs) in the Amazon Pinpoint User Guide.
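
    A sketch of the send-and-validate round trip described above. Treat the request field names as assumptions to verify against the Pinpoint API reference; the application ID, phone numbers, brand name, and reference ID are all placeholders.

```python
# Sketch: request shapes for Pinpoint's send_otp_message / verify_otp_message.
# Field names are assumptions to check against the Pinpoint API reference;
# all identifiers below are hypothetical.

def build_send_request(app_id, destination, origination, reference_id):
    return {
        "ApplicationId": app_id,
        "SendOTPMessageRequestParameters": {
            "BrandName": "ExampleCo",     # appears in the templated message
            "Channel": "SMS",
            "DestinationIdentity": destination,  # recipient phone number
            "OriginationIdentity": origination,  # sending number or Sender ID
            "ReferenceId": reference_id,  # must match at validation time
            "CodeLength": 6,
            "ValidityPeriod": 10,         # how long the code stays valid
            "AllowedAttempts": 3,
        },
    }

def build_verify_request(app_id, destination, reference_id, otp):
    return {
        "ApplicationId": app_id,
        "VerifyOTPMessageRequestParameters": {
            "DestinationIdentity": destination,
            "ReferenceId": reference_id,
            "Otp": otp,  # the code the recipient entered
        },
    }
```

    You would pass these to boto3’s pinpoint client as send_otp_message(**...) and verify_otp_message(**...) respectively.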

    » Amazon SageMaker Model Registry now supports endpoint visibility, custom metadata and model metrics

    Posted On: Dec 3, 2021

    SageMaker Model Registry, a purpose-built service that enables customers to catalog their ML models, now provides endpoint visibility from the Studio UI, the ability to store custom metadata, and the ability to view and store a broad array of metrics for a given model.

    SageMaker Model Registry catalogs customers’ models in logical groups (model groups) and stores incremental versions of models as model package versions. Now, customers can associate custom metadata and custom metrics with a model package version. They can also store a broad array of metrics and baselines on a model package version, such as data quality, model quality, model bias, and model explainability. In addition, customers can view the SageMaker inference endpoint where the model is hosted and dive deep into endpoint metadata.
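
    A hedged sketch of attaching custom metadata when registering a model package version; the group name and metadata keys are hypothetical, and required fields such as the inference specification are omitted for brevity.

```python
# Sketch: registering a model package version with custom metadata.
# CustomerMetadataProperties carries free-form string key/value pairs; other
# required create_model_package fields (e.g. InferenceSpecification) are
# omitted here for brevity. Names below are hypothetical.

def build_model_package_request(group_name):
    return {
        "ModelPackageGroupName": group_name,
        "ModelApprovalStatus": "PendingManualApproval",
        "CustomerMetadataProperties": {
            "trained_by": "team-fraud",
            "git_commit": "abc1234",
        },
    }

def register_version(group_name):
    import boto3  # deferred import: module loads without boto3/credentials
    return boto3.client("sagemaker").create_model_package(
        **build_model_package_request(group_name)
    )
```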

    This feature is available in all AWS regions where Amazon SageMaker is available. To get started, create a new SageMaker Model Package Group from the Amazon SageMaker SDK or Studio and visit our documentation page on custom metadata and model metrics.

    » AWS Database Migration Service now supports Time Travel, an improved logging mechanism

    Posted On: Dec 3, 2021

    AWS Database Migration Service (AWS DMS) expands its functionality by introducing Time Travel, a feature that gives customers more flexibility in their logging capabilities and enhances their troubleshooting experience. With Time Travel, you can store and encrypt AWS DMS logs using Amazon S3, and view, download, and obfuscate the logs within a certain time frame. Time Travel helps make troubleshooting easier and more secure.
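
    As a hedged sketch of what the task settings might look like (the setting names TTSettings, EnableTT, TTS3Settings, and TTRecordTtl should be verified against the Time Travel task settings documentation; the role ARN and bucket name are placeholders):

```python
# Sketch: the Time Travel portion of DMS replication task settings. Setting
# names are assumptions to verify against the DMS documentation; the role ARN
# and bucket name are hypothetical.
import json

def build_time_travel_settings(role_arn, bucket):
    return {
        "TTSettings": {
            "EnableTT": True,
            "TTS3Settings": {
                "EncryptionMode": "SSE_KMS",       # encrypt logs at rest
                "ServiceAccessRoleArn": role_arn,  # role DMS uses for S3
                "BucketName": bucket,
            },
            "TTRecordTtl": 3600,  # record retention; check units in the docs
        }
    }

def as_task_settings_json(role_arn, bucket):
    """modify_replication_task accepts task settings as a JSON string."""
    return json.dumps(build_time_travel_settings(role_arn, bucket))
```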

    To learn more about this new feature, see Time Travel task settings.
    For regional availability, please refer to the AWS Region Table.

    » Amazon Fraud Detector is now in scope for AWS SOC Reports

    Posted On: Dec 3, 2021

    Amazon Fraud Detector is now in scope for AWS SOC 1, SOC 2, and SOC 3 reports. You can now use Amazon Fraud Detector in applications requiring audited evidence of the controls in our System and Organization Controls (SOC) reporting. For example, if you use AWS to detect fraud and abuse, you can use the SOC reports to help meet your compliance requirements for those use cases. AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives.

    Amazon Fraud Detector is a fully managed service enabling customers to identify potentially fraudulent activities and catch more online fraud faster. With features such as default encryption at rest and in transit, audit logging via AWS CloudTrail, private connectivity via AWS PrivateLink, and access control via AWS Identity and Access Management (IAM) roles, Amazon Fraud Detector has a comprehensive security management program that follows leading industry practices. In addition to meeting standards for SOC, Amazon Fraud Detector is also Payment Card Industry Data Security Standard (PCI DSS) compliant. You can go to the Services in Scope by Compliance Program page to see a full list.

    To get started with Amazon Fraud Detector, visit our product page.

    » AWS Lambda now logs Hyperplane Elastic Network Interface (ENI) ID in AWS CloudTrail data events

    Posted On: Dec 3, 2021

    AWS Lambda now logs the Hyperplane Elastic Network Interface (ENI) ID in AWS CloudTrail data events, for functions running in an Amazon Virtual Private Cloud (VPC). Customers can use the ENI ID in AWS CloudTrail data events to audit the security of their applications, and verify that only authorized functions are accessing their VPC resources through a shared Hyperplane ENI.

    Today, Lambda functions configured for a VPC access resources in the VPC using a Hyperplane ENI. Multiple Lambda functions using the same subnet and security group combination can reuse a Hyperplane ENI. With this feature, customers can now map which Lambda function invoked a Hyperplane ENI using CloudTrail data events. This is especially useful for customers in the financial services and healthcare sectors who have stringent audit and regulatory compliance requirements.

    AWS Lambda support for logging the Hyperplane ENI ID in AWS CloudTrail data events is generally available in US East (N. Virginia), US West (N. California), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), Canada (Central), Middle East (Bahrain), South America (São Paulo), and Africa (Cape Town). For more information on availability, please see the AWS Region table.

    To understand how Lambda functions access resources in VPC using a Hyperplane ENI, refer to this blog on VPC networking for Lambda functions. For details on how to use AWS Lambda with CloudTrail, refer to Lambda developer guide. For information on CloudTrail data events, see CloudTrail data events documentation.

    » Announcing preview of SQL Notebooks support in Amazon Redshift Query Editor V2

    Posted On: Dec 3, 2021

    Amazon Redshift simplifies organizing, documenting, and sharing multiple SQL queries with support for SQL Notebooks (preview) in Amazon Redshift Query Editor V2. The new Notebook interface enables users such as data analysts and data scientists to author queries more easily, organizing multiple SQL queries and annotations in a single document. They can also collaborate with their team members by sharing Notebooks.

    Data users engaging in advanced analytics work on multiple queries at a time to perform various tasks for their data analysis. Query Editor V2 helps you organize related queries by saving them together in a folder, or combining them into a single saved query with multiple statements. The Notebooks support provides an alternative way to embed all queries required for a complete data analysis in a single document using SQL cells. You can share your Notebooks with team members, similar to how you share your saved queries in Query Editor V2. Documenting your work precisely enables further collaboration with other users. Using the Markdown cells, you can include detailed context in your work, helping ease the learning curve for others to work on even the most complicated data analysis tasks. 

    The Notebooks in Query Editor V2 is available for preview in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Tokyo). See the documentation to get started. To learn more, refer to the Amazon Redshift Cluster Management Guide.

    » Amazon Redshift announces preview of Automated Materialized View

    Posted On: Dec 3, 2021

    Automated Materialized View (AutoMV) for Amazon Redshift helps lower query latency for repeatable workloads like dashboard queries, while minimizing the effort of manually creating and managing materialized views.

    Materialized views are a powerful tool for improving query performance, but they require careful workload monitoring and analysis to determine where they may provide the best returns. This may take hours to days and requires performance-tuning knowledge. Additionally, growing and changing workloads require continual monitoring by users.

    AutoMV in Amazon Redshift continually monitors the workload using machine learning to decide whether a new materialized view will be beneficial. AutoMV balances the cost of creating and keeping materialized views up-to-date vs expected improvements to query latency. The system also monitors previously created AutoMVs and drops them when they are no longer beneficial to the workload. This avoids expending resources to keep unused AutoMVs fresh.

    Automated Materialized View (preview) is available for all AWS Commercial Regions. See AWS Region Table for more details. To learn more about AutoMV, please visit the documentation.

    » Amazon Redshift announces support for VARBYTE data type

    Posted On: Dec 3, 2021

    Amazon Redshift has launched support for the VARBYTE data type. VARBYTE is a variable-size data type for storing and representing variable-length binary strings. With this announcement, Amazon Redshift can now support variable-length binary data for use with core Amazon Redshift features, SQL UDFs, and SQL DDL for creating VARBYTE columns in tables. The VARBYTE(n) syntax gives you the flexibility to specify the size ‘n’; the default for n is 64 KB and the maximum is 1 MB. VARBYTE values are displayed as hexadecimal values to ensure all binary bytes are printable.

    VARBYTE support is now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. For more information or to get started with Amazon Redshift, see the documentation.

    » AWS Resource Access Manager enables support for global resource types

    Posted On: Dec 2, 2021

    AWS Resource Access Manager (RAM) now supports global resource types, enabling you to provision a global resource once and share that resource across your accounts. A global resource is a resource that can be used in multiple AWS Regions. For example, you can now create a RAM resource share with an AWS Cloud WAN core network, which is a managed network containing AWS and on-premises networks, and share it across your organization. As a result, you can use the Cloud WAN core network to centrally operate a unified global network across Regions and across accounts.

    RAM helps you securely share resources with individual AWS accounts, within an organization or organizational units (OUs) in AWS Organizations, and with AWS Identity and Access Management (IAM) roles and users for supported resource types. You can share global resources the same way you share regional resources. With RAM, you can lower management overhead and achieve greater consistency of cross-account operation of global and regional resources.

    With this release, we are designating US East (N. Virginia) as the home Region where you can create, discover, update, or delete resource shares containing global resources. Resource owners and users can use the RAM console and APIs in the US East (N. Virginia) Region to list, share and discover regional and global resources. Resource users can also discover and use shared Cloud WAN core networks directly through the Cloud WAN console and APIs.

    To learn more about sharing global resources with RAM, visit Sharing Regional resources compared to global resources in the AWS RAM User Guide. To get started, visit the AWS Resource Access Manager Console.

    » AWS Cloud Development Kit (AWS CDK) v2 is now generally available

    Posted On: Dec 2, 2021

    The AWS Cloud Development Kit (AWS CDK) v2 for JavaScript, TypeScript, Java, Python, .NET, and Go (preview) is now generally available in a single package, making it easier for you to use the CDK and stay up to date with new versions as we evolve it going forward. AWS CDK v2 consolidates the AWS Construct Library into a single package called aws-cdk-lib, and eliminates the need to download individual packages for each AWS service used. If you write your own CDK construct libraries, you only need to take a minimum dependency on this single package and let library consumers choose which exact AWS CDK version to use.

    AWS CDK v2 only includes stable APIs, which comply with Semantic Versioning (semver), so you can confidently update to new minor versions. The CDK follows the “release early, release often” philosophy to encourage community participation, and we will continue to deliver new features via experimental APIs for your feedback. However, going forward, experimental modules will be distributed separately from aws-cdk-lib, versioned clearly to indicate their pre-release status, and only merged into aws-cdk-lib when mature and stable.

    In addition to simplified packaging, the CDK includes developer productivity improvements such as a refreshed CDK API Reference with code snippets throughout, and CDK Watch for faster inner-loop development iterations on the application code (AWS Lambda handler code, Amazon ECS tasks, and AWS Step Functions state machines) in your CDK project. You can also preserve successfully provisioned resources by disabling automatic stack rollbacks, further reducing deployment and iteration time. To find issues earlier in your infrastructure code development cycle, you can use the new assertions library to run automated unit tests in any CDK-supported language.

    Upgrading to AWS CDK v2, for most projects, can be accomplished with a one-time, safe re-bootstrapping of your AWS accounts and “import” statement changes. To learn more, refer to the following resources:

  • Read "How customer feedback shaped the AWS Cloud Development Kit version 2" for more details
  • Learn about Migrating to CDK v2 in the CDK Developer Guide 
  • Get started with the AWS CDK in all supported languages by taking the CDK Workshop
  • Check out the new code snippets throughout the API Reference
  • Find useful constructs published by AWS, partners and the community in Construct Hub
  • Connect with the community in the cdk.dev Slack workspace
  • Follow our Contribution Guide to learn how to contribute fixes and features to the CDK

    » New Sustainability Pillar for the AWS Well-Architected Framework

    Posted On: Dec 2, 2021

    The AWS Well-Architected Framework has been helping AWS customers improve their cloud workloads since 2015. The framework consists of design principles, questions, and best practices across multiple pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. Today we are introducing a new AWS Well-Architected Sustainability Pillar to help organizations learn, measure, and improve workloads using environmental best practices for cloud computing.

    With an increasing number of organizations setting sustainability targets, CTOs, architects, developers, and operations team members are seeking ways that they can directly contribute to their organization's sustainability goals. Using the new AWS Well-Architected Sustainability Pillar, you can make informed decisions in balancing security, cost, performance, reliability, and operational excellence with sustainability outcomes for your cloud workloads. Every action taken to reduce resource usage and increase efficiency across all components of a workload contributes to a reduced environmental impact for that workload as well as contributing to your organization's wider sustainability goals.

    To learn more, see the new Sustainability Pillar for the AWS Well-Architected Framework.

    » Introducing AWS Cloud WAN Preview

    Posted On: Dec 2, 2021

    Today AWS announced the preview release of AWS Cloud WAN, a new wide area networking (WAN) service that helps you build, manage, and monitor a unified global network that manages traffic running between resources in your cloud and on-premises environments.

    With Cloud WAN, you use a central dashboard and network policies to create a global network that spans multiple locations and networks—eliminating the need to configure and manage different networks individually using different technologies. Your network policies can be used to specify which of your Amazon Virtual Private Clouds (VPCs) and on-premises locations you wish to connect through AWS VPN or third-party software-defined WAN (SD-WAN) products, and the Cloud WAN central dashboard generates a complete view of the network to monitor network health, security, and performance. Cloud WAN automatically creates a global network across AWS Regions using Border Gateway Protocol (BGP) so you can easily exchange routes around the world.

    Cloud WAN is available in ten AWS Regions in Public Preview: US East (Northern Virginia), US West (Northern California), Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), and South America (São Paulo). Learn more by visiting the product overview page and documentation. To get started, visit the Cloud WAN console or read our technical blog post.

    » AWS SDK for Rust (Developer Preview)

    Posted On: Dec 2, 2021

    We’re excited to announce the AWS SDK for Rust is now in developer preview. The AWS SDK for Rust empowers developers to interact with AWS services and enjoy APIs that follow Rust idioms and best practices. It utilizes modern Rust language features like async/await, non-blocking IO, and builders. The SDK also integrates with popular libraries in the Rust ecosystem like Tokio, Tracing, and Hyper.

    This developer preview release supports access to 288 AWS services, each with its own crate. All crates are available on crates.io. The SDK provides automatic configuration when running in environments such as EC2, ECS, and Lambda, built-in retry support, and a wide variety of authentication mechanisms to meet customer needs. The AWS SDK for Rust is engineered to be fast, with serializers and deserializers that minimize unnecessary copies and allocations in order to reduce CPU and memory utilization, freeing up more resources for your application.

    Since this is a preview release, we are providing this SDK for early access and evaluation purposes only. Our public APIs may change before the general availability release as we gather more customer feedback and learn what is most important to Rust developers. We’d like to offer our sincere thanks to everyone who evaluated and provided feedback during the alpha — your time, expertise, and ideas helped shape our preview release. We are especially thankful to the Rusoto authors and maintainers who worked on Rusoto since its first release in 2015.

    To get started with the AWS SDK for Rust, visit our Getting Started Guide and Product Detail Page. You can learn where the project is headed on our Roadmap, and provide feedback and see source code at our GitHub repository.

    » Announcing Amazon EC2 M1 Mac instances for macOS

    Posted On: Dec 2, 2021

    Starting today, Amazon Elastic Compute Cloud (EC2) M1 Mac instances for macOS are available in preview. Built on Apple silicon Mac mini computers and powered by the AWS Nitro System, EC2 M1 Mac instances deliver up to 60% better price performance over x86-based EC2 Mac instances for iOS and macOS application build workloads. EC2 M1 Mac instances also enable native ARM64 macOS environments for the first time in AWS to develop, build, test, deploy, and run Apple applications. Developers rearchitecting their macOS applications to natively support Apple silicon Macs can now provision ARM64 macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing to enjoy faster builds and convenient distributed testing. Learn more and get started with the EC2 M1 Mac instances preview here.

    Today, millions of Apple developers across the globe are seeking to support Apple’s once-in-a-generation transition to a new family of custom silicon designed specifically for the Mac, starting with the M1 System on Chip (SoC). M1 is the first personal computer SoC built using cutting-edge 5-nanometer process technology; it combines numerous powerful technologies into a single chip and features a unified memory architecture for improved performance and efficiency. The Apple-designed M1 SoC also brings the ARM64 architecture to macOS for the first time. Now with EC2 M1 Mac instances, Apple developers can enjoy up to 60% better price performance over x86-based EC2 Mac instances for iOS and macOS application build workloads, while enjoying the same elasticity, scalability, and reliability that AWS’s secure, on-demand infrastructure has offered to millions of customers for more than a decade. Customers rearchitecting their macOS applications to natively support the ARM64 architecture on Apple silicon can now enjoy faster builds with bare-metal performance and convenient distributed testing without having to procure, install, manage, patch, and upgrade physical build infrastructure. Customers can also consolidate development of cross-platform Apple, Windows, and Android apps onto AWS, leading to increased developer productivity and accelerated time to market.

    Like other EC2 instances, EC2 M1 Mac instances work with AWS services and features such as Amazon Virtual Private Cloud (VPC) for network security, Amazon Elastic Block Store (EBS) for expandable storage, Elastic Load Balancing (ELB) for distributing build queues, and Amazon Machine Images (AMIs) for OS image orchestration. The availability of EC2 M1 Mac instances also offloads to AWS the heavy lifting that comes with managing infrastructure, so Apple developers can focus on rearchitecting their macOS applications to natively support Macs with Apple silicon.

    EC2 M1 Mac instances are powered by a combination of Apple silicon Mac mini computers—featuring the M1 chip with 8 CPU cores, 8 GPU cores, 16 GiB of memory, and a 16-core Apple Neural Engine—and the AWS Nitro System, providing up to 10 Gbps of VPC network bandwidth and 8 Gbps of EBS storage bandwidth through high-speed Thunderbolt connections. EC2 M1 Mac instances are uniquely enabled by the AWS Nitro System, which makes it possible to offer Mac mini computers as fully integrated and managed compute instances with Amazon VPC networking and EBS storage, just like any other EC2 instance. EC2 M1 Mac instances also support both macOS Big Sur 11 and macOS Monterey 12 as Amazon Machine Images (AMIs).

    The EC2 M1 Mac instances preview starts today in the US West (Oregon) and US East (N. Virginia) AWS Regions, with additional AWS Regions coming soon. Learn more about EC2 M1 Mac instances here, or request access to the preview here.

    » AWS and partners of the Open 3D Foundation announce the first Stable release of Open 3D Engine

    Posted On: Dec 2, 2021

    Today, AWS and the Open 3D Foundation (O3DF) announced the first stable release of Open 3D Engine (O3DE), an Apache 2.0 licensed multi-platform 3D engine that enables developers to build AAA games, cinema-quality 3D worlds for video production, and simulations for non-gaming use-cases unencumbered by licensing fees or commercial terms. Since the formation of O3DF and launch of the O3DE Developer Preview in July, over 250 developers from a wide range of industries have contributed thousands of pull requests, issues, and millions of lines of code changes to add developer features, improve stability, and increase performance to ensure that O3DE is ready for use in live games and simulations. As the successor to Amazon Lumberyard, O3DE offers developers and content creators a wide set of 3D content creation tools and a growing community of developers and foundation partners including AccelByte, Adobe, Apocalypse Studios, Audiokinetic, AWS, Backtrace.io, Carbonated, Futurewei, GAMEPOCH, Genvid Technologies, Hadean, HERE Technologies, Huawei, Intel, International Game Developers Association, KitBash3D, Kythera AI, Niantic, Open Robotics, PopcornFX, Red Hat, Rochester Institute of Technology, SideFX, Tafi, TLM Partners and Wargaming.

    With today’s release, “Stable 21.11”, developers can build 3D games, simulations, and customized versions of the engine on a stable foundation with support from the community and O3DF. Developers using Linux can now install a native version of the engine with the Debian-based Linux package distribution. Teams using Windows can get started faster with a verified Windows installer. This release also adds new developer features including performance profiling and benchmarking tools, an experimental terrain system, a Script Canvas integration for the multiplayer networking system, and an SDK to facilitate engine customization with platform support for PC, macOS, iOS, and Android. In addition to core engine capabilities, Open 3D Foundation partners have contributed new capabilities to O3DE through the extensible Gem system. Kythera released an update to their artificial intelligence Gem to add support for the pre-built O3DE SDK, enabling creators to include AI behaviors in their games and simulations. Cesium released a geospatial 3D tile extension. The Gem system has also been extended to enable external Gem repositories, making it even easier to add capabilities from third-party contributors.

    » Announcing Extended Maintenance Plan for FreeRTOS

    Posted On: Dec 2, 2021

    Today, we are announcing Extended Maintenance Plan for FreeRTOS - a real-time operating system for microcontrollers. FreeRTOS Extended Maintenance Plan (EMP) allows embedded developers to receive critical bug fixes and security patches on their chosen FreeRTOS Long Term Support (LTS) version for up to 10 years beyond the expiry of the initial LTS period. FreeRTOS EMP helps customers secure their microcontroller-based devices for years, save operating system upgrade costs, and reduce risks associated with patching their devices. FreeRTOS EMP applies to libraries covered by FreeRTOS LTS, so developers can continue using a version that provides feature stability, security patches, and critical bug fixes, without having to plan a costly version upgrade.

    FreeRTOS EMP has a flexible annual subscription plan. Developers can continue to renew their subscriptions annually for a duration (up to 10 years) that aligns with their device lifecycle or application requirements. During the subscription period, developers receive timely notification of upcoming patches on FreeRTOS libraries, so they can plan the deployment of security patches on their Internet of Things (IoT) devices. Before the end of the current FreeRTOS LTS period, developers will be able to subscribe to FreeRTOS EMP using their AWS account, and renew the subscription annually to cover the product lifecycle or until they are ready to transition to a new FreeRTOS release.

    To learn more about FreeRTOS EMP, refer to the FreeRTOS features webpage and frequently asked questions. Sign up to get periodic updates on when and how you can subscribe to FreeRTOS EMP.

    » AWS announces Construct Hub general availability

    Posted On: Dec 2, 2021

    Today we are announcing the general availability of Construct Hub, a registry of open-source construct libraries that simplify cloud development. Constructs are reusable building blocks of the Cloud Development Kits (CDKs). Discover and share constructs for the AWS Cloud Development Kit (AWS CDK), CDK for Kubernetes (CDK8s), CDK for Terraform (CDKtf), and other construct-based tools.

    You can find construct libraries published by the community, AWS, and cloud service providers that solve for your use case: monitoring, containers, serverless, databases, utilities, deployment, websites, security, compliance, networking, artificial intelligence (AI), cloud service integrations, and more. Each library includes documentation, an API reference, and code samples in TypeScript, Python, Java, and .NET. You can also find installation instructions for each programming language, a dependency list, download counts, licensing information, and helpful links.

    To get started, see the following list of resources:

  • https://constructs.dev
  • Blog post 
  • What is a construct library? 
  • Construct Hub FAQ 
  • Construct Hub Contribute Page 
  • Getting started with the AWS CDK 
  • Getting started with the CDKtf 
  • Getting started with the CDK8s 
    » Introducing AWS re:Post, a new, community-driven, questions-and-answers service

    Posted On: Dec 2, 2021

    Amazon Web Services (AWS) announces the availability of AWS re:Post (re:Post), a new, community-driven, questions-and-answers service to help AWS customers remove technical roadblocks, accelerate innovation, and enhance operations. AWS re:Post enables you to ask questions about anything related to designing, building, deploying, and operating workloads on AWS, and to get answers from community experts, including AWS customers, Partners, and employees.

    AWS re:Post replaces AWS Forums and introduces new ways to improve the accuracy of answers provided, as well as the likelihood of receiving an answer from the community. AWS re:Post automatically connects your question with subject matter experts, and is also integrated with AWS Support. Customers with AWS Premium Support subscriptions receive responses from AWS employees for questions that are not answered by the community.

    AWS re:Post is part of the AWS Free Tier and is available to anyone with an AWS account at https://repost.aws.

    » Introducing AWS Amplify Studio

    Posted On: Dec 2, 2021

    AWS Amplify announces AWS Amplify Studio, a visual development environment that offers frontend developers new features (public preview) to accelerate UI development with minimal coding, while integrating Amplify’s powerful backend configuration and management capabilities. Amplify Studio automatically translates designs made in Figma to human-readable React UI component code. Within Amplify Studio, developers can visually connect the UI components to app backend data. For configuring and managing backends, Amplify Admin UI’s existing capabilities will be part of Amplify Studio going forward, providing a unified interface to enable developers to build full-stack apps faster. Learn more.

    Developers can now use Amplify Studio to set up a backend, create UI components, and connect the two, all in one place. Amplify Studio contains all of Admin UI’s existing backend creation and management capabilities, simplifying setup and management of app backend infrastructure such as database tables, user authentication, and file storage, without requiring cloud expertise. To accelerate UI development, Amplify Studio offers developers a React UI library with dozens of components such as newsfeeds, contact forms, and e-commerce cards. All UI components are fully customizable within Figma, giving designers complete control over the visual styling of components within tooling they are familiar with. Developers can import component customizations from Figma into Amplify Studio and use the component editor to visually connect the UI components to data from the app backend. Amplify Studio exports all frontend and backend artifacts (UI components, backend infrastructure) to human-readable code, empowering developers to fully customize the application design and behavior using familiar programming concepts: JavaScript for application code, and the Amplify CLI and AWS CDK for extending backend infrastructure.

    AWS Amplify Studio’s UI development capabilities are currently in public preview. All backend configuration and management capabilities are generally available in 17 AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), and Middle East (Bahrain).

    To get started, check out our launch blog.

    » AWS SDK for Swift (Developer Preview)

    Posted On: Dec 2, 2021

    We’re happy to announce that the AWS SDK for Swift is now in developer preview. The AWS SDK for Swift has been designed from the ground up to provide idiomatic support for Swift’s concise syntax and multi-platform capabilities, and includes extensions that take advantage of Swift’s new concurrency features. This initial release supports iOS, macOS, and Linux, with support for other platforms such as tvOS, watchOS, Catalyst, and Windows coming in the future.

    The AWS SDK for Swift enables developers to build a wide variety of applications in the Swift language with AWS services. In this release, we provide support for 268 services, including S3, DynamoDB, and Lambda, to name a few. Constructing clients and requests and invoking calls to AWS can be done with async/await syntax using Swift 5.5+.

    Since this is a preview release, we are providing this SDK for evaluation purposes. We expect our public APIs to change somewhat before a general availability launch as we gather customer feedback and learn what is most important to Swift developers.

    To get started with the AWS SDK for Swift, take a look at our Getting Started Guide and Product Detail Page. Learn where the project is headed on our Roadmap, or provide feedback and see source code at our GitHub project page.

    » AWS SDK for Kotlin (Developer Preview)

    Posted On: Dec 2, 2021

    We’re pleased to announce that the AWS SDK for Kotlin is now in developer preview. The AWS SDK for Kotlin allows developers to interact with AWS services using idiomatic Kotlin, including native coroutine support for concurrent usage.

    The AWS SDK for Kotlin enables developers to build a wide variety of applications in the Kotlin language with AWS services. In this release, we provide support for 284 services including S3, DynamoDB, and Lambda to name a few. The SDK provides automatic configuration when running in environments such as EC2, ECS and Lambda, built-in retry support, and a wide variety of authentication mechanisms to meet customer needs.

    Since this is a preview release, we are providing this SDK for early access and evaluation purposes only. It is possible for our public APIs to change somewhat before the general availability release as we gather more customer feedback and learn what is most important to Kotlin developers.

    To get started with the AWS SDK for Kotlin, visit our Getting Started Guide and Product Detail Page. You can find all our releases on Maven, learn where the project is headed on our Roadmap, and provide feedback and see source code at our GitHub project page.

    » Announcing a simplified FreeRTOS out-of-box AWS IoT connectivity experience

    Posted On: Dec 1, 2021

    Today, we are excited to announce a new and simplified out-of-box AWS IoT connectivity experience that can be implemented on two partner-provided FreeRTOS Reference Integration boards: the STM32L4+ and the ESP32-C3. 

    IoT developers can now connect their devices to AWS IoT Core in minutes. The process requires neither a cloud account nor lengthy registration and configuration steps. After unwrapping and powering up the evaluation board, visit the FreeRTOS Quick Connect page and, with only a few clicks, download the demo application and start sending sensor data to the cloud, where it is immediately visualized via the new graphical interface. After successfully connecting the device through this first experience, you will be guided through simple project customization steps, adding new sensor inputs, and eventually on to mastering complete IoT device development with FreeRTOS.

    To get started, select a partner-provided board from the supported FreeRTOS reference integrations and begin experimenting and developing new AWS IoT applications. Reach out to us on the FreeRTOS forums to leave comments or to request to extend the experience to additional FreeRTOS qualified boards!

    » Announcing Amazon DevOps Guru for RDS, an ML-powered capability that automatically detects and diagnoses performance and operational issues within Amazon Aurora

    Posted On: Dec 1, 2021

    Amazon DevOps Guru for RDS is a new Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes rather than days. Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for all Amazon RDS engines and dozens of other resource types. DevOps Guru for RDS expands upon the existing capabilities of DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries. When an issue occurs, DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue.

    DevOps Guru for RDS is available for Amazon Aurora MySQL and PostgreSQL–Compatible Editions in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo). 

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Amazon Aurora product page.

    Amazon DevOps Guru is an ML-powered service that makes it easy to improve an application’s operational performance and availability. By analyzing application metrics, logs, events, and traces, DevOps Guru identifies behaviors that deviate from normal operating patterns and creates an insight that alerts developers with issue details. When possible, DevOps Guru also provides proposed remediation steps via Amazon Simple Notification Service (SNS) and partner integrations like Atlassian Opsgenie and PagerDuty. To learn more, visit the DevOps Guru product and documentation pages or post a question to the Amazon DevOps Guru forum.
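
    The core idea of flagging metric behavior that deviates from normal operating patterns can be illustrated with a simple standard-deviation threshold. This toy Python sketch stands in for, and is far simpler than, the ML models DevOps Guru actually uses; the latency figures are made up:

```python
from statistics import mean, stdev

def anomalies(baseline, observed, threshold=3.0):
    """Flag observed values more than `threshold` standard deviations
    away from the mean of the learned baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) > threshold * sigma]

# Hypothetical DB latency samples (ms): one spike deviates from the pattern.
baseline = [12, 11, 13, 12, 14, 11, 12, 13]
print(anomalies(baseline, [12, 13, 95, 11]))  # → [95]
```

    A real detector also accounts for seasonality and trend, which a fixed threshold like this cannot.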

    To get started, simply go to the Amazon RDS console and turn on Amazon RDS Performance Insights. Once Performance Insights is enabled, go to the Amazon DevOps Guru console to enable DevOps Guru for your Amazon Aurora resources, other supported resources, or your entire account.

    To learn more about DevOps Guru for RDS, see “Working with anomalies in DevOps Guru for RDS" in the Amazon DevOps Guru User Guide.

    » Amazon SageMaker Studio now enables interactive data preparation and machine learning at scale within a single universal notebook through built-in integration with Amazon EMR

    Posted On: Dec 1, 2021

    Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). It provides a single, web-based visual interface where you can perform all ML development steps required to prepare data, as well as to build, train, and deploy models. We recently introduced the ability to visually browse and connect to Amazon EMR clusters right from a SageMaker Studio notebook. Starting today, you can monitor and debug your Apache Spark jobs running on EMR right from SageMaker Studio notebooks with just a click. Additionally, you can now discover, connect to, create, terminate, and manage EMR clusters directly from SageMaker Studio. This built-in integration with EMR enables you to do interactive data preparation and machine learning at petabyte scale right within a single universal SageMaker Studio notebook.

    Analyzing, transforming, and preparing large amounts of data is a foundational step of any data science and ML workflow. Data workers such as data scientists and data engineers leverage Apache Spark, Hive, and Presto running on EMR for fast data preparation. Until today, these data workers could easily connect to EMR clusters from Studio notebooks in the same account. However, they had to set up complex security rules and web proxies to connect across accounts or to monitor and debug their Apache Spark jobs running on EMR. Furthermore, when these data workers needed to create EMR clusters tailored to their specific workloads, they had to either request their administrator to create them or had to switch to using other tools and use detailed technical knowledge of network, compute, and cluster configuration to create clusters by themselves. This process was not only challenging and disruptive to their workflow but also distracted them from focusing on their data preparation tasks. Consequently, although uneconomical, many customers kept persistent clusters running in anticipation of incoming workload regardless of active usage.

    Starting today, data workers can easily discover and connect to EMR clusters in single account and cross account configurations directly from SageMaker Studio. Further, data workers can now have one-click access to Apache Spark UI to monitor and debug Apache Spark jobs running on EMR right from SageMaker Studio Notebooks, greatly simplifying their debugging workflow. Customers can also use AWS Service Catalog to define and roll out pre-configured templates to selected data workers to enable them to create EMR clusters right from SageMaker Studio. Customers can fully control the organizational, security, compute and networking guardrails when data workers use these templates. Data workers can visually browse through a set of templates made available to them, customize them for their specific workloads, create EMR clusters on-demand and terminate them with just a few clicks right from SageMaker Studio. Customers can use these features to simplify their data preparation workflow and more optimally use EMR clusters for interactive workloads from SageMaker Studio.

    These features are generally available at no additional charge in the following AWS Regions: US East (N. Virginia and Ohio), US West (N. California and Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (Paris), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (São Paulo). To learn more, see this blog post and the SageMaker Studio Notebooks user guide.

    » AWS announces AWS DeepRacer Student, offering free model training, learning content, and a global autonomous racing competition exclusively for students

    Posted On: Dec 1, 2021

    AWS DeepRacer Student Presented by Intel is a new service for students enrolled in high schools and colleges globally. AWS DeepRacer Student builds on the success of the award-winning AWS DeepRacer service, which educates aspiring developers on artificial intelligence and machine learning (AI/ML) while removing barriers to entry faced by students. AWS DeepRacer Student provides an all-in-one solution with free learning modules, model training, and competition.

    Starting today, students can sign up and access more than 20 hours of foundational AI/ML content developed by AWS experts. Students with no prior ML experience can learn the fundamentals with easy-to-understand, self-paced content that takes them from first-time developer to the top of the AWS DeepRacer leaderboard. For the first time, the AWS DeepRacer service is available free for students, providing 10 hours of monthly training and 5 gigabytes of model storage. Students can train a reinforcement learning (RL) model in a simulated 3D environment using the AWS DeepRacer service. Once a model is trained, students can submit it to the new AWS DeepRacer Student League pre-season. Starting in March 2022, they can participate monthly to climb the leaderboard and win scholarships, prizes, and glory. Get started today at deepracerstudent.com.

    » AWS Announces the AWS AI & ML Scholarship Program in collaboration with Intel and Udacity to help bring diversity to the future of the AI and ML workforce

    Posted On: Dec 1, 2021

    The AWS Artificial Intelligence (AI) and Machine Learning (ML) Scholarship program, in collaboration with Intel and Udacity, provides students who self-identify as underserved or underrepresented in tech with educational content, career mentorship programs, and 2,500 scholarships annually, as part of a commitment to a more diverse future AI and ML workforce.

    The AWS AI & ML Scholarship aims to help underrepresented and underserved high school and college students learn foundational ML concepts and prepare them for careers in artificial intelligence and machine learning. Delivered in collaboration with Intel and supported by the talent transformation platform Udacity, the AWS AI & ML Scholarship program allows students from around the world to access dozens of hours of free training modules and tutorials on the basics of ML and its real-world applications. Students can use AWS DeepRacer to turn theory into hands-on action by learning how to train ML models to power a virtual race car. Students who successfully complete the educational modules by passing knowledge-check quizzes and meet certain AWS DeepRacer lap-time performance targets will be eligible to apply for one of 2,000 AI Programming with Python Udacity Nanodegree program scholarships. Five hundred of the top-performing students who receive the highest scores in the first Udacity Nanodegree program will have the chance to earn a second, more advanced Udacity Nanodegree curated specifically for AWS AI & ML Scholarship recipients, as well as access to mentorship opportunities with tenured Amazon and Intel technology experts for career insights and advice.

    To learn more about the AWS AI & ML Scholarship, visit awsaimlscholarship.com.

    » Introducing Amazon SageMaker Inference Recommender

    Posted On: Dec 1, 2021

    Amazon SageMaker Inference Recommender helps you choose the best available compute instance and configuration to deploy machine learning models for optimal inference performance and cost.

    Selecting a compute instance with the best price performance for deploying machine learning models is a complicated, iterative process that can take weeks of experimentation. First, you need to choose the right ML instance type out of over 70 options based on the resource requirements of your models and the size of the input data. Next, you need to optimize the model for the selected instance type. Lastly, you need to provision and manage infrastructure to run load tests and tune cloud configuration for optimal performance and cost. All this can delay model deployment and time to market.

    Amazon SageMaker Inference Recommender automatically selects the right compute instance type, instance count, container parameters, and model optimizations for inference to maximize performance and minimize cost. You can use SageMaker Inference Recommender from SageMaker Studio, the AWS Command Line Interface (CLI), or the AWS SDK, and within minutes, get recommendations to deploy your ML model. You can then deploy your model to one of the recommended instances or run a fully managed load test on a set of instance types you choose without worrying about testing infrastructure. You can review the results of the load test in SageMaker Studio and evaluate the tradeoffs between latency, throughput, and cost to select the most optimal deployment configuration.
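
    The tradeoff evaluation described above amounts to picking the cheapest configuration that still meets your latency target. A minimal Python sketch of that selection step follows; the instance-type names are real SageMaker types, but the latency and cost figures are hypothetical:

```python
# Hypothetical load-test results: (instance type, p99 latency ms, cost $/hr).
results = [
    ("ml.c5.xlarge",   45, 0.238),
    ("ml.m5.xlarge",   60, 0.269),
    ("ml.g4dn.xlarge", 18, 0.736),
    ("ml.inf1.xlarge", 22, 0.297),
]

def cheapest_within_slo(results, max_latency_ms):
    """Return the lowest-cost config whose p99 latency meets the SLO,
    or None when no tested configuration satisfies it."""
    eligible = [r for r in results if r[1] <= max_latency_ms]
    return min(eligible, key=lambda r: r[2], default=None)

print(cheapest_within_slo(results, max_latency_ms=30))
# → ('ml.inf1.xlarge', 22, 0.297)
```

    Loosening the SLO to 50 ms would instead select ml.c5.xlarge, the cheapest option overall; the service automates producing numbers like these and evaluating the tradeoff for you.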

    Amazon SageMaker Inference Recommender is generally available in all regions where SageMaker is available except the AWS China regions. To learn more, see the SageMaker model deployment webpage and the SageMaker Inference Recommender documentation.

    » AWS Transit Gateway introduces intra-region peering for simplified cloud operations and network connectivity

    Posted On: Dec 1, 2021

    Starting today, AWS Transit Gateway supports intra-region peering, giving you the ability to establish peering connections between multiple Transit Gateways in the same AWS Region. With this change, different units in your organization can deploy their own Transit Gateways and easily interconnect them, resulting in less administrative overhead and greater autonomy of operation.

    Transit Gateway enables you to connect thousands of Amazon Virtual Private Clouds (VPCs) and your on-premises networks using a single gateway. Until now, you could only establish peering connections between Transit Gateways in different AWS Regions. With this launch, you can simplify routing and interconnectivity between networks that are serviced by separate Transit Gateways in the same AWS Region. The ability to natively peer Transit Gateways in the same Region eliminates the need to create and manage transit VPCs, simplifies route-table management, and reduces the probability of configuration errors. Using intra-region peering, you can build flexible network topologies and easily integrate your network with a third-party or partner-managed network in the same Region.

    To get started, create a peering attachment on your Transit Gateway and specify the Transit Gateway you want to peer with in the same AWS Region. The peer Transit Gateway can be in your account or a different AWS account. By creating static routes in Transit Gateway route tables, you can route traffic between the VPCs and connections attached to each Transit Gateway. This feature is available through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the AWS SDK.
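
    Conceptually, each Transit Gateway route table maps CIDR blocks to attachments, and the static route toward the peering attachment is what carries cross-gateway traffic. A small Python sketch of that lookup (the attachment names and CIDRs are hypothetical):

```python
import ipaddress

# Hypothetical route table for one of two peered Transit Gateways.
# Each entry maps a CIDR to an attachment (a VPC or the peering attachment).
tgw_a_routes = {
    "10.0.0.0/16": "vpc-attach-a1",
    "10.1.0.0/16": "peering-to-tgw-b",  # static route toward the peer
}

def lookup(routes, dest_ip):
    """Longest-prefix-match route lookup, as a route table would perform."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if dest in ipaddress.ip_network(cidr)]
    if not matches:
        return None
    best = max(matches, key=lambda n: n.prefixlen)
    return routes[str(best)]

print(lookup(tgw_a_routes, "10.1.4.7"))  # → peering-to-tgw-b
print(lookup(tgw_a_routes, "10.0.9.1"))  # → vpc-attach-a1
```

    Traffic for 10.1.0.0/16 is handed to the peering attachment, where the peer gateway's own route table takes over.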

    Transit Gateway intra-region peering is available in all AWS commercial and AWS GovCloud (US) Regions. Pricing for intra-region peering is the same as that for inter-region peering. For additional information, visit the AWS Transit Gateway product page, the documentation, the pricing page, and the blog post.

    » Amazon SQS Enhances Dead-letter Queue Management Experience For Standard Queues

    Posted On: Dec 1, 2021

    Amazon Simple Queue Service (SQS) announces support of dead-letter queue (DLQ) redrive to source queue, giving you better control over the life cycle of unconsumed messages. Dead-letter queues are an existing feature of Amazon SQS that allows customers to store messages that applications could not successfully consume. You can now efficiently redrive messages from your dead-letter queue to your source queue on the Amazon SQS console. DLQ redrive augments the dead-letter queue management experience for developers and enables them to build applications with the confidence that they can examine their unconsumed messages, recover from errors in their code, and reprocess messages in their dead-letter queues.

    Amazon SQS is a fully managed message queuing service that makes it easier to decouple and scale microservices, distributed systems, and serverless applications. With dead-letter queue redrive to source queue, you can simplify and enhance your error-handling workflows for standard queues. Often, messages end up in your dead-letter queue when consumer applications stop processing messages as expected. Once your consumer application recovers, you can now more easily redrive the messages from the dead-letter queue to the source queue. This redrive support is available on the Amazon SQS console, making it easier for you to inspect a sample of the messages and move them to source queues with a click of a button.

    To get started, navigate to the queue details page on the Amazon SQS console for a queue that you have defined as a dead-letter queue on an existing source queue. Then navigate to the DLQ Redrive workflow page to inspect the messages and select your redrive destination to be the source queue, and click "redrive messages." Support for dead-letter queue redrive to source queue is available in all AWS Commercial Regions, except the China regions. To learn more, please visit the Amazon SQS documentation.
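    Before this console workflow existed, a common workaround was a hand-rolled redrive loop: receive from the DLQ, re-send to the source queue, then delete from the DLQ. The boto3-style sketch below shows the idea (the client is injected, and message attributes are ignored for brevity; the console redrive handles this for you):

```python
def redrive_dlq(sqs, dlq_url, source_url, batch_size=10):
    """Move messages from a dead-letter queue back to its source queue.

    Receives up to `batch_size` messages at a time from the DLQ,
    re-sends each body to the source queue, and deletes the original.
    Returns the number of messages moved.
    """
    moved = 0
    while True:
        resp = sqs.receive_message(
            QueueUrl=dlq_url,
            MaxNumberOfMessages=batch_size,
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            return moved
        for msg in messages:
            # Re-send first, then delete: a crash between the two calls
            # duplicates a message rather than losing it.
            sqs.send_message(QueueUrl=source_url, MessageBody=msg["Body"])
            sqs.delete_message(QueueUrl=dlq_url, ReceiptHandle=msg["ReceiptHandle"])
            moved += 1
```

    A real implementation would also copy message attributes and handle partial-batch failures, which is exactly the undifferentiated work the console's redrive feature removes.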

    » Amazon Virtual Private Cloud (VPC) announces Network Access Analyzer to help you easily identify unintended network access

    Posted On: Dec 1, 2021

    Amazon VPC Network Access Analyzer is a new feature that enables you to identify unintended network access to your resources on AWS. Using Network Access Analyzer, you can verify whether network access for your Virtual Private Cloud (VPC) resources meets your security and compliance guidelines. With Network Access Analyzer, you can assess and identify improvements to your cloud security posture. Additionally, Network Access Analyzer makes it easier for you to demonstrate that your network meets certain regulatory requirements.

    As a part of the AWS shared responsibility model, customers often need to verify that their networks on AWS are built with appropriate controls to block any unintended network access. Examples include, “Databases should never be accessible from the Internet”, “Application servers can only send TCP traffic on port 443 to a trusted on-premises IP range,” and “Production VPCs should not be accessible from Development VPCs.” Network Access Analyzer allows you to capture such requirements in simple and precise specifications. Using automated reasoning, Network Access Analyzer identifies network paths in your AWS environment that do not meet the requirements you defined. You can specify the sources and destinations for your network access requirements in terms of IP address ranges, port ranges, traffic protocols, AWS resource IDs, AWS Resource Groups, and resource types such as Internet Gateways or NAT Gateways. This way, you can easily govern network access across your AWS environment, independent of how your network is configured.
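    As an example of such a specification, a requirement like "databases should never be accessible from the Internet" might be captured as a Network Access Scope along these lines. The field names follow our reading of the EC2 Network Access Analyzer API, and the resource group ID and port are placeholders, so treat this as a sketch rather than a copy-paste template:

```python
# Hypothetical Network Access Scope: match any path from an internet
# gateway to a database resource group on the MySQL port. Any findings
# returned for this scope are paths that violate the requirement.
db_isolation_scope = {
    "MatchPaths": [
        {
            "Source": {
                "ResourceStatement": {"ResourceTypes": ["AWS::EC2::InternetGateway"]}
            },
            "Destination": {
                "ResourceStatement": {"Resources": ["rg-database-servers"]},
                "PacketHeaderStatement": {"DestinationPorts": ["3306"]},
            },
        }
    ],
    # No carve-outs: every matching path should surface as a finding.
    "ExcludePaths": [],
}
```

    An empty analysis result for this scope is evidence the requirement holds, independent of how the underlying network is configured.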

    To get started, visit the AWS Management Console and evaluate your network using one of the Amazon-created Network Access Scopes in Network Access Analyzer. You can also define your own Network Access Scopes and analyze your network using the AWS CLI, AWS SDK, or AWS Management Console.

    Amazon VPC Network Access Analyzer is generally available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), and the Middle East (Bahrain).

    To learn more, visit the Amazon VPC documentation and blog post for Network Access Analyzer. To view Network Access Analyzer prices, visit Amazon VPC Pricing.

    » Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access table class, which helps you reduce your DynamoDB costs by up to 60 percent

    Posted On: Dec 1, 2021

    Amazon DynamoDB announces the new Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, which helps you reduce your DynamoDB costs by up to 60 percent for tables that store infrequently accessed data. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, old social media posts, e-commerce order history, and past gaming achievements.

    Now, you can optimize the costs of your DynamoDB workloads based on your tables’ storage requirements and data access patterns. The new DynamoDB Standard-IA table class offers 60 percent lower storage costs than the existing DynamoDB Standard tables, making it the most cost-effective option for tables with storage as the dominant table cost. The existing DynamoDB Standard table class offers 20 percent lower throughput costs than the DynamoDB Standard-IA table class. DynamoDB Standard remains your default table class and the most cost-effective option for the wide variety of tables that store frequently accessed data with throughput as the dominant table cost. You can switch between DynamoDB Standard and DynamoDB Standard-IA table classes with no impact on table performance, durability, or availability and without changing your application code. For more information about using DynamoDB Standard-IA, see the DynamoDB Developer Guide.
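    The trade-off between the two table classes can be sketched with a toy cost model. The storage rate below is illustrative, not a quoted AWS price; only the 60%/20% relationships come from the announcement (Standard throughput being 20 percent cheaper means Standard-IA throughput costs 1/0.8 = 1.25x the Standard figure):

```python
def monthly_cost(storage_gb, std_throughput_dollars, table_class="STANDARD"):
    """Rough monthly table cost under illustrative pricing.

    `std_throughput_dollars` is what the table's throughput would cost
    under the Standard class; the IA branch scales it up by 1.25x.
    """
    STORAGE_RATE = 0.25  # $/GB-month for Standard -- placeholder, not a real quote
    if table_class == "STANDARD":
        return storage_gb * STORAGE_RATE + std_throughput_dollars
    # Standard-IA: 60% lower storage, 25% higher throughput cost.
    return storage_gb * STORAGE_RATE * 0.40 + std_throughput_dollars * 1.25
```

    With these ratios, a 1,000 GB table with $50 of Standard throughput is cheaper on Standard-IA, while a 10 GB table with $500 of throughput stays cheaper on Standard, matching the "dominant table cost" guidance above.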

    DynamoDB Standard-IA is now available in all AWS Regions, except China Regions and AWS GovCloud (US) Regions. To learn more about DynamoDB Standard-IA pricing, see the DynamoDB pricing page. To get started using this new table class, go to the DynamoDB console, or use the AWS CLI or AWS SDK.

    » Introducing Amazon SageMaker Training Compiler to accelerate DL model training by up to 50%

    Posted On: Dec 1, 2021

    Today, we are excited to announce Amazon SageMaker Training Compiler, a new feature of SageMaker that can accelerate the training of deep learning (DL) models by up to 50% through more efficient use of GPU instances.

    State-of-the-art DL models for natural language processing (NLP) and computer vision (CV) tasks are complex multi-layered neural networks with billions of parameters that can take thousands of GPU hours to train. Even fine-tuning these models can sometimes take days, incurring high costs and slowing down innovation. To accelerate this process, you can now use SageMaker Training Compiler with minimal changes to your existing training script. SageMaker Training Compiler is integrated into the latest versions of PyTorch and TensorFlow in SageMaker and works under the hood of these frameworks so that no other changes to your workflow are required when it is enabled.

    SageMaker Training Compiler accelerates training by converting DL models from their high-level language representation to hardware-optimized instructions. More specifically, SageMaker Training Compiler compilation makes graph-level optimizations (operator fusion, memory planning, and algebraic simplification), data flow-level optimizations (layout transformation, common sub-expression elimination), and back end optimizations (memory latency hiding, loop oriented optimizations) to more efficiently use hardware resources and, as a result, train the model faster. The returned model artifact from this accelerated training process is the same as it would be without these training optimizations enabled.
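    As a concrete illustration of one of these graph-level optimizations, operator fusion combines adjacent elementwise operators into a single pass so that no intermediate buffer is materialized. This is a plain-Python sketch of the idea, not SageMaker code:

```python
def unfused(x):
    """Two separate operators: each pass materializes a full intermediate list."""
    y = [v * 2.0 for v in x]       # operator 1: scale
    return [v + 1.0 for v in y]    # operator 2: shift (reads the intermediate)

def fused(x):
    """Fused version: one pass, no intermediate buffer.

    A compiler emits a single kernel computing scale+shift per element,
    halving memory traffic while producing identical results.
    """
    return [v * 2.0 + 1.0 for v in x]
```

    The key property, mirrored in the announcement, is that the optimized version returns exactly the same result as the unoptimized one; only the hardware utilization changes.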

    SageMaker Training Compiler is tested on most popular NLP DL models from Hugging Face including bert-base-cased, bert-base-uncased, distilbert-base-uncased, distilbert-base-uncased-finetuned-sst-2-english, gpt2, roberta-base, roberta-large, bert-base-chinese, and xlm-roberta-base. These models train up to 50% faster with SageMaker Training Compiler.

    SageMaker Training Compiler is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and is provided at no additional charge to SageMaker customers. For more details, please visit the SageMaker Model Training web page and the SageMaker Training Compiler technical documentation.

    » Introducing AWS DMS Fleet Advisor for automated discovery and analysis of database and analytics workloads (Preview)

    Posted On: Dec 1, 2021

    AWS Database Migration Service (AWS DMS) is a service that helps you migrate databases to AWS quickly and securely. AWS DMS Fleet Advisor is a new feature of AWS DMS that allows you to quickly build a database and analytics migration plan by automating the discovery and analysis of your fleet. AWS DMS Fleet Advisor is intended for users looking to migrate a large number of database and analytic servers to AWS.

    AWS DMS Fleet Advisor collects and analyzes your database schemas and objects, including information on feature metadata, schema objects, and usage metrics. It then allows you to build a customized migration plan by determining the complexity of migrating your source databases to target services in AWS. AWS DMS Fleet Advisor makes it easy to plan your database and analytics migration to AWS without requiring expensive migration experts or third-party tools.

    To get started with AWS DMS Fleet Advisor, visit the AWS DMS Studio console and follow the instructions to install the AWS DMS data collector to discover your database and analytics server fleet. AWS DMS Fleet Advisor is free to use; learn more on the AWS DMS pricing page. You can learn more about Fleet Advisor by reading our documentation.

    » Amazon SageMaker now supports cross-account lineage tracking and multi-hop lineage querying

    Posted On: Dec 1, 2021

    Amazon SageMaker now offers enhancements to the machine learning (ML) lineage tracking capability that enables customers to track and query the lineage of artifacts such as data, features, and models across an ML workflow. Now, customers can retrieve the end-to-end lineage graph spanning the entire workflow from data preparation to model deployment through a single query. This feature eliminates undifferentiated heavy lifting needed to retrieve lineage information one workflow step at a time and manually stitch them all together. Customers can also retrieve lineage information for segments of the workflow by defining a step as the focal point and querying the lineage of the steps that are upstream or downstream of that focal point. For instance, customers can define a model as the focal entity and retrieve the location of the raw data set from which features were extracted to train that model.
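    Conceptually, a focal-point query is a traversal over the lineage graph. The sketch below uses a hypothetical in-memory edge list rather than the SageMaker lineage APIs, but shows how a single multi-hop query replaces stitching results together one step at a time:

```python
from collections import defaultdict

def lineage(edges, focal, direction="upstream"):
    """Return every artifact reachable from `focal` in one direction.

    `edges` are (producer, consumer) pairs such as ("raw-data", "features").
    "upstream" walks toward inputs, "downstream" toward outputs.
    """
    graph = defaultdict(set)
    for src, dst in edges:
        if direction == "upstream":
            graph[dst].add(src)   # consumer -> its producers
        else:
            graph[src].add(dst)   # producer -> its consumers
    seen, stack = set(), [focal]
    while stack:
        node = stack.pop()
        for nxt in graph[node] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return seen
```

    With the model as the focal entity, an upstream query returns the features and the raw data they came from in one call, which is the behavior described above.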

    The new feature also enables tracking lineage information of workflow steps that span multiple AWS accounts. Creating multiple accounts for various personas (data scientists, ML engineers, etc.) to organize all the resources of your organization is a common DevOps practice. To enable this feature, customers can share lineage resources across AWS accounts using AWS RAM. AWS RAM helps reduce operational overhead and provides visibility of shared resources. Once configured, customers can use the lineage querying API to track relationships between artifacts spanning multiple AWS accounts.

    The ML lineage information can be used to improve model governance, reproduce previous versions of artifacts, or troubleshoot workflows more efficiently. To get started, train a new ML model using SageMaker Studio or the SDK and use the lineage querying APIs to track lineage information. To learn more, visit our documentation pages on cross-account, graph-based lineage tracking and the lineage querying API.

    » AWS DeepRacer announces the 2022 Season of DeepRacer League including physical races in the Summit Circuit, LIVE virtual head-to-head racing and a dedicated Student League

    Posted On: Dec 1, 2021

    Today AWS announces a new structure for the 2022 Season of the award-winning AWS DeepRacer League. The AWS DeepRacer League is the world’s first global autonomous racing league, including an autonomous 1/18th scale race car driven by reinforcement learning and a 3D racing simulator where developers can get hands-on experience with Machine Learning (ML). 2022 introduces more opportunities to race LIVE for everyone via the return of physical racing on the Summit Circuit and a new LIVE head-to-head format in the Virtual Circuit, plus a new student-only division dubbed the AWS DeepRacer Student League.

    Starting March 1st, the AWS DeepRacer League Summit Circuit will offer attendees at 2022 AWS Summits around the globe a chance to race an AWS DeepRacer car on a physical track. Winners at each AWS Summit will qualify for regional playoffs to secure one of fifteen opportunities to win a trip to the Championship Cup at re:Invent 2022 in Las Vegas. Additionally, the Virtual Circuit adds even more challenges to make racing fun, with finalists now racing head-to-head LIVE during monthly finales livestreamed on twitch.tv/aws. Finally, students will compete with peers on a level playing field in the AWS DeepRacer Student League. Each racer will receive 10 hours of model training and 5 GB of storage per month to train time-trial models.

    Visit www.awsdeepracerleague.com to start training a model and test your skills with the AWS DeepRacer League pre-season races.

    » Introducing Amazon Lex Automated Chatbot Designer (Preview)

    Posted On: Dec 1, 2021

    We are excited to announce the preview of automatic chatbot designer in Amazon Lex, enabling developers to automatically design chatbots from conversation transcripts in hours rather than weeks. Amazon Lex helps you build, test, and deploy chatbots and virtual assistants on contact center services (such as Amazon Connect), websites, and messaging channels (such as Facebook Messenger). The automatic chatbot designer enhances the usability of Amazon Lex by automating conversational design, minimizing developer effort and reducing the time it takes to design a chatbot.

    The automated chatbot designer uses machine learning (ML) to analyze conversation transcripts and semantically cluster them around the most common intents and related information. For example, to create a bot design for insurance transactions it can analyze thousands of lines of transcripts and identify intents such as ‘file a new claim’ from phrases such as ‘my basement is flooded, I need to start a new claim’. The intents in the design are well-defined to remove any overlap between them so the bot can understand the user better for an efficient interaction. And finally, the design includes information, such as policy ID or claim type, needed to fulfill all identified intents. Developers can iterate on the design, add chatbot prompts and responses, integrate business logic to fulfill user requests, and then build, test, and deploy the chatbot in Amazon Lex. The automated chatbot designer automates a significant portion of the bot design, minimizing effort and reducing the overall time it takes to design a chatbot.

    The automated chatbot designer is available for free during the preview in English (US) language in all the AWS regions where Amazon Lex V2 operates. Amazon Connect customers using Contact Lens can directly use conversation transcripts in their original format as an input to the automated chatbot designer. Conversation transcripts from other transcription services may require a simple conversion. To learn more, visit Amazon Lex automated chatbot designer and Amazon Lex documentation page.

    » Announcing Amazon RDS Custom for SQL Server

    Posted On: Dec 1, 2021

    Amazon Relational Database Service (Amazon RDS) Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is now available for the SQL Server database engine. Amazon RDS Custom for SQL Server automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install drivers, and enable native features to meet the dependent application's requirements.

    With Amazon RDS Custom for SQL Server, customers can customize their database server host and operating system and change database software settings to support third-party applications that require privileged access. Through the time-saving benefits of a managed service, Amazon RDS Custom for SQL Server frees valuable resources to focus on more important, business-impacting, strategic activities. By automating backups and other operational tasks, customers can rest easy knowing their data is safe and ready to be recovered if needed. And finally, Amazon RDS Custom's cloud-based scalability will help our customers' database infrastructures keep pace as their businesses grow.

    Get started using the AWS CLI or AWS Management Console today! Amazon RDS Custom for SQL Server is generally available in the following regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore).

    To learn more about Amazon RDS Custom:

  • Read the AWS News blog post 
  • Visit the Amazon RDS Custom website
  • See Amazon RDS Custom pricing page for full pricing details and regional availability
  • See Amazon RDS Custom User Guide
    » Amazon SageMaker Studio Lab (currently in preview), a free, no-configuration ML service

    Posted On: Dec 1, 2021

    Amazon SageMaker Studio Lab is a free, no-configuration service that allows developers, academics, and data scientists to learn and experiment with machine learning.

    Using Amazon SageMaker Studio Lab, customers can focus on experimenting with the data science aspects of machine learning without having to set up or configure any infrastructure. Based on the open source JupyterLab web application, customers have a completely open environment that enables them to leverage any framework, such as PyTorch, TensorFlow, MXNet, or Hugging Face, and libraries such as scikit-learn, NumPy, and pandas. Studio Lab has auto-save capabilities that automatically save customers’ user sessions, so they can pick up where they left off in their next session. Another benefit of SageMaker Studio Lab is its integration with GitHub, enabling customers to open, view, edit, and run any notebook, as well as its integration with Git, an open source distributed version control system.

    To get started, visit the SageMaker Studio Lab website and request an account. It only takes a valid email address to register. After that, you can quickly start learning and experimenting with Jupyter notebooks. We also make it easy to get started with assets like AWS Machine Learning University, Dive into Deep Learning, and Hugging Face notebooks.

    » Introducing Amazon SageMaker Serverless Inference (preview)

    Posted On: Dec 1, 2021

    Amazon SageMaker Serverless Inference is a new inference option that enables you to easily deploy machine learning models for inference without having to configure or manage the underlying infrastructure. Simply select the serverless option when deploying your machine learning model, and Amazon SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests. With SageMaker Serverless Inference, you pay only for the duration of running the inference code and the amount of data processed, not for idle time.

    Amazon SageMaker Serverless Inference is ideal for applications with intermittent or unpredictable traffic. For example, a chatbot service used by a payroll processing company experiences an increase in inquiries at the end of the month, while traffic is intermittent for the rest of the month. Provisioning instances for the entire month in such scenarios is not cost effective, as you end up paying for idle periods. Amazon SageMaker Serverless Inference helps address these types of use cases by automatically scaling compute capacity based on the volume of inference requests, without the need for you to forecast traffic demand up front or manage scaling policies. Additionally, you pay only for the compute time to run your inference code (billed in milliseconds) and the amount of data processed, making it a cost-effective option for workloads with intermittent traffic. With the introduction of SageMaker Serverless Inference, SageMaker now offers four inference options, expanding the deployment choices available to a wide range of use cases. The other three options are: SageMaker Real-Time Inference for workloads with low latency requirements in the order of milliseconds, SageMaker Batch Transform to run predictions on batches of data, and SageMaker Asynchronous Inference for inferences with large payload sizes or requiring long processing times. To learn more, visit the Amazon SageMaker deployment webpage.
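    The pricing difference can be sketched with a back-of-envelope model. All rates below are hypothetical placeholders, not actual SageMaker prices; the point is only the shape of the comparison for intermittent traffic:

```python
HOURS_PER_MONTH = 730

def provisioned_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """An always-on endpoint bills for every hour, busy or idle."""
    return hourly_rate * hours

def serverless_cost(requests, avg_ms_per_request, rate_per_ms):
    """Serverless bills only compute time, metered in milliseconds."""
    return requests * avg_ms_per_request * rate_per_ms
```

    For a workload with, say, 100,000 requests a month at 50 ms each, the serverless bill depends only on those numbers, while the provisioned bill is fixed regardless of how little traffic arrives; the gap widens the more intermittent the traffic is.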

    You can easily create a SageMaker Inference endpoint from the console, the AWS SDKs, or the AWS Command Line Interface (CLI). For detailed steps on how to get started, see the SageMaker Serverless Inference documentation, which also includes a sample notebook. For pricing information, see the SageMaker pricing page. SageMaker Serverless Inference is available in preview in US East (Northern Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney).

    » AWS Managed Microsoft AD helps optimize scaling decisions with directory metrics in Amazon CloudWatch

    Posted On: Dec 1, 2021

    AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) now helps optimize scaling decisions for improved performance and resilience with Amazon CloudWatch. Starting today, AWS Managed Microsoft AD provides domain controller and directory utilization metrics in Amazon CloudWatch for new and existing directories automatically. Analyzing these utilization metrics helps you quantify your average and peak load times to identify the need for additional domain controllers. With this, you can define the number of domain controllers to meet your performance, resilience, and cost requirements.

    AWS Managed Microsoft AD provides utilization metrics in Amazon CloudWatch, such as CPU, memory, disk, and network utilization of domain controllers, as well as AD-specific metrics, such as DNS requests and directory reads/writes. Based on the insights provided by these utilization metrics, you can decide to deploy additional domain controllers during peak load periods to improve performance and resilience, or reduce the number of domain controllers off-peak for cost-effective operations. Additionally, using Amazon CloudWatch alarms, you can automate the deployment of additional domain controllers.

    For step-by-step instructions on how to configure CloudWatch alarms, guidance on which counters and thresholds to use, and sample automation for adding domain controllers, please see the blog post How to Automate AWS Managed Microsoft AD Scaling Based on Utilization Metrics.

    This new feature is available in all AWS Regions where AWS Managed Microsoft AD is available (excluding AWS China Regions). To learn more, see the AWS Directory Service Administration Guide and Amazon CloudWatch metrics documentation.

    » Amazon Virtual Private Cloud (VPC) announces IP Address Manager (IPAM) to help simplify IP address management on AWS

    Posted On: Dec 1, 2021

    Amazon VPC IP Address Manager (IPAM) is a new feature that makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. With IPAM's automated workflows, network administrators can more efficiently manage IP addresses.

    VPC IPAM allows you to easily organize IP addresses based on your routing and security needs and set simple business rules to govern IP assignments. Using IPAM, you can automate IP address assignment to VPCs, eliminating the need to use spreadsheet-based or homegrown IP planning applications, which can be hard to maintain and time-consuming. This automation helps remove delays in on-boarding new applications or growing existing applications, by enabling you to assign IP addresses to your VPCs in seconds. IPAM also automatically tracks critical IP address information, including its AWS account, Amazon VPC, and routing and security domain, eliminating the need to manually track or do bookkeeping for IP addresses. This reduces the possibility of errors during IP assignments. You can also set alarms for IP address utilization and gain visibility to proactively fix any IP address issues. IPAM automatically retains your IP address monitoring data (up to a maximum of three years). You can use this historical data to do retrospective analysis and audits for your network security and routing policies.
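    The bookkeeping IPAM automates can be pictured with Python's standard ipaddress module: carving non-overlapping, VPC-sized CIDRs out of a top-level pool. This toy pool is purely illustrative and omits everything that makes IPAM useful in practice (accounts, Regions, business rules, and history tracking):

```python
import ipaddress

class SimplePool:
    """Toy IP address pool: hands out non-overlapping CIDRs from one block.

    This is the spreadsheet-style bookkeeping IPAM replaces -- manual
    tracking of which VPC owns which CIDR, with no overlap allowed.
    """
    def __init__(self, cidr, vpc_prefix=16):
        # Pre-split the top-level block into VPC-sized chunks.
        self.free = list(ipaddress.ip_network(cidr).subnets(new_prefix=vpc_prefix))
        self.assigned = {}

    def allocate(self, vpc_name):
        if not self.free:
            raise RuntimeError("pool exhausted")
        block = self.free.pop(0)
        self.assigned[vpc_name] = block
        return str(block)
```

    Each allocation is guaranteed not to overlap any earlier one, which is the property that eliminates the class of errors the announcement describes.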

    IPAM provides a unified operational view and enables you to manage IP addresses across AWS Regions and your accounts, using AWS Resource Access Manager and AWS Organizations. As the unified operational view can serve as your single source of truth for information related to any IP address, IPAM helps you more efficiently perform routine IP address management activities such as tracking IP utilization, troubleshooting, and auditing.

    Amazon VPC IPAM is generally available in the following AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US West (Northern California), US East (N. Virginia), US East (Ohio), and US West (Oregon).

    To learn more about IPAM, you can view the IPAM documentation or read the announcement blog. To view IPAM prices, visit Amazon VPC Pricing.

    » AWS Shield Advanced introduces automatic application-layer DDoS mitigation

    Posted On: Dec 1, 2021

    AWS Shield Advanced now automatically protects web applications by blocking application layer (Layer 7) DDoS events with no manual intervention needed by you or the AWS Shield Response Team (SRT). When you protect your resources with AWS Shield Advanced and enable automatic application layer DDoS mitigation, Shield Advanced will identify patterns associated with layer 7 DDoS events and isolate this anomalous traffic by automatically creating AWS WAF rules in your web access control lists (ACLs). These rules can be implemented in count mode to observe how they will impact resource traffic and then deployed in block mode. These capabilities enable you to quickly respond to and mitigate DDoS events that threaten the availability of your applications.

    With automatic application layer DDoS mitigation, AWS Shield Advanced will create custom WAF rules in a Shield-managed rule group to mitigate layer 7 DDoS events affecting your protected resources. Shield Advanced evaluates each WAF rule it creates against normal traffic into your resources to minimize false positives and deploys them in either count or block mode. The action taken by these WAF rules can be changed to count or block mode at any time. You can also view detection, mitigation, and top contributor metrics associated with application layer DDoS events for further investigation or to assess the effect of any mitigations Shield creates.

    Automatic application layer DDoS mitigation is available to AWS Shield Advanced subscribers at no additional cost. To view the list of AWS Regions where AWS Shield Advanced is currently available, see the AWS Region Table. For more details, visit the AWS Shield Advanced Developer Guide.

    » Introducing AWS Direct Connect SiteLink

    Posted On: Dec 1, 2021

    Today AWS announced the general release of AWS Direct Connect SiteLink. SiteLink makes it easy to create private network connections between your on-premises locations, such as offices and data centers, by connecting them to Direct Connect locations throughout the world.

    The Direct Connect service helps you create private connections between your on-premises networks and your AWS resources by connecting your network directly to Direct Connect locations. Using the new SiteLink feature, you can now link your on-premises locations to Direct Connect and send data between them over the shortest path between Direct Connect locations. With over 100 Direct Connect locations around the world, you can create networks that span multiple continents. Once you’ve connected your on-premises locations to Direct Connect, you can enable (or disable) SiteLink for that location in minutes. Direct Connect SiteLink uses elastic, pay-as-you-go pricing with no long-term commitments. 
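    The "shortest path between Direct Connect locations" idea can be illustrated with a classic shortest-path computation over a toy latency graph. The locations and latencies below are made up for the sketch; AWS computes the actual path on its own backbone:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph of locations.

    `graph` maps a location to {neighbor: latency_ms}. Returns the total
    latency and the location sequence of the cheapest route.
    """
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            path = [node]
            while node in prev:          # walk predecessors back to start
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue                     # stale heap entry
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return float("inf"), []
```

    The point of the sketch is only that traffic between two sites takes the cheapest route through the mesh of locations, rather than hairpinning through a transit VPC in a Region.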

    Direct Connect SiteLink is available in all commercial AWS Regions except China. Learn more by visiting the Direct Connect product overview page and API documentation. For a more in-depth technical overview, read our Introducing AWS Direct Connect SiteLink blog post. 

    » Amazon SageMaker Pipelines now integrates with SageMaker Model Monitor and SageMaker Clarify

    Posted On: Dec 1, 2021

    Amazon SageMaker Pipelines, a fully managed service that enables you to create, automate, and manage end-to-end machine learning (ML) workflows, now supports integration with Amazon SageMaker Model Monitor and Amazon SageMaker Clarify. With these integrations, you can easily incorporate model quality and bias detection in your ML workflow. The increased automation can help reduce your operational burden in building and managing ML models.

    SageMaker Model Monitor and SageMaker Clarify enable you to continuously monitor the quality and bias metrics of ML models in production so that you can set up alerts or trigger retraining when the model or the data quality drifts. To set up model monitoring, you must establish a baseline metric for data and model quality that SageMaker Model Monitor can then use to measure drift. With the new integration, you can automatically capture the baselines for model and data quality as part of the model building pipeline, eliminating the need to calculate these metrics outside the model building workflow. You can also use QualityCheckStep and ClarifyCheckStep in SageMaker Pipelines to stop the model training pipeline if any deviation from previously known baseline metrics is detected. Once computed, you can also store and view the calculated quality and bias metrics along with the baselines in the Model Registry.

    This integration is also available as a template in SageMaker Projects so that you can automatically schedule model monitoring and bias detection jobs leveraging the baseline metrics that are recorded in the Model Registry. To get started, create a new SageMaker Project from SageMaker Studio or the command-line interface using the new model-monitoring template. To learn more, visit our documentation pages on check steps in SageMaker Pipelines, metrics and baselines in the Model Registry, SageMaker Model Monitor, and the model-monitoring CI/CD template.

    » Introducing Amazon SageMaker Ground Truth Plus: Create high-quality training datasets without having to build labeling applications or manage the labeling workforce on your own

    Posted On: Dec 1, 2021

    Today, we are excited to announce the general availability of Amazon SageMaker Ground Truth Plus, a new turnkey data labeling service that enables you to create high-quality training datasets quickly and reduces costs by up to 40%.

    To train a machine learning (ML) model, data scientists need large, high-quality, labeled datasets. As ML adoption grows, labeling needs increase, forcing data scientists to spend weeks building data labeling workflows and managing a data labeling workforce. This slows down innovation and increases cost. To keep data scientists focused on building, training, and deploying ML models, organizations typically task other in-house teams, such as data operations managers and program managers, with producing high-quality training datasets. However, these teams typically don't have the specialized skills required to deliver high-quality training datasets, which affects ML results. What if you could rely on a turnkey service that enables you to create high-quality training datasets at scale without consuming your in-house resources? Enter Amazon SageMaker Ground Truth Plus.

    Amazon SageMaker Ground Truth Plus makes it easy for data scientists as well as business managers, such as data operations managers and program managers, to create high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements and Ground Truth Plus sets up and manages your data labeling workflow, based on these requirements. From there, an expert workforce that is trained on a variety of ML tasks performs data labeling. You don't even need deep ML expertise or knowledge of workflow design and quality management to use Ground Truth Plus.

    Ground Truth Plus uses ML techniques, including active-learning, pre-labeling, and machine validation. This increases the quality of the output dataset and decreases the data labeling costs. Ground Truth Plus provides transparency into your data labeling operations and quality management. With it, you can review the progress of training datasets across multiple projects, track project metrics, such as daily throughput, inspect labels for quality, and provide feedback on the labeled data. Ground Truth Plus can be used for a variety of use cases, including computer vision, natural language processing, and speech recognition.

    Amazon SageMaker Ground Truth Plus is generally available today in the US East (N. Virginia) AWS Region. To learn more about Amazon SageMaker Ground Truth Plus, read the blog post, refer to Ground Truth Plus documentation, and visit the SageMaker data labeling webpage or visit the Ground Truth Plus console to get started.

    » Amazon Textract announces specialized support for automated processing of identity documents

    Posted On: Dec 1, 2021

    Amazon Textract, a machine learning service that makes it easy to extract text and data from any document or image, now offers specialized support to extract data from identity documents, such as U.S. driver's licenses and U.S. passports. You can extract implied fields like name and address, as well as explicit fields like Date of Birth, Date of Issue, Date of Expiry, ID #, and ID Type, in the form of key-value pairs. Until today, OCR-based solutions were limited: they could not accurately extract all the required fields from documents with rich background images, could not recognize names and addresses along with the fields associated with them (e.g., a Washington state ID lists the home address under the key "8"), and could not handle ID designs and formats that vary by country or state.

    Starting today, you can quickly and accurately extract information from IDs (U.S. driver's licenses and passports) that have different templates or formats. The Analyze ID API returns two categories of data types:
  • Key-value pairs available on IDs, such as Date of Birth, Date of Issue, ID #, and Restrictions
  • Implied fields on the document that may not have explicit keys associated with them, such as Name, Address, and Issued By

    Additionally, Analyze ID standardizes the key names within the response. For example, if your driver's license says LIC# (license number) and your passport says Passport No, the Analyze ID response will return the standardized key "Document ID" along with the raw key (e.g., LIC#). This standardization lets customers easily combine information across many IDs that use different terms for the same concept.
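    The standardized key names make the response straightforward to flatten. A hedged sketch, assuming the documented Analyze ID response shape (IdentityDocuments containing IdentityDocumentFields with Type and ValueDetection entries); the sample data below is illustrative, not a real API response:

```python
# Flatten an AnalyzeID-style response into one dict of standardized
# key names -> detected values per identity document.
def flatten_analyze_id(response):
    documents = []
    for doc in response.get("IdentityDocuments", []):
        fields = {}
        for field in doc.get("IdentityDocumentFields", []):
            key = field["Type"]["Text"]  # standardized key, e.g. DOCUMENT_NUMBER
            fields[key] = field["ValueDetection"]["Text"]
        documents.append(fields)
    return documents

# Illustrative sample response:
sample_response = {
    "IdentityDocuments": [{
        "IdentityDocumentFields": [
            {"Type": {"Text": "FIRST_NAME"}, "ValueDetection": {"Text": "JANE"}},
            {"Type": {"Text": "DOCUMENT_NUMBER"}, "ValueDetection": {"Text": "D1234567"}},
        ]
    }]
}
```

    With boto3, a live response comes from the Textract client's analyze_id call; the helper above then merges the standardized keys across documents.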

    To learn more about this new feature you can read a step-by-step blog to get started now or you can view the documentation. Pricing for this new feature is available on Amazon Textract’s pricing page. 

    Analyze ID will be available in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), GovCloud (US-East), GovCloud (US-West), Canada (Central), Europe (London), Europe (Paris), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) starting December 1st, 2021. To get started with Analyze ID, visit: http://aws.amazon.com/textract.

    » Amazon Kendra launches Experience Builder, Search Analytics Dashboard, and Custom Document Enrichment

    Posted On: Dec 1, 2021

    Amazon Kendra is an intelligent search service powered by machine learning. Today, we are excited to announce the launch of three new features: (1) Experience Builder to create fully functional search applications in a few clicks, (2) Search Analytics Dashboard for search insights and metrics, and (3) Custom Document Enrichment for document pre-processing and enrichment during ingestion.

    Amazon Kendra Experience Builder
    You can now deploy a fully functional and customizable search experience with Amazon Kendra in a few clicks, without any coding or ML experience. Experience Builder delivers an intuitive visual workflow to quickly build, customize, and launch your Kendra-powered search application, securely in the cloud. You can start with the ready-to-use search experience template in the builder, which can be customized by simply dragging and dropping the components you want, such as filters or sorting. You can invite others to collaborate or test your search application for feedback, and then share the project with all users when you are ready to deploy the experience. Amazon Kendra Experience Builder comes with AWS Single Sign-On (SSO) integration supporting popular identity providers such as Azure AD and Okta, delivering secure end-user SSO authentication while accessing the search experience. For more information about Amazon Kendra Experience Builder, please visit the documentation.

    Amazon Kendra Search Analytics Dashboard
    Amazon Kendra Search Analytics Dashboard allows you to better understand quality and usability metrics across your Kendra-powered search applications. The Analytics Dashboard helps administrators and content creators understand how easily end users are finding relevant search results, the quality of the search results, and gaps in the content. Amazon Kendra Search Analytics Dashboard provides a snapshot of how your users interact with your search application and how effective your search results are. The analytics data can be viewed in a visual dashboard in the console, or you can build your own dashboards by accessing the search analytics data via an API. It empowers customers to dive deep into search trends and user behavior to identify insights, and also helps to bring clarity to potential areas of improvement. For more information about Amazon Kendra Search Analytics Dashboard, please visit the documentation.
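    When pulling the analytics data via the API, the payload is tabular: a header row plus data rows. A small helper can reshape it into records for your own dashboard; the API call shown in the comments is a sketch, and its parameter values are illustrative:

```python
# Reshape a header row plus data rows (the tabular layout returned by the
# search analytics API) into a list of per-row dicts.
def rows_to_records(header, rows):
    return [dict(zip(header, row)) for row in rows]

# A live retrieval would look roughly like this (requires AWS credentials
# and an existing Kendra index; parameter values are illustrative):
#   import boto3
#   kendra = boto3.client("kendra")
#   resp = kendra.get_snapshots(IndexId="my-index-id",
#                               Interval="ONE_WEEK_AGO",
#                               MetricType="QUERIES_BY_COUNT")
#   records = rows_to_records(resp["SnapshotsDataHeader"], resp["SnapshotsData"])
```

    Each record then maps a column name to its value, which is convenient for loading into a BI tool or a pandas DataFrame.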

    Amazon Kendra Custom Document Enrichment
    With Amazon Kendra Custom Document Enrichment capabilities, you can build a custom ingestion pipeline that can pre-process documents before they get indexed into Kendra. For example, while ingesting content from a repository like SharePoint using our connectors, customers can enrich documents with additional metadata, convert scanned documents to text, classify documents, extract entities, and further transform the document using custom ETL processes. The enrichment is performed by simple rules that can be configured in the console or by invoking AWS Lambda functions. These functions can optionally call other AWS AI services such as Amazon Comprehend, Amazon Transcribe, or Amazon Textract. For more information about Amazon Kendra Custom Document Enrichment, please visit the documentation.
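    As a sketch of what such a pre-processing function might look like: the event and return shapes below (an S3 object key plus a metadata block) are assumptions for illustration only; consult the Custom Document Enrichment documentation for the exact Lambda contract.

```python
# Hypothetical pre-extraction Lambda: tag every incoming document with a
# custom attribute before Kendra indexes it. Field names here are assumed,
# not taken verbatim from the official CDE contract.
def lambda_handler(event, context):
    metadata = event.get("metadata") or {}
    attributes = metadata.get("attributes", [])
    # "ingest_source" is a hypothetical custom attribute defined on the index.
    attributes.append({"name": "ingest_source",
                       "value": {"stringValue": "sharepoint"}})
    metadata["attributes"] = attributes
    return {
        "version": "v0",
        "s3ObjectKey": event["s3ObjectKey"],  # document left in place, unchanged
        "metadata": metadata,
    }
```

    A transform like this could also rewrite the document itself (e.g., after running it through Amazon Textract) by writing a new object to S3 and returning its key instead.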

    Amazon Kendra Experience Builder, Search Analytics Dashboard, and Custom Document Enrichment features are available in the following regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Singapore), Canada (Central).

    » AWS Database Migration Service now offers a new console experience, AWS DMS Studio

    Posted On: Dec 1, 2021

    AWS Database Migration Service (AWS DMS) is pleased to announce the launch of AWS DMS Studio, a new service console that makes it easy to manage database migrations from start to finish. AWS DMS Studio accelerates and simplifies migrations by integrating tools for each phase of the migration journey, from assessment to conversion to migration. AWS DMS Studio integrates AWS DMS Fleet Advisor to inventory and analyze your database and analytics fleet, AWS Schema Conversion Tool (SCT) to convert database schemas and application code, and AWS DMS to migrate your data. At each step of the migration, AWS DMS Studio assists you by providing contextual resources such as documentation and guidance on engaging migration experts where needed.

    AWS DMS Studio will be available in the US East (N. Virginia) region. To learn more, please refer to our technical documentation.

