The contents of this page are copied directly from AWS blog sites to make them Kindle friendly. Some styles and sections from those pages have been removed so the content renders properly in the 'Article Mode' of the Kindle e-Reader browser. All contents of this page are the property of AWS.


Amazon EC2 now supports sharing Amazon Machine Images across AWS Organizations and Organizational Units

Posted On: Oct 29, 2021

You can now share your Amazon Machine Images (AMIs) with AWS Organizations and Organizational Units (OUs). Previously, you could share AMIs only with specific AWS account IDs. To share AMIs with AWS Organizations, you had to explicitly manage sharing of AMIs with AWS accounts that were added to or removed from AWS Organizations. With this new feature, you no longer have to update your AMI permissions because of organizational changes. AMI sharing will be automatically synced when organizational changes occur. This feature helps you centrally manage and govern your AMIs as you grow and scale your AWS accounts.

You can share AMIs with AWS Organizations and Organizational Units the same way you share AMIs with specific accounts, allowing any account in that organization or organizational unit to describe and launch the AMI. To share an AMI, add the organization ID or OU ID to its launch permissions using the EC2 ModifyImageAttribute API.
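As an illustrative sketch (not taken from the announcement), the new launch permission can be expressed with boto3 roughly as follows; the image ID and organization ARN are placeholders:

```python
# Sketch: share an AMI with an entire AWS Organization by adding its ARN
# to the image's launch permissions. All identifiers are placeholders.

def build_ami_share_params(image_id, org_arn=None, ou_arn=None):
    """Build ModifyImageAttribute parameters for org/OU-wide AMI sharing."""
    add = []
    if org_arn:
        add.append({"OrganizationArn": org_arn})
    if ou_arn:
        add.append({"OrganizationalUnitArn": ou_arn})
    return {"ImageId": image_id, "LaunchPermission": {"Add": add}}

params = build_ami_share_params(
    "ami-0123456789abcdef0",
    org_arn="arn:aws:organizations::123456789012:organization/o-example",
)
# With boto3, the call would then be:
#   boto3.client("ec2").modify_image_attribute(**params)
```

Removing an organization from the launch permissions works the same way, with a `"Remove"` list in place of `"Add"`.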

This capability is available through the AWS Command Line Interface and the AWS Software Development Kit (AWS SDK) in all AWS Regions except Amazon Web Services China (Beijing) Region and Amazon Web Services China (Ningxia) Region. To learn more about sharing AMIs with organizations, please refer to the documentation here.

» Amazon Connect Chat adds real-time message streaming APIs

Posted On: Oct 29, 2021

Amazon Connect Chat now provides new APIs that let you create customized experiences for your customers by subscribing to a real-time stream of chat messages. Using the new APIs, you can integrate Amazon Connect Chat with SMS solutions and third-party messaging applications (e.g., Facebook Messenger, Twitter), enable mobile push notifications, and create analytics dashboards to monitor and track chat message activity. Messages are published via Amazon Simple Notification Service (Amazon SNS) and can be set up in a couple of clicks by going to the Amazon SNS console and creating a new SNS topic.
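As a rough sketch, subscribing an ongoing chat contact to an SNS topic uses the StartContactStreaming API; the IDs and ARN below are placeholders:

```python
# Hedged sketch of Amazon Connect StartContactStreaming parameters.
# Instance ID, contact ID, and SNS topic ARN are placeholders.
import uuid

def build_streaming_params(instance_id, contact_id, sns_topic_arn):
    """Parameters to start streaming a chat's messages to an SNS topic."""
    return {
        "InstanceId": instance_id,
        "ContactId": contact_id,
        "ChatStreamingConfiguration": {"StreamingEndpointArn": sns_topic_arn},
        "ClientToken": str(uuid.uuid4()),  # idempotency token
    }

params = build_streaming_params(
    "11111111-2222-3333-4444-555555555555",
    "66666666-7777-8888-9999-000000000000",
    "arn:aws:sns:us-east-1:123456789012:chat-messages",
)
# boto3.client("connect").start_contact_streaming(**params)
```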

The message streaming APIs are available in all AWS regions where Amazon Connect is offered. There is no charge to use these APIs beyond standard per message fees and associated SNS usage. To learn more and get started, review the following resources:

  • Amazon Connect Admin documentation
  • Amazon Connect API Reference
  • Blog - Building personalized customer experiences over SMS through Amazon Connect
  • Blog - Adding digital messaging channels to your Amazon Connect contact center 

    » Amazon QLDB launches new version of QLDB Shell

    Posted On: Oct 29, 2021

    Amazon Quantum Ledger Database (QLDB) launches a new version of the QLDB Shell that is easier to install and use. QLDB customers can now download the QLDB Shell build for their operating system and begin querying a QLDB ledger without any additional installation steps or dependencies. Because the QLDB Shell is open source, expert users can also build it from source and tailor it to their own requirements. The updated QLDB Shell also provides improved query runtime statistics, introduces an optional tabular data format for query output, offers config file support for saving preferred options, and adds convenient commands for listing tables and switching ledgers, regions, and endpoints.

    Amazon QLDB Shell is a lightweight, easy-to-use command line tool that is designed to allow customers to quickly query a QLDB ledger using the Amazon PartiQL query language. The QLDB Shell is written in Rust and is open-sourced in the GitHub repository awslabs/amazon-qldb-shell.

    Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable log. Customers can use QLDB to track all application data changes, as well as maintain a complete and verifiable history of changes to data over time.

    Get started with QLDB and the new QLDB Shell today.

    » Amazon Transcribe now supports batch transcription in AWS Stockholm and Cape Town Regions

    Posted On: Oct 29, 2021

    Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications without any machine learning expertise. Starting today, Amazon Transcribe supports batch transcription in the AWS Stockholm and Cape Town Regions. 

    Amazon Transcribe enables organizations to increase the accessibility and discoverability of their audio and video content, serving a breadth of use cases. For instance, contact centers can transcribe recorded calls for downstream analysis to better understand key call drivers. Content producers and media distributors can automatically generate transcriptions for subtitles to improve content accessibility. 

    Amazon Transcribe batch transcription is now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), Africa (Cape Town), South America (São Paulo), AWS GovCloud (US-East), AWS GovCloud (US-West), the AWS China (Beijing) Region operated by Sinnet, and the AWS China (Ningxia) Region operated by NWCD. For more information, please refer to the Amazon Transcribe documentation.

    » AWS App Mesh Metric Extension is now generally available

    Posted On: Oct 29, 2021

    AWS App Mesh Metric Extension is now generally available. With the Metric Extension, customers can collect and filter aggregated App Mesh service metrics that help with debugging, simplify monitoring, and reduce usage costs. App Mesh Metric Extension is available to all customers running workloads on Amazon EC2, Amazon ECS, Amazon EKS, and self-managed Kubernetes. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure.

    The App Mesh Metric Extension improves the end-to-end experience for customers. Aggregated metrics can be automatically sent to integrated endpoints, such as Amazon CloudWatch or Datadog, based on customers’ selections. This reduces the total number of metrics sent to CloudWatch and the associated usage cost. In addition, the Metric Extension makes it simpler for customers who query metrics directly in the AWS Console.

    To learn more about App Mesh and App Mesh Metric Extension, please visit App Mesh product page. You can give feedback, report issues or review the AWS App Mesh Roadmap on GitHub.

    » AWS App2Container now supports ECS Fargate Windows

    Posted On: Oct 29, 2021

    AWS App2Container (A2C) now supports deployment of containerized Windows applications to AWS Fargate for ECS Windows containers. With this feature, users can now target AWS Fargate for ECS Windows containers as a deployment runtime, in addition to the previously supported Amazon ECS and Amazon EKS. Using App2Container, developers can take a running Windows-based .NET application or a Windows service, then analyze, containerize, and deploy it to AWS Fargate for ECS Windows containers in a few simple steps. Developers can take advantage of the auto-scaling, host management, and secured application lifecycle management offered by AWS Fargate.

    AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on-premises or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions.

    AWS Fargate for Amazon ECS Windows simplifies running Windows containers on AWS. With AWS Fargate, customers no longer need to set up and manage host instances for their applications or worry about auto-scaling those applications. Developers can focus on building applications while delegating infrastructure operational efforts such as patching, securing, scaling, and managing servers to AWS Fargate.

    To learn more, refer to App2Container technical documentation.

    » New region availability and Graviton2 support now available for Amazon GameLift

    Posted On: Oct 29, 2021

    Amazon GameLift, a fully managed dedicated game server hosting solution that deploys, operates, and scales cloud servers for multiplayer games, is now available in the AWS Asia Pacific (Osaka) Region. Game developers can now deploy instances in Osaka using GameLift multi-region fleets.

    GameLift FleetIQ now supports next-generation AWS Graviton2 processors. You can now use Graviton2-hosted game servers, based on the Arm-based processor architecture, to achieve increased performance at a lower cost when compared to the equivalent Intel-based compute options. Customers can deploy AWS Graviton2 processor-powered instances in their existing or new GameServerGroups from the GameLift Console or Command Line Interface (CLI).

    GameLift FleetIQ instances powered by AWS Graviton2 processors include the c6g, m6g, and r6g instance families. Available regions are Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), South America (São Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

    For more information, see the Amazon GameLift product page and our documentation.

    » Amazon Lightsail now supports AWS CloudFormation for instances, disks and databases

    Posted On: Oct 29, 2021

    Amazon Lightsail now supports AWS CloudFormation, allowing you to use CloudFormation templates to provision and manage application stacks composed of Lightsail instances, disks, or databases. You can also easily automate and replicate your stacks as needed. This adds a convenient new way of managing Lightsail resources, in addition to the Lightsail Console and the AWS CLI/SDK.

    With this launch, in addition to being able to provision and manage Lightsail instances, databases and disks with CloudFormation, you can also perform operations like attaching disks to instances, creating and attaching static IPs to instances, setting up auto-snapshots for instances and disks - all via the same template.
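As a hedged illustration, a minimal template along these lines might look as follows (built here as a Python dict; the resource property names follow the AWS::Lightsail::* resource types, and the blueprint/bundle IDs and names are placeholders):

```python
# Illustrative CloudFormation template provisioning a Lightsail instance
# and a disk. Values are placeholders; verify property names against the
# AWS::Lightsail::* resource documentation.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::Lightsail::Instance",
            "Properties": {
                "InstanceName": "my-web-server",
                "AvailabilityZone": "us-east-1a",
                "BlueprintId": "amazon_linux_2",
                "BundleId": "nano_2_0",
            },
        },
        "DataDisk": {
            "Type": "AWS::Lightsail::Disk",
            "Properties": {
                "DiskName": "my-data-disk",
                "AvailabilityZone": "us-east-1a",
                "SizeInGb": 32,
            },
        },
    },
}

print(json.dumps(template, indent=2))
```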

    This AWS CloudFormation support for Lightsail resources is available in all Amazon Lightsail regions. To learn more on how to use CloudFormation templates to manage Lightsail resources, click here.

    » Improved celebrity recognition is now available for Amazon Rekognition Video

    Posted On: Oct 29, 2021

    Amazon Rekognition is a machine learning (ML) based service that can analyze images and videos to detect objects, people, faces, text, scenes, activities, and inappropriate content. Celebrity recognition makes it easy for customers to automatically recognize tens of thousands of well-known personalities in images and videos using ML, significantly reducing the repetitive manual effort required to tag produced media content and make it readily searchable. On August 26, 2021, we launched an update for Rekognition Image that gave customers higher accuracy (fewer false detections and rejections) and increased coverage for global celebrities. In addition, customers got three new attributes for each celebrity recognized: presentation of gender, expression, and smile.

    Starting today, these improvements are available for Amazon Rekognition Video in all supported regions. Broadcast and video on demand (VOD) media customers can use celebrity recognition to better organize, search, and monetize their content catalogs at scale.

    To get started, please refer to our documentation and download the latest AWS SDK.
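For orientation, the video flow is asynchronous: StartCelebrityRecognition on a video in S3 returns a JobId, which you poll with GetCelebrityRecognition. A minimal sketch (bucket and key are placeholders):

```python
# Sketch of the asynchronous Rekognition Video celebrity flow.
# The S3 bucket and object key are placeholders.

start_params = {
    "Video": {"S3Object": {"Bucket": "my-media-bucket", "Name": "episode-01.mp4"}},
}
# rek = boto3.client("rekognition")
# job_id = rek.start_celebrity_recognition(**start_params)["JobId"]
# result = rek.get_celebrity_recognition(JobId=job_id)  # poll until SUCCEEDED
```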

    » Introducing Amazon EC2 C6i instances

    Posted On: Oct 28, 2021

    Amazon Web Services (AWS) announces the general availability of compute-optimized Amazon EC2 C6i instances. C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offer up to 15% better compute price performance than C5 instances for a wide variety of workloads, and provide always-on memory encryption using Intel Total Memory Encryption (TME). Designed for compute-intensive workloads, C6i instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are an ideal fit for compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

    To meet customer demands for increased scalability, C6i instances provide a new instance size (c6i.32xlarge) with 128 vCPUs and 256 GiB of memory, 33% more than the largest C5 instance. They also provide up to 9% higher memory bandwidth per vCPU compared to C5 instances. C6i instances also give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to Amazon Elastic Block Store, twice that of C5 instances. Customers can use Elastic Fabric Adapter on the 32xlarge size, which enables low-latency and highly scalable inter-node communication. For optimal networking performance on these new instances, an Elastic Network Adapter (ENA) driver update may be required. For more information on the optimal ENA driver for C6i, see this article.

    These instances are generally available today in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions. C6i instances are available in 9 sizes, with 2, 4, 8, 16, 32, 48, 64, 96, and 128 vCPUs. Customers can purchase the new instances as On-Demand or Spot Instances, as Reserved Instances, or via Savings Plans. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the C6i instances page.

    » AWS Marketplace announces Purchase Order Management for SaaS contracts

    Posted On: Oct 28, 2021

    Today, AWS Marketplace announced Purchase Order Management for SaaS contracts, which allows customers to add purchase order numbers to their AWS invoices for SaaS contracts purchased in AWS Marketplace. Previously, customers could only add one purchase order number across all their AWS Marketplace transactions. Now, customers can add purchase order numbers specific to public and private offers for SaaS contracts when transacting in AWS Marketplace. Purchase order numbers entered in AWS Marketplace appear on corresponding invoices, easing software spend allocation to internal budgets.

    Customers can now add their purchase order number in the text box provided on the subscription page for SaaS contracts on AWS Marketplace, including private offers with flexible payment schedules. Customers can see the purchase order number entry in the Purchase Order Management system within AWS Billing and on the AWS invoice generated for the SaaS transaction. Customers now have more granular control over budget allocation for AWS Marketplace transactions, reducing the operational overhead associated with budgeting and spend chargeback calculations when purchasing from AWS Marketplace.

    To learn more about Purchase Order Management for SaaS contracts, visit AWS Documentation.

    » Babelfish for Aurora PostgreSQL is now generally available

    Posted On: Oct 28, 2021

    Babelfish for Aurora PostgreSQL is a new capability for Amazon Aurora PostgreSQL-Compatible Edition that enables it to understand queries from applications written for Microsoft SQL Server. With Babelfish, applications currently running on SQL Server can run directly on Aurora PostgreSQL with a fraction of the work required, compared to a traditional migration. Babelfish understands the SQL Server wire-protocol (TDS) and T-SQL, the Microsoft SQL Server query language, so you don't have to switch database drivers or re-write all of your application queries.

    You can connect to Babelfish by changing your SQL Server-based applications to point to the Babelfish TDS port on an Aurora PostgreSQL cluster, after turning Babelfish on. Babelfish includes support for stored procedures, savepoints, static cursors, nested transactions, the SQL_VARIANT data type and much more.
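To make the "point at the TDS port" step concrete, here is a minimal sketch assuming the default TDS port of 1433 (the endpoint, user, and database names are placeholders; the port is configurable on the cluster):

```python
# Sketch: an existing SQL Server application only needs its connection
# settings re-pointed at the Aurora cluster endpoint's Babelfish TDS port.
# Endpoint and credentials are placeholders.

def babelfish_connection_settings(cluster_endpoint, user, database="master"):
    """Connection settings an unchanged SQL Server (TDS) driver would use."""
    return {
        "server": cluster_endpoint,
        "port": 1433,  # assumed default Babelfish TDS listener port
        "user": user,
        "database": database,
    }

cfg = babelfish_connection_settings(
    "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com", "app_user"
)
# e.g. with a TDS driver such as pymssql:
#   pymssql.connect(server=cfg["server"], port=cfg["port"], user=cfg["user"], ...)
```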

    Babelfish for Amazon Aurora PostgreSQL is available in all regions supported by Aurora PostgreSQL, including AWS GovCloud (US) Regions.

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Aurora product page.  To get started with Aurora PostgreSQL-Compatible Edition, take a look at our getting started page. Or, to learn more about Babelfish, visit the Babelfish for Aurora PostgreSQL product page.

    » Amazon Aurora Supports PostgreSQL 13.4, 12.8, 11.13, and 10.18

    Posted On: Oct 28, 2021

    Following the announcement of updates to the PostgreSQL database by the open source community, we have updated Amazon Aurora PostgreSQL-Compatible Edition to support PostgreSQL 13.4, 12.8, 11.13, and 10.18. These releases contain bug fixes and improvements by the PostgreSQL community. As a reminder, Amazon Aurora PostgreSQL 9.6 will reach end of life on January 31, 2022.

    You can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. Doing so means that your DB cluster is automatically upgraded after AWS tests and approves the new version. For more details, see Automatic Minor Version Upgrades for PostgreSQL. Please review the Aurora documentation to learn more. These minor versions are available in all commercial regions. For the full feature parity list, head to our feature parity page, and to see all the regions that support Amazon Aurora, head to our region page.
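The two upgrade paths can be sketched as boto3 parameters, roughly as follows (cluster and instance identifiers are placeholders):

```python
# Hedged sketch of the manual and automatic minor-version upgrade paths.
# Identifiers are placeholders.

# Manual: modify the cluster to a target minor version.
manual_upgrade = {
    "DBClusterIdentifier": "my-aurora-pg-cluster",
    "EngineVersion": "13.4",
    "ApplyImmediately": True,
}

# Automatic: opt an instance into auto minor version upgrades.
auto_upgrade = {
    "DBInstanceIdentifier": "my-aurora-pg-instance",
    "AutoMinorVersionUpgrade": True,
}

# rds = boto3.client("rds")
# rds.modify_db_cluster(**manual_upgrade)
# rds.modify_db_instance(**auto_upgrade)
```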

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Amazon Aurora product page. To get started with Amazon Aurora, take a look at our getting started page.

    » Amazon Aurora PostgreSQL Supports PostGIS 3.1

    Posted On: Oct 28, 2021

    Amazon Aurora PostgreSQL-Compatible Edition now supports PostGIS major version 3.1. This new version of PostGIS is available on PostgreSQL versions 13.4, 12.8, 11.13, 10.18, and higher.

    PostGIS allows you to store, query, and analyze geospatial data within a PostgreSQL database. PostGIS 3.1 significantly improves performance; spatial joins, for example, now run up to [N]X faster on PostgreSQL 13. As an example, you could use a spatial join to count the number of people living in an area defined by the reception of mobile phones from radio towers. PostGIS 3.1 is the new default version on PostgreSQL 10 and higher, starting with the new minor versions. However, you can still create older versions of PostGIS in your PostgreSQL database, e.g., if you require version stability.
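The radio-tower example above corresponds to a spatial join along these lines (table and column names are hypothetical):

```python
# Illustrative PostGIS spatial join for the example above: count people
# whose location falls inside each tower's reception polygon.
# Table/column names are hypothetical.

spatial_join_sql = """
SELECT t.tower_id, count(p.person_id) AS people_covered
FROM towers AS t
JOIN people AS p
  ON ST_Contains(t.coverage_geom, p.location_geom)
GROUP BY t.tower_id;
"""
```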

    You can view a list of all PostgreSQL extensions supported by database version on Amazon Aurora in the AWS User Guide. For the full feature parity list, head to our feature parity page, and to see all the regions that support Amazon Aurora, head to our region page.

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Amazon Aurora product page. To get started with Amazon Aurora, take a look at our getting started page.

    Don't see an extension you'd like to use? Let us know through the AWS Forum or through AWS Enterprise Support.

    » Amazon EKS Managed Node Groups adds native support for Bottlerocket

    Posted On: Oct 28, 2021

    Amazon Elastic Kubernetes Service (EKS) now adds native support for Bottlerocket in EKS managed node groups in all commercial AWS regions. Most EKS customers today deploy their applications on worker nodes backed by operating systems that are designed for a variety of use cases. AWS launched Bottlerocket, a minimal, Linux-based open source operating system that is purpose built and optimized to run containers. When combined, EKS managed node groups and Bottlerocket give customers a simple way to provision and manage compute capacity using the latest best practices for running containers in production. Bottlerocket is now included as a built-in AMI choice for managed node groups, enabling customers to provision container optimized worker nodes with a single click.

    EKS customers can easily migrate their applications to run on Bottlerocket-based worker nodes and benefit from an improved node security posture. By moving to worker nodes that include only the minimal set of packages needed to run containers, customers benefit from a reduced attack surface, decreased node provisioning time, and improved efficiency, as more node resources are allocated to applications. This improves cluster utilization and scale. Managed node groups provide notifications when newer EKS Bottlerocket AMIs are available, enabling customers to more easily update nodes to the latest versions of the software.

    To get started, customers can follow their existing workflows for provisioning managed node groups and simply select Bottlerocket as the AMI type under the compute configuration options. Bottlerocket is available at no additional cost and is fully supported by AWS; you only pay for the EC2 instances that you connect to your cluster. Please refer to our blog and the EKS documentation to get started.

    » Amazon Connect is now available in the Asia Pacific (Seoul) AWS Region

    Posted On: Oct 28, 2021

    Amazon Connect is now available in the Asia Pacific (Seoul) AWS Region, increasing the number of AWS Regions where Amazon Connect is available to ten. You can now claim South Korean toll-free and local telephone numbers. 

    Amazon Connect is an easy-to-use, self-service, cloud contact center service you can use to deliver more engaging customer service experiences. You can create an Amazon Connect cloud contact center with just a few clicks in the Amazon Connect console, allowing your agents to take calls and chats within minutes. For a list of regions where Amazon Connect is available, see the AWS Region table. To learn more about Amazon Connect, please visit the Amazon Connect website.

    » Announcing availability of the Babelfish for PostgreSQL open source project

    Posted On: Oct 28, 2021

    The Babelfish for PostgreSQL open source project is now available. Babelfish for PostgreSQL provides the capability for PostgreSQL to understand queries from applications written for Microsoft SQL Server. With Babelfish, applications currently running on SQL Server can now run directly on PostgreSQL with a fraction of the work required, compared to a traditional migration. Babelfish understands the SQL Server wire-protocol and T-SQL, the Microsoft SQL Server query language, so you don't have to switch database drivers or re-write all of your application queries.

    Anyone can access the source code for Babelfish from the open source project page. This allows users to leverage Babelfish on their own PostgreSQL servers. Babelfish includes support for stored procedures, savepoints, static cursors, nested transactions, the variant data type and much more.

    To get started, please navigate to the Babelfish open source project for additional information and obtain the source code.

    » Amazon Chime SDK now supports phone call recording

    Posted On: Oct 28, 2021

    The Amazon Chime SDK is a service that makes it easy for developers to add real-time audio, video, screen sharing, and messaging capabilities to their applications. With the Public Switched Telephone Network (PSTN) audio APIs, developers can build customized telephony applications like voice menus, click-to-call, and call routing using the agility and operational simplicity of a serverless AWS Lambda function. Starting today, developers can record PSTN and Session Initiation Protocol (SIP) voice calls and store the recordings in the Amazon Simple Storage Service (Amazon S3) bucket of their choice using the call recording feature. 

    Combining call recording with other AWS services, developers can easily build machine learning enabled solutions for post-call processing and analytics. For example, you can use Amazon Transcribe Call Analytics to get turn-by-turn transcripts, help redact sensitive information (for example names, addresses, and credit card information), and derive insights such as customer sentiment and issues from recorded calls. The results can be easily stored in your Amazon S3 bucket.

    This feature is generally available, and you can use it in the US East (N. Virginia) and US West (Oregon) regions.

    To learn more about the on-demand call recording feature and get started, refer to the following resources:

  • Amazon Chime SDK
  • Amazon Chime SDK PSTN Audio Developers Guide
  • Amazon Chime SDK API Reference
  • Amazon Transcribe Call Analytics
  • Amazon Transcribe Call Analytics Developer Guide
  • Blog - Building an on-demand phone call recording solution with Amazon Chime SDK

    » AWS Fargate now supports Amazon ECS Windows containers

    Posted On: Oct 28, 2021

    Today, AWS announces the availability of AWS Fargate for Amazon ECS Windows containers. This feature simplifies the adoption of modern container technology for Amazon ECS customers by making it even easier to run their Windows containers on AWS.

    Customers running Windows applications spend a lot of time and effort securing and scaling virtual machines to run their containerized workloads. Provisioning and managing the infrastructure components and configurations can slow down the productivity of developer and infrastructure teams. With today’s launch, customers no longer need to set up automatic scaling groups or manage host instances for their application. In addition to providing task-level isolation, Fargate handles the necessary patching and updating to help provide a secure compute environment. Customers can reduce the time spent on operational efforts, and instead focus on delivering and developing innovative applications.

    With Fargate, billing is at a per second granularity with a 15-minute minimum, and customers only pay for the amount of vCPU and memory resources their containerized application requests. Customers can also select a Compute Savings Plan, which allows them to save money in exchange for making a one- or three-year commitment to a consistent amount of compute usage. For additional details, visit the Fargate pricing page.
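A worked example of the billing rule above, with placeholder per-hour rates (these are not published Fargate Windows prices):

```python
# Worked example: per-second Fargate billing with a 15-minute minimum.
# The vCPU/GiB hourly rates below are placeholders, not real prices.

def fargate_task_cost(vcpu, gib, seconds, vcpu_hr=0.09, gib_hr=0.01):
    billable = max(seconds, 15 * 60)  # 15-minute minimum applies
    hours = billable / 3600.0
    return (vcpu * vcpu_hr + gib * gib_hr) * hours

# A 1 vCPU / 2 GiB task that ran 5 minutes is still billed for 15 minutes:
short = fargate_task_cost(1, 2, 5 * 60)   # 0.11/hr * 0.25 h = 0.0275
# The same task running a full hour is billed for exactly one hour:
hour = fargate_task_cost(1, 2, 3600)      # 0.11/hr * 1 h    = 0.11
```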

    Fargate support for Amazon ECS Windows containers is available in all AWS Regions, excluding AWS China Regions and AWS GovCloud (US) Regions. It supports Windows Server 2019 Long-Term Servicing Channel (LTSC) release on Fargate Windows Platform Version 1.0.0 or later. Visit our public documentation and read our Running Windows Containers with Amazon ECS on AWS Fargate blog post to learn more about using this feature from API, AWS Command Line Interface (CLI), AWS SDKs, or the AWS Copilot CLI.
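For reference, the key addition for Windows tasks is the `runtimePlatform` block in the ECS task definition. A hedged sketch (family, image, and sizes are placeholders):

```python
# Sketch of an ECS task definition targeting Fargate Windows via the
# runtimePlatform block. Family, image, and sizes are placeholders.

task_definition = {
    "family": "windows-web-app",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "1024",
    "memory": "2048",
    "networkMode": "awsvpc",
    "runtimePlatform": {
        "operatingSystemFamily": "WINDOWS_SERVER_2019_CORE",
        "cpuArchitecture": "X86_64",
    },
    "containerDefinitions": [
        {
            "name": "web",
            "image": "mcr.microsoft.com/windows/servercore/iis",
            "essential": True,
        }
    ],
}
# boto3.client("ecs").register_task_definition(**task_definition)
```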

    » AWS IoT SiteWise announces support for using the same asset models across different hierarchies

    Posted On: Oct 28, 2021

    AWS IoT SiteWise now allows customers to use the same asset model under different asset model hierarchies. With this feature, customers can simplify the asset modeling experience by reducing the number of models required to build a virtual representation of their industrial operations. Previously, for the same type of machine deployed in different production sites, users had to create one asset model for each production site model. Now, users can create one asset model and reuse it in all of their production site models.

    You will be able to reuse asset models when defining the hierarchy through the AWS IoT SiteWise console, the AWS SDK, or the AWS CLI. For more information on how to create asset model hierarchies, see defining relationships between asset models (hierarchies) in our User Guide.

    As part of this launch, we have also increased the service quota for the number of asset hierarchy definitions per asset from 10 to 20, and the number of asset models per hierarchy tree from 20 to 50. Visit the AWS IoT SiteWise quotas page for more information.

    AWS IoT SiteWise is a managed service to collect, store, organize and monitor data from industrial equipment at scale. To learn more, please visit the AWS IoT SiteWise website or the developer guide.

    » AWS Global Accelerator adds support for two new Amazon CloudWatch metrics

    Posted On: Oct 28, 2021

    Starting today, you can use two new Amazon CloudWatch metrics to monitor your AWS Global Accelerator resources. You can now monitor the total number of healthy endpoints and the total number of unhealthy endpoints served by your accelerator, including EC2 instances, Application Load Balancers, Network Load Balancers and Elastic IP addresses. With the two new metrics, you can create CloudWatch alarms to more quickly and easily detect issues with your Global Accelerator endpoints.

    These new metrics complement existing CloudWatch metrics for Global Accelerator, such as the total number of incoming and outgoing bytes processed by your accelerator and the total number of new TCP or UDP flows between clients and your application endpoints. You can set alarms and specify automated actions with your Amazon CloudWatch metrics based on predefined thresholds. You can also build metric dashboards and view metrics for your accelerators, listeners, and endpoint groups directly in the Amazon CloudWatch console. To learn more about Amazon CloudWatch metrics for Global Accelerator, visit the Global Accelerator documentation.
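As a sketch of the alarm setup described above, the parameters for CloudWatch PutMetricAlarm might look roughly like this. The namespace, metric, and dimension names below are assumptions based on the announcement, not copied from it; verify them against the Global Accelerator documentation:

```python
# Hedged sketch: alarm when any accelerator endpoint becomes unhealthy.
# Metric/dimension names are assumptions; the accelerator ID is a placeholder.

alarm = {
    "AlarmName": "ga-unhealthy-endpoints",
    "Namespace": "AWS/GlobalAccelerator",
    "MetricName": "UnhealthyEndpointCount",
    "Dimensions": [{"Name": "Accelerator", "Value": "my-accelerator-id"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```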

    » Amazon EC2 R5b instances are now available in 2 additional regions

    Posted On: Oct 28, 2021

    Amazon EC2 R5b instances, which provide the fastest block storage performance on EC2, are now available in AWS Europe (Ireland and London) Regions. R5b instances are powered by the AWS Nitro System and provide 3x higher EBS-Optimized performance compared to R5 instances. These instances offer up to 60 Gbps of EBS bandwidth and 260,000 I/O operations per second (IOPS), enabling customers to lift and shift memory intensive applications to AWS.

    R5b instances make 60 Gbps of EBS bandwidth available to storage performance-bound workloads without requiring customers to use custom drivers or recompile applications. Customers can take advantage of this improved EBS performance to accelerate data transfer to and from Amazon EBS, reducing data ingestion time for applications and speeding up delivery of results. With R5b on EBS, customers have access to high-performance, scalable, durable, and highly available block storage. R5b instances are ideal for large relational database workloads such as Oracle Database, SQL Server, PostgreSQL, and MySQL, used to run applications like commerce platforms, ERP systems, and health record systems. R5b instances are also certified for production SAP workloads, including SAP NetWeaver based applications and the in-memory SAP HANA database.

    With this expansion, R5b instances are now available in the following AWS Regions: US West (Oregon), Asia Pacific (Tokyo), US East (N. Virginia), US East (Ohio), Asia Pacific (Singapore), Europe (Frankfurt), Europe (Ireland), and Europe (London).

    To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the Amazon EC2 R5 instance pages.

    » AWS Elemental MediaLive now supports Nielsen Watermarking for audience measurement

    Posted On: Oct 28, 2021

    If you are working with Nielsen for audience measurement, you can now use AWS Elemental MediaLive to encode Nielsen's proprietary watermarks in your MediaLive channel.

    This feature enables you to encode Nielsen watermarks to the audio of live streams. For more information on how to enable Nielsen Watermarking, visit the AWS Elemental MediaLive documentation. Nielsen Watermarking in MediaLive is available at no additional cost.

    AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality live video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes.

    The MediaLive service functions independently or as part of AWS Elemental Media Services, a family of services that form the foundation of cloud-based workflows and offer you the capabilities you need to transport, create, package, monetize, and deliver video.

    Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaLive is available.

    » Amazon EC2 announces attribute-based instance type selection for Auto Scaling groups, EC2 Fleet, and Spot Fleet

    Posted On: Oct 27, 2021

    Starting today, you can request EC2 capacity based on your workload’s instance requirements. Attribute-based instance type selection, a new feature for Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet, makes it easy to create and maintain instance fleets without researching and selecting EC2 instance types. This is useful for running instance type flexible workloads and frameworks such as containers, big data, and CI/CD, or for simple cases where you want your instance fleets to automatically use the latest generation instance types. Instead of creating and maintaining a list of acceptable instance types, you can now simply define your instance requirements once, and let attribute-based instance type selection handle the rest.

    To get started, create or modify an Auto Scaling group or Fleet and specify your workload’s instance requirements. For most general purpose workloads it is enough to specify the number of vCPUs and memory that you need. For advanced use cases, you can specify attributes like storage type, network interfaces, CPU manufacturer, and accelerator type. Once you are done, EC2 Auto Scaling or Fleet will select and launch instances based on the attributes, purchase option, and allocation strategy you selected.
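
    As a sketch, the requirements described above map onto the EC2 InstanceRequirements structure. The field names below follow the public EC2 API, but the values are illustrative and the boto3 call is shown only as a comment, not executed:

```python
# Hypothetical attribute-based instance requirements, expressed as the
# InstanceRequirements structure from the EC2 API (values are examples only).
instance_requirements = {
    "VCpuCount": {"Min": 2, "Max": 8},     # vCPU range for the workload
    "MemoryMiB": {"Min": 4096},            # at least 4 GiB of memory
    "CpuManufacturers": ["intel", "amd"],  # advanced attribute (optional)
}

# With boto3, this block would be passed in the launch template overrides,
# for example (not executed here):
#   ec2.create_fleet(..., LaunchTemplateConfigs=[{
#       "Overrides": [{"InstanceRequirements": instance_requirements}]}])
```

    EC2 then selects and launches any instance types that satisfy these attributes, according to the purchase option and allocation strategy you chose.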

    Attribute-based instance type selection is especially helpful for running instance type flexible workloads on EC2 Spot Instances. The best way to use Spot instances is to request capacity across as many instance types as possible. Attribute-based instance type selection makes it easy to pick from the widest array of available instance types based on your instance requirements. It can also future-proof your instance fleets by automatically adding new generation EC2 instance types in your Auto Scaling groups or Fleets as they are released.

    For more information, see the Amazon EC2 Auto Scaling attribute-based instance type selection documentation and blog post, EC2 Fleet attribute-based instance type selection documentation, and Spot Fleet attribute-based instance type selection documentation.

    Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions that you define. You can use the fleet management features to maintain the health and availability of your fleet. EC2 Auto Scaling is available in all commercial and AWS GovCloud (US) Regions. For more information, visit the Amazon EC2 Auto Scaling documentation page.

    Amazon EC2 Fleet and Spot Fleet simplify the provisioning of EC2 capacity across different EC2 instance types, Availability Zones, and purchase models (On-Demand, Reserved Instances, Savings Plans, and Spot) to optimize your application’s scalability, performance, and cost. To learn more about using EC2 Fleet, please visit this page. To learn more about using Spot Fleet, please visit this page.

    » Introducing Amazon EC2 Spot placement score

    Posted On: Oct 27, 2021

    Today, we are introducing Amazon EC2 Spot placement score to help you find the optimal location for your Spot workloads. Spot Instance availability varies depending on the instance type, time of day, Region, and Availability Zone. Until now, there was no way to find an optimal Region or Availability Zone to fulfill your Spot capacity needs without first trying to launch Spot Instances there. Now, Spot placement score can recommend a Region or Availability Zone based on your Spot capacity requirements. Spot placement score is useful for instance type flexible workloads that can be launched in any Region or Availability Zone.

    To get started, go to the Spot placement score screen under the EC2 Spot Console. Simply specify the amount of Spot capacity you would like to request, what your instance type requirements are, and whether you would like a recommendation for a Region or a single Availability Zone. For instance type requirements, you can either provide a list of instance types, or you can just specify the attributes your instances must have, like the number of vCPUs and amount of memory. You will receive a score for each Region or Availability Zone on a scale from 1 to 10, based on factors such as the requested instance types, target capacity, historical and current Spot usage trends, and time of the request. The score reflects the likelihood of success when provisioning Spot capacity, with a 10 meaning that the request is highly likely to succeed. Please note that Spot placement score serves as a recommendation only and no score guarantees that your Spot request will be fully or partially fulfilled.

    You can use Spot placement score through EC2 Spot Console, AWS CLI, or SDK. To learn more about Spot placement score see Spot placement score documentation and blog post.
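
    As a minimal sketch, a Region-level score request could be assembled as below. The parameter names follow the public GetSpotPlacementScores API, but the values are hypothetical and no API call is made; the final line just shows how you might pick the best-scoring Region from a returned score list:

```python
# Hypothetical GetSpotPlacementScores request, built as a plain dict
# (not sent to the API in this sketch).
request = {
    "TargetCapacity": 100,
    "TargetCapacityUnitType": "vcpu",   # score the request per 100 vCPUs
    "SingleAvailabilityZone": False,    # recommend whole Regions, not AZs
    "InstanceRequirementsWithMetadata": {
        "ArchitectureTypes": ["x86_64"],
        "InstanceRequirements": {       # attributes instead of a type list
            "VCpuCount": {"Min": 2},
            "MemoryMiB": {"Min": 4096},
        },
    },
}

# Scores range from 1 to 10; a 10 means the request is highly likely to
# succeed. Example of choosing the best-scoring Region from a response:
best = max([{"Region": "us-east-1", "Score": 9},
            {"Region": "eu-west-1", "Score": 7}],
           key=lambda s: s["Score"])
```

    Remember that the score is a recommendation only; even a 10 does not guarantee that the Spot request will be fulfilled.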

    » The Amazon Chime SDK now supports push notifications

    Posted On: Oct 27, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, screen share, and messaging capabilities to their web or mobile applications. Starting today, the Amazon Chime SDK supports iOS and Android push notifications via Amazon Pinpoint for messages sent through Amazon Chime SDK messaging channels. With push notifications, a developer using Amazon Chime SDK messaging for chat can help ensure their users are notified about new messages even when they are not actively using their app. Users can switch applications or lock their mobile device and receive a notification when a new message or call comes in, allowing them to tap the notification and return to the original app to continue the conversation or join the call.

    To get started, developers create an Amazon Pinpoint project and associate it with an Amazon Chime SDK messaging AppInstance. Once connected, developers enable push notifications for a message by using the PushNotification attribute in the SendChannelMessage API and including the title and body for the notification. Developers can also use filter rules to allow users of their application to choose which notifications they receive. Users can choose to turn notifications on or off for all channels, for a single channel, or to receive notifications for messages where they are mentioned.
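
    A sketch of the SendChannelMessage request described above, assuming the documented PushNotification fields (Title, Body, Type); the ARN and token are placeholders and the call itself is not executed here:

```python
# Hypothetical SendChannelMessage parameters with push notification enabled.
# The channel ARN and client token are placeholders.
send_channel_message_params = {
    "ChannelArn": "arn:aws:chime:us-east-1:111122223333:app-instance/EXAMPLE/channel/EXAMPLE",
    "Content": "Hello from the support team",
    "Type": "STANDARD",
    "Persistence": "PERSISTENT",
    "ClientRequestToken": "unique-token-123",
    "PushNotification": {            # drives the Amazon Pinpoint push
        "Title": "New message",
        "Body": "You have a new chat message",
        "Type": "DEFAULT",           # standard (non-VoIP) notification
    },
}
```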

    To learn more about the Amazon Chime SDK and push notifications, review the following resources:

    * Amazon Chime SDK 
    * Amazon Chime SDK Developer Guide 
    * Amazon Chime SDK for JavaScript 

    » Announcing AWS SAM Accelerate - quickly test code changes against the cloud (public preview)

    Posted On: Oct 27, 2021

    The AWS Serverless Application Model (SAM) announces a public preview of AWS SAM Accelerate. The AWS SAM CLI is a developer tool that makes it easier to build, locally test, package, and deploy serverless applications. SAM Accelerate is a new capability of SAM CLI that makes it faster and easier for developers to test code changes made locally to their serverless applications against a cloud-based environment, reducing the time from local iteration to production-readiness.

    SAM Accelerate allows developers to bring their rapid iteration workflows to serverless application development, achieving the same levels of productivity they're used to when testing locally, while testing against a realistic application environment in the cloud. SAM Accelerate synchronizes infrastructure and code changes on a developer's local workspace with a cloud environment in near real time: code changes are updated in seconds in AWS Lambda, API definition changes in Amazon API Gateway, and state machine updates in AWS Step Functions, while infrastructure changes are deployed via infrastructure-as-code tooling such as AWS CloudFormation.

    To learn more about SAM Accelerate see the announcement blog post. To get started, you can install the SAM CLI by following the instructions in the documentation.

    » Amazon Textract launches TIFF support and adds asynchronous support for receipts and invoices processing

    Posted On: Oct 27, 2021

    Amazon Textract now supports Tag Image File Format (TIFF) documents in addition to the PNG, JPEG, and PDF formats. Customers can now process TIFF documents either synchronously or asynchronously using any of the following Amazon Textract APIs - DetectDocumentText, StartDocumentAnalysis, StartDocumentTextDetection, AnalyzeDocument, and AnalyzeExpense. Amazon Textract is a machine learning service that automatically extracts printed and handwritten text and data from any document.

    With this launch, Amazon Textract also adds support for processing PDF documents asynchronously using the AnalyzeExpense API, building on top of the synchronous support for PNG and JPEG image files that has been available since launch. Similar to the way customers submit PDF documents to the DetectDocumentText and AnalyzeDocument APIs, they can now submit receipts and invoices in PDF format to the AnalyzeExpense API.

    Log in to the Amazon Textract console to test out your TIFF documents. To learn more about Amazon Textract’s capabilities, please visit the Amazon Textract website, developer guide, or resource page.

    » AWS Systems Manager Maintenance Windows now supports defining custom cutoff behavior for tasks

    Posted On: Oct 26, 2021

    You can now define a cutoff behavior for your maintenance tasks using AWS Systems Manager Maintenance Windows, which allows you to stop or continue ongoing tasks when the cutoff time is reached. This provides DevOps and IT engineers with more control on the cutoff behavior to ensure disruptive tasks are not run outside the desired period. For instance, while registering an Automation task with a maintenance window, you can now set up the cutoff behavior to cancel ongoing tasks. This would ensure that no new task invocations are started when the cutoff time is reached.

    To get started, open the AWS Systems Manager console and, in the navigation pane, choose Maintenance Windows. Then, enable new task invocation cutoff while you are defining a new maintenance window task.
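
    As an illustrative sketch, the same cutoff behavior can be set when registering a task programmatically via RegisterTaskWithMaintenanceWindow. The parameter names follow the Systems Manager API, but the window ID and Automation document are placeholders and the call is not executed:

```python
# Hypothetical RegisterTaskWithMaintenanceWindow parameters; the window ID
# and document name are placeholders, and no API call is made here.
register_task_params = {
    "WindowId": "mw-0123456789abcdef0",
    "TaskArn": "AWS-RestartEC2Instance",  # example Automation document
    "TaskType": "AUTOMATION",
    "Priority": 1,
    "CutoffBehavior": "CANCEL_TASK",      # stop starting new invocations at cutoff
}
```

    Setting CutoffBehavior to CANCEL_TASK ensures no new task invocations begin once the maintenance window's cutoff time is reached; CONTINUE_TASK preserves the previous behavior.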

    This feature is available in all AWS Regions where Systems Manager is supported, excluding AWS China (Beijing) Region, operated by Sinnet, and AWS China (Ningxia) Region, operated by NWCD. Customers can use this feature at no additional charge. For more details about Maintenance Windows, visit the AWS Systems Manager product page and documentation.

    » Announcing the General Availability of AWS Local Zones in Las Vegas, New York City, and Portland

    Posted On: Oct 26, 2021

    Today we are announcing the general availability of AWS Local Zones in Las Vegas, New York City (located in New Jersey) and Portland. Customers can now use these new Local Zones to deliver applications that require single-digit millisecond latency to end-users or for on-premises installations in these three metro areas.

    AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists today. You can use AWS Local Zones to run applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, AR/VR, and machine learning inference at the edge.

    With this launch, AWS Local Zones are now generally available in 13 metro areas - Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City (located in New Jersey), Philadelphia, and Portland. With an additional three Local Zones launching later in 2021 in Atlanta, Phoenix, and Seattle, customers will be able to deliver ultra-low latency applications to end-users in cities across the US.

    You can enable AWS Local Zones from the “Settings” section of the EC2 Console or ModifyAvailabilityZoneGroup API. To learn more, please visit the AWS Local Zones website.

    » Amazon SageMaker Autopilot adds support for time series data

    Posted On: Oct 26, 2021

    Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Starting today, SageMaker Autopilot supports time series data. You can now use SageMaker Autopilot to build machine learning models for regression and classification problems for time series data or any sequence data, enabling scenarios such as supervised anomaly detection, risk assessment, or fault prediction based on a sequence of data points. For example, you can now build models to identify and classify anomalous network traffic recorded over time, or to identify faulty devices based on emitted metrics.

    You can get started with automatically building machine learning models with time series data by simply including the time series data in your input tabular dataset for SageMaker Autopilot. SageMaker Autopilot will automatically parse this data, extract meaningful features, and test multiple ML algorithms to process it. Support for time series data is available in all AWS regions where SageMaker Autopilot is currently supported. For more details, please review the documentation. To get started with SageMaker Autopilot, see the product page or access SageMaker Autopilot within SageMaker Studio.

    » Disable default reverse DNS rules with Route 53 Resolver

    Posted On: Oct 26, 2021

    Amazon Route 53 Resolver is the recursive DNS service that runs by default in your Virtual Private Clouds (VPCs). Paired with Route 53 Resolver Endpoints and Resolver Rules, you can create seamless DNS query resolution across your entire hybrid cloud, with precise control over the resolution of DNS namespaces between your on-premises data center and Amazon Virtual Private Cloud (Amazon VPC).

    Route 53 Resolver automatically creates rules for reverse DNS lookup for all VPCs where you set "enableDnsHostnames" to "true." Previously, customers could not disable these rules. While these default rules are useful for many customers, some customers with hybrid cloud architectures need to forward all reverse DNS queries to their on-premises name servers, for example to enable on-premises Active Directory services to perform user authentication.

    With today’s release, customers can disable the creation of these default reverse rules and instead forward queries for reverse DNS namespaces to external servers as desired.
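
    As a sketch, the new setting can be applied through the Route 53 Resolver UpdateResolverConfig API. The parameter names follow the public API, but the VPC ID is a placeholder and the call is not executed here:

```python
# Hypothetical UpdateResolverConfig request that disables the autodefined
# reverse DNS rules for one VPC; the VPC ID is a placeholder.
update_resolver_config_params = {
    "ResourceId": "vpc-0123456789abcdef0",
    "AutodefinedReverseFlag": "DISABLE",  # forward reverse queries via your own rules
}
```

    With the autodefined rules disabled, you can then create Resolver rules that forward reverse DNS namespaces (for example, in-addr.arpa zones) to your on-premises name servers.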

    Please visit our product page to learn more about Amazon Route 53.

    » Announcing Amazon RDS Custom for Oracle

    Posted On: Oct 26, 2021

    Amazon Relational Database Service (Amazon RDS) Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is now available for the Oracle database engine. Amazon RDS Custom for Oracle automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install patches, and enable native features to meet the dependent application's requirements.

    With Amazon RDS Custom for Oracle, customers can customize their database server host and operating system and apply special patches or change database software settings to support third-party applications that require privileged access. Through the time-saving benefits of a managed service, Amazon RDS Custom for Oracle frees valuable resources to focus on more important, business-impacting, strategic activities. By automating backups and other operational tasks, customers can rest easy knowing their data is safe and ready to be recovered if needed. And finally, Amazon RDS Custom's cloud-based scalability will help our customers' database infrastructures keep pace as their businesses grow.

    Get started using the AWS CLI or AWS Management Console today! Amazon RDS Custom for Oracle is generally available in the following regions: US East (N. Virginia), US West (Oregon), US East (Ohio), EU (Ireland), EU (Frankfurt), EU (Stockholm), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore).

    To learn more about Amazon RDS Custom:

  • Read the AWS News blog post
  • Visit the Amazon RDS Custom website
  • See Amazon RDS Custom pricing page for full pricing details and regional availability
  • See Amazon RDS Custom User Guide
    » Announcing Amazon EC2 DL1 instances for cost efficient training of deep learning models

    Posted On: Oct 26, 2021

    Today we are announcing the general availability of Amazon EC2 DL1 instances powered by Gaudi accelerators from Habana Labs, an Intel company. EC2 DL1 instances deliver up to 40% better price performance than current generation GPU-based EC2 instances for training deep learning models and are optimized for workloads such as image classification, object detection and natural language processing.

    These instances feature 8 Gaudi accelerators with 32 GB of high bandwidth memory (HBM), 768 GiB of system memory, custom 2nd generation Intel Xeon Scalable processors, 400 Gbps of networking throughput, and 4 TB of local NVMe storage. Customers can quickly and easily get started with DL1 instances using Habana SynapseAI SDK, which is integrated with leading machine learning frameworks such as TensorFlow and PyTorch. Customers can seamlessly migrate their existing machine learning models currently running on GPU-based or CPU-based instances onto DL1 instances, with minimal code changes.

    Customers can launch DL1 instances using AWS Deep Learning AMIs or using Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized applications.

    DL1 instances are now available in the US East (N. Virginia) and US West (Oregon) regions. Customers can purchase DL1 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of a Savings Plan.

    To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the DL1 instance page.

    » Amazon QuickSight launches SPICE Incremental Refresh

    Posted On: Oct 26, 2021

    Amazon QuickSight announced the availability of Incremental Refresh, a feature in Amazon QuickSight that supports incrementally loading new data to SPICE data sets without needing to refresh the full set of data. SPICE is QuickSight's Super-fast, Parallel, In-memory Calculation Engine.

    Previously, QuickSight customers could only perform a full refresh of SPICE data sets, which can take hours to reload all the data even if only a small portion has changed. With incremental refresh, QuickSight customers can update their SPICE data sets in a fraction of the time a full refresh would take, enabling users to access the most recent insights much sooner. Incremental refresh can be scheduled to run as often as every 15 minutes on a data set, so it can serve up-to-date insights.

    Incremental refresh works for SQL data sources connected to databases. QuickSight customers can select a timestamp column from the database that correlates to an event, for example, creation time, update time, or published date, and specify a look-back window, for example, 30 days or 3 hours. QuickSight refreshes data changed within that look-back window and can complete the refresh in minutes instead of hours if the data size in the look-back window is much smaller than the entire data set. By refreshing more frequently with a small amount of data, readers get access to fresh data for large data sets in minutes instead of hours.
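
    The look-back window idea can be sketched in a few lines (this is an illustration of the concept, not a QuickSight API call): only rows whose timestamp falls inside the window are re-ingested.

```python
from datetime import datetime, timedelta

def rows_in_lookback(rows, now, lookback):
    """Return rows whose 'updated_at' falls within the look-back window."""
    window_start = now - lookback
    return [r for r in rows if r["updated_at"] >= window_start]

now = datetime(2021, 10, 26, 12, 0)
rows = [
    {"id": 1, "updated_at": datetime(2021, 9, 1)},    # outside a 30-day window
    {"id": 2, "updated_at": datetime(2021, 10, 20)},  # inside the window
]
fresh = rows_in_lookback(rows, now, timedelta(days=30))  # -> only row 2
```

    When the rows inside the window are a small fraction of the data set, refreshing just that slice is what turns an hours-long full reload into a refresh measured in minutes.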

    Incremental Refresh is available in Amazon QuickSight Enterprise Edition in all QuickSight regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and AWS GovCloud (US). If your dataset uses custom SQL, Incremental Refresh may not benefit you. For further details, visit here.

    » Amazon Pinpoint now supports 10 Digit Long Code (10DLC) vetting

    Posted On: Oct 25, 2021

    Amazon Pinpoint now includes the ability to perform an extended review of your company’s Ten-Digit Long Code (10DLC) registration details. This extended review process is called “vetting.” By vetting your company’s 10DLC registration, you can gain higher throughput rates for the messages that you send using 10DLC numbers.

    10DLC is a type of phone number that enables you to send Application-to-Person (A2P) SMS messages to recipients in the United States. 10DLC offers SMS senders high message delivery rates at an affordable price. To use 10DLC, the mobile carriers require you to register your company and use case (referred to as your “10DLC campaign”). Amazon Pinpoint added the ability to complete these registration processes in February 2021.

    When you first complete the 10DLC registration process for your company, your registration data is sent to The Campaign Registry, the industry-wide authority that manages 10DLC registrations. The Campaign Registry analyzes the information that you provided. If the information is accurate, The Campaign Registry verifies your registration and provides a throughput limit for your campaigns. If you require higher throughput rates than you were provided, you can request extended vetting of your registration.

    For more information about 10DLC vetting, see Vetting your 10DLC registration in the Amazon Pinpoint User Guide.

    » Amazon DocumentDB (with MongoDB compatibility) adds support for Access Control with User-Defined Roles

    Posted On: Oct 25, 2021

    Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data at scale.

    Today, Amazon DocumentDB added support for access control with user-defined roles. With user-defined roles, you can grant users one or more custom roles that determine which operations they are authorized to perform. This release improves on DocumentDB’s RBAC support, which was previously limited to built-in roles. For some use cases, the built-in roles are not sufficient and you may need the ability to customize authorization across specific actions and resources. For example, you may wish to grant a user read-only access to one collection and read-write access to another. User-defined roles give you the flexibility to customize RBAC roles based on your organization’s requirements.

    To add a role, you can use the db.createRole method. For more information on how to get started, see our documentation or check out our blog post. The ability to create user-defined roles is now available in all regions where Amazon DocumentDB is available. If you are new to Amazon DocumentDB, the getting started guide will show you how to quickly provision an Amazon DocumentDB cluster and explore the flexibility of the document model. Have questions or feature requests? Email us at: documentdb-feature-request@amazon.com.
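
    As a sketch of the read-only/read-write example above, here is a createRole database command document (the same fields the db.createRole shell helper takes); the database and collection names are hypothetical and nothing is sent to a cluster here:

```python
# Hypothetical createRole command document: read-write on one collection,
# read-only on another. Database/collection names are placeholders.
create_role = {
    "createRole": "readWriteOrders",
    "privileges": [
        {"resource": {"db": "sales", "collection": "orders"},
         "actions": ["find", "insert", "update", "remove"]},  # read-write
        {"resource": {"db": "sales", "collection": "catalog"},
         "actions": ["find"]},                                # read-only
    ],
    "roles": [],  # no inherited built-in roles
}
```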

    » Amazon DocumentDB (with MongoDB compatibility) adds support for $literal, $map, and $$ROOT

    Posted On: Oct 25, 2021

    Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.

    Amazon DocumentDB continues to increase compatibility with MongoDB and today added support for the following MongoDB APIs and indexing improvements in DocumentDB 3.6 and 4.0:

  • $literal: Aggregation operator that accepts any valid expression and returns it as a value without parsing
  • $map: Aggregation operator that applies an expression to each item in an array and returns an array with the applied results
  • $$ROOT: System variable that references the top-level document being processed in the aggregation pipeline stage

    These capabilities are supported in all regions where Amazon DocumentDB is available. All supported MongoDB APIs and aggregation pipeline capabilities for Amazon DocumentDB can be found here. If you are new to Amazon DocumentDB, the getting started guide will show you how to quickly provision an Amazon DocumentDB cluster and explore the flexibility of the document model. Have questions or feature requests? Email us at: documentdb-feature-request@amazon.com.
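
    As an illustration, the three additions could appear together in a single $project stage. The field names ("scores", "source", "doubled", "original") are hypothetical:

```python
# Hypothetical aggregation pipeline exercising $literal, $map, and $$ROOT.
pipeline = [
    {"$project": {
        "source": {"$literal": "$reporting"},  # kept verbatim, not parsed as a field path
        "doubled": {"$map": {                  # apply an expression per array item
            "input": "$scores",
            "as": "s",
            "in": {"$multiply": ["$$s", 2]},
        }},
        "original": "$$ROOT",                  # embed the full input document
    }},
]
```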

    » Amazon DocumentDB (with MongoDB compatibility) adds support for storing, querying and indexing Geospatial data

    Posted On: Oct 25, 2021

    Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.

    Today, Amazon DocumentDB added support for storing, querying, and indexing geospatial data. With geospatial querying capabilities, you get the following benefits:

  • Creating 2dsphere Indexes - You can now create 2dsphere indexes on Points. A 2dsphere index supports queries that calculate geometries on a sphere.
  • Proximity querying - You can now use MongoDB APIs such as $nearSphere, $geoNear, $minDistance, and $maxDistance to perform proximity queries on data stored in DocumentDB. For example, you can use $geoNear to find all airports within 50 miles of a given city.

    For more information on how to get started with Geospatial querying, see our documentation or check out our blog post. Geospatial capabilities are supported in all regions where Amazon DocumentDB is available. If you are new to Amazon DocumentDB, the getting started guide will show you how to quickly provision an Amazon DocumentDB cluster and explore the flexibility of the document model. Have questions or feature requests? Email us at: documentdb-feature-request@amazon.com.
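
    A sketch of the airports-within-50-miles example as a $geoNear stage; the coordinates and field names are illustrative, and distances are in metres (50 miles is roughly 80,467 m):

```python
# Hypothetical $geoNear stage: airports within ~50 miles of a point,
# relying on a 2dsphere index on the location field.
geo_near = {
    "$geoNear": {
        "near": {"type": "Point", "coordinates": [-115.1398, 36.1699]},  # [lng, lat]
        "distanceField": "dist.calculated",  # where to write each distance
        "maxDistance": 80467,                # metres (~50 miles)
        "spherical": True,                   # compute on a sphere (2dsphere)
    }
}
```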

    » Amazon CloudFront adds support for client IP address and connection port header

    Posted On: Oct 25, 2021

    Amazon CloudFront now provides a CloudFront-Viewer-Address header that includes IP address and connection port information for requesting clients. The connection port field indicates the TCP source port used by the requesting client. Previously, IP address and client connection port information were available only in CloudFront access logs, making it harder to resolve issues or perform real-time decision-making based on this data. Now you can configure your CloudFront origin request policies to forward the CloudFront-Viewer-Address header to your origin servers. The header can also be used in CloudFront Functions when included in an origin request policy. The CloudFront-Viewer-Address header uses the following syntax: CloudFront-Viewer-Address: 127.0.0.1:4430

    The CloudFront-Viewer-Address header is provided at no additional cost. You can use the header, along with other CloudFront headers, for analyzing, auditing, and logging purposes. For more information about how to use the CloudFront-Viewer-Address header, see the CloudFront Developer Guide. Learn more about cache and origin request policies from our blog. To learn more about Amazon CloudFront, visit CloudFront product page.
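
    One practical detail when consuming the header at your origin: split on the last colon rather than the first, so IPv6 addresses (which themselves contain colons) survive intact. A minimal parsing sketch:

```python
def split_viewer_address(value):
    """Split a CloudFront-Viewer-Address value 'ip:port' into (ip, port).

    rpartition splits on the LAST colon, so IPv6 addresses stay intact.
    """
    ip, _, port = value.rpartition(":")
    return ip, int(port)

ip, port = split_viewer_address("127.0.0.1:4430")  # -> ("127.0.0.1", 4430)
```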

    » Amazon RDS for MySQL supports new minor version 8.0.26, includes Global Transaction Identifiers (GTIDs) and Delayed Replication

    Posted On: Oct 25, 2021

    Following the release of updates in MySQL version 8.0, we have updated Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon RDS for MySQL on Outposts to support MySQL minor version 8.0.26. We recommend that you upgrade to the latest minor version to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MySQL community.

    Along with this minor version, we’ve added support for Global Transaction Identifiers (GTIDs) and Delayed Replication to MySQL version 8.0. These two features were previously unavailable in Amazon RDS for MySQL version 8.0, and are now available for MySQL version 8.0.26 and higher.

    Global transaction identifiers (GTIDs) are unique identifiers generated for committed MySQL transactions. You can use GTID-based replication to make binlog replication simpler and easier to troubleshoot when using Amazon RDS for MySQL read replicas or with an external MySQL database. For more details see Using GTID-based Replication for MySQL in the Amazon RDS User Guide.

    Delayed Replication allows you to configure a read replica to lag a fixed time interval behind the source database. You can introduce this intentional delay to help with disaster recovery, such as recovering from a human error that accidentally dropped a database table. For more details on delayed replication see Working with MySQL read replicas in the Amazon RDS user guide.

    Learn more about upgrading your database instances in the Amazon RDS User Guide; and create or update a fully managed Amazon RDS database using the latest available minor versions in the Amazon RDS Management Console.

    » Introducing AWS Migration Hub Strategy Recommendations

    Posted On: Oct 25, 2021

    AWS Migration Hub now helps you easily build a migration and modernization strategy for your applications running on-premises or in AWS. The new Strategy Recommendations feature is the ideal starting point for your transformation journey, delivering prescriptive guidance on the optimal strategy and tools to help you migrate and modernize at scale. Strategy Recommendations automates the manual process of analyzing each running application, its process dependencies, and technical complexity to reduce the time and effort spent on planning application migration and modernization, and to accelerate your business transformation on AWS.

    AWS Migration Hub Strategy Recommendations accelerates migration and modernization with:

  • Single Source for Migration and Modernization:
    Strategy Recommendations leverages a deep understanding of the many specialized AWS and partner migration and modernization tools to recommend not just a strategy but a path forward to transform your applications.
  • Detailed Automated Analysis:
    Strategy Recommendations automates the manual process of analyzing each running application, its process dependencies, and technical complexity to deliver a detailed analysis of viable migration and modernization options, enabling you to make more informed decisions.
  • Accelerated Planning and Action:
    Strategy Recommendations cuts the time needed to analyze your entire application portfolio and offers portfolio-wide strategy recommendations, helping you understand which applications in your portfolio you should rehost, replatform, or refactor, and the effort involved, so you can quickly meet your transformation goals.

    AWS Migration Hub Strategy Recommendations is available in all AWS Regions where AWS Migration Hub is available.

    To learn more, please read the blog, the documentation, and start planning your migration and modernization in AWS Migration Hub today.

    » Amazon DocumentDB (with MongoDB compatibility) now provides a JDBC driver to connect from BI tools and execute SQL Queries

    Posted On: Oct 25, 2021

    Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.

    Today, Amazon DocumentDB announced a JDBC driver that enables connectivity from BI tools such as Tableau, MicroStrategy, and QlikView. Customers can also use the JDBC driver to run SQL queries against their Amazon DocumentDB cluster from tools such as SQLWorkbench. Customers can download the driver here. The DocumentDB JDBC driver is also open source and available to the user community under the Apache 2.0 license on GitHub. Customers can use the GitHub repository to gain enhanced visibility into the driver implementation and contribute to its development.
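
    As an illustration, a JDBC connection URL for the driver can be assembled as below. The "jdbc:documentdb://" scheme and the tls query parameter are assumptions based on typical JDBC drivers; consult the driver's GitHub README for the authoritative format and options.

```python
# Hypothetical helper that assembles a DocumentDB JDBC connection URL.
def documentdb_jdbc_url(host: str, port: int, database: str, tls: bool = True) -> str:
    tls_param = "true" if tls else "false"
    return f"jdbc:documentdb://{host}:{port}/{database}?tls={tls_param}"

url = documentdb_jdbc_url(
    "docdb-cluster.cluster-example.us-east-1.docdb.amazonaws.com", 27017, "sales")
```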

    To learn more about how to use the driver, please refer to Using the DocumentDB JDBC driver. If you are new to Amazon DocumentDB, the getting started guide will show you how to quickly provision an Amazon DocumentDB cluster and explore the flexibility of the document model. Have questions or feature requests? Email us at: documentdb-feature-request@amazon.com.

    » AWS Load Balancer Controller version 2.3 now available with support for ALB IPv6 targets

    Posted On: Oct 22, 2021

    The AWS Load Balancer Controller provides a Kubernetes native way to configure and manage Elastic Load Balancers that route traffic to applications running in Kubernetes clusters. Elastic Load Balancing offers multiple load balancers that all feature the high availability, automatic scaling, and robust security necessary to help make your applications fault tolerant.

    Version 2.3 of the AWS Load Balancer Controller for Kubernetes is now available. This update adds multiple enhancements that make it easier to route traffic to your applications using Elastic Load Balancers, including:

  • IPv6 target group support for Application Load Balancers.
  • Security group rule management optimizations that reduce the number of overall security group rules needed when using multiple load balancers per cluster. 
  • Increased performance in large clusters through support for Kubernetes Endpoint Slices.
  • Support for specifying Network Load Balancer attributes including deletion protection.
  • Support for specifying Application Load Balancer attributes through IngressClassParams.
  • Subnet auto discovery based on available IP addresses.
    More information about controller configuration parameters and defaults can be found here and here. To get started using the AWS Load Balancer Controller, see the installation guide and walkthrough on GitHub. To learn more about Amazon EKS, see the product page or documentation.
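
    As a sketch, cluster-wide Application Load Balancer settings such as the scheme or IP address type can be set through an IngressClassParams resource, expressed here as a Python dict for clarity. The apiVersion and field names follow the controller's documentation, but verify them against the v2.3 release you deploy.

```python
# IngressClassParams manifest (as a dict) that an IngressClass can
# reference; "dualstack" enables IPv6 alongside IPv4 on the load balancer.
ingress_class_params = {
    "apiVersion": "elbv2.k8s.aws/v1beta1",
    "kind": "IngressClassParams",
    "metadata": {"name": "alb-dualstack"},
    "spec": {
        "scheme": "internet-facing",
        "ipAddressType": "dualstack",
    },
}
```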

    Important
    Amazon EKS does not support IPv6. You can follow progress on this feature by subscribing to the issue for EKS IPv6 support on the containers roadmap. 

    » AWS Audit Manager custom framework sharing is now generally available

    Posted On: Oct 22, 2021

    AWS Audit Manager now offers custom framework sharing, so you have a secure and easy way to share custom frameworks across AWS accounts and Regions. This enables instant access to your custom frameworks across multiple AWS accounts, without the need to manually copy or move the underlying custom controls. Custom framework sharing provides quick access to the shared framework so that your users always see the most up-to-date and consistent information as provided by you. You can use the custom framework sharing feature on your AWS Audit Manager account at no additional cost.

    If you are an Audit Manager consulting partner, the custom framework sharing feature allows rapid scaling of your audit practice as you work closely with your customers to build custom controls and frameworks that meet customer’s specific needs. With this feature, you can avoid the complexity and delays that are often associated with version management and manual data transfer. If you are a business user, custom framework sharing enables seamless collaboration across diverse business groups in an organization. This can make handling cross-region audits and providing guidance to audit consultants easier.

    You can accept shared custom frameworks and start assessments from them as-is, or customize the frameworks further. Custom framework sharing is available in all AWS Regions where AWS Audit Manager is available, specifically: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London).

    Learn more about the AWS Audit Manager custom framework sharing capability in the Audit Manager feature page and refer to our documentation. Get started today by visiting the AWS Audit Manager Console, AWS Command Line Interface, or APIs.   

    » AWS Fault Injection Simulator now injects Spot Instance Interruptions

    Posted On: Oct 22, 2021

    You can now inject Amazon EC2 Spot Instance interruptions into your Spot Instance workloads using AWS Fault Injection Simulator (FIS). Spot Instances enable you to run compute workloads on Amazon EC2 at steep discounts in exchange for returning the Spot Instances when Amazon EC2 needs the capacity back. Because it is always possible that your Spot Instance may be interrupted, you should ensure that your application is prepared for a Spot Instance interruption. However, until now it has been difficult to recreate the circumstances of a Spot Instance interruption in order to evaluate and improve how your application responds.

    Now, using AWS FIS, you can simulate what happens when Amazon EC2 reclaims Spot Instances by simply running an AWS FIS experiment, allowing you to observe how your applications respond so that you can improve their performance and resiliency. The Spot Instance interruptions that are injected by your AWS FIS experiments behave in the same way as they do when reclaimed by Amazon EC2, including instance termination notifications, rebalance notifications, and the interruption behaviors you have specified, so that you can accurately reproduce real-world conditions. You can also easily configure safeguards such as alarms, stop conditions, and rollback steps to help build confidence in your experiments even when running them in production.
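
    For illustration, the action portion of an AWS FIS experiment template that sends Spot Instance interruptions might look like the sketch below. The action name and parameter follow the AWS FIS documentation; the target name is a placeholder for a target defined elsewhere in the template.

```python
# One action from an AWS FIS experiment template (as a dict). The
# durationBeforeInterruption parameter is the time between the rebalance
# recommendation and the two-minute interruption notice.
spot_interruption_action = {
    "actionId": "aws:ec2:send-spot-instance-interruptions",
    "parameters": {"durationBeforeInterruption": "PT2M"},
    "targets": {"SpotInstances": "my-spot-instances"},  # placeholder target
}
```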

    To learn more about using AWS FIS experiments to inject Spot Instance interruptions, visit Amazon EC2 actions in the AWS FIS user guide. In addition to Spot Instance interruptions, you can use AWS FIS to simulate other faults such as Amazon EC2 API errors as well as disruptions to On-Demand or Reserved Instances, RDS database instances, ECS containers, and EKS node groups. AWS FIS and Amazon EC2 Spot Instances are available in all public AWS Regions.

    » Amazon Connect launches AWS CloudFormation support for users, user hierarchy groups, and hours of operation

    Posted On: Oct 22, 2021

    Amazon Connect now supports AWS CloudFormation for three new resources: users, user hierarchy groups, and hours of operation. You can now use AWS CloudFormation templates to deploy these Amazon Connect resources, along with the rest of your AWS infrastructure, in a secure, efficient, and repeatable way. Additionally, you can use these templates to maintain consistency across Amazon Connect instances. For more information, see Amazon Connect Resource Type Reference in the AWS CloudFormation User Guide.
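
    A minimal template fragment using one of the new resource types, shown here as a Python dict, might look like the following. Property names are illustrative; see the Amazon Connect Resource Type Reference for the required fields.

```python
# CloudFormation template fragment declaring hours of operation.
connect_template = {
    "Resources": {
        "SupportHours": {
            "Type": "AWS::Connect::HoursOfOperation",
            "Properties": {
                "InstanceArn": "arn:aws:connect:us-east-1:111122223333:instance/EXAMPLE",
                "Name": "Weekday support",
                "TimeZone": "US/Eastern",
                "Config": [{
                    "Day": "MONDAY",
                    "StartTime": {"Hours": 9, "Minutes": 0},
                    "EndTime": {"Hours": 17, "Minutes": 0},
                }],
            },
        }
    }
}
```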

    AWS CloudFormation support for Amazon Connect resources is available in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

    » New AWS Solutions Implementation: Web Client for AWS Transfer Family

    Posted On: Oct 22, 2021

    Web Client for AWS Transfer Family provides an intuitive web browser interface for AWS Transfer for Secure Shell File Transfer Protocol (SFTP). It lets you adopt AWS Transfer Family while providing a simple web portal into your corporate SFTP environments for your users.

    Non-technical users find it inconvenient to use thick client applications, such as FileZilla, to transfer files. It is also complicated to install and support different clients across end-user devices and operating systems. By adopting this browser-based solution, you can avoid the effort of managing a commercial client and of troubleshooting different end-user devices and operating systems. Your customers can access your files without installing any software or touching your system's back end.

    This solutions implementation supports common file operations, such as upload, download, rename and delete. Currently, the solution only supports the AWS Transfer Family SFTP-enabled server service – AWS Transfer for SFTP.

    To learn more and get started, please visit the solutions implementation web page.

    Additional AWS Solutions Implementations are available on the AWS Solutions Implementations webpage, where you can browse technical reference implementations that are vetted by AWS architects, offering detailed architecture and instructions for deployment to help build faster to solve common problems.

    » Amazon Connect launches API to configure hours of operation programmatically

    Posted On: Oct 22, 2021

    Amazon Connect now provides an API to programmatically create and manage hours of operation. Using this API, you can programmatically configure hours of operation, which can be used in contact flows to decide which queue to route contacts to. Additionally, you can now delete hours of operation that are no longer required using the delete API. To learn more, see the API documentation.
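
    As a sketch, the request to the create API takes the instance, a name, a time zone, and a per-day schedule. The field shapes below are based on the Amazon Connect API reference; verify them there before use.

```python
# Request parameters for CreateHoursOfOperation (e.g., via boto3's
# connect.create_hours_of_operation(**create_params)).
create_params = {
    "InstanceId": "EXAMPLE-instance-id",
    "Name": "Holiday hours",
    "TimeZone": "UTC",
    "Config": [
        {"Day": "FRIDAY",
         "StartTime": {"Hours": 10, "Minutes": 0},
         "EndTime": {"Hours": 16, "Minutes": 0}},
    ],
}
```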

    Hours of operation API is available in all AWS regions where Amazon Connect is offered. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

    » AWS Amplify for JavaScript now supports resumable file uploads for Storage

    Posted On: Oct 21, 2021

    AWS Amplify for JavaScript now supports pause, resume, and cancel actions on file uploads to Amazon Simple Storage Service (Amazon S3) via the Amplify Storage category. Amplify provides a set of use-case-oriented UI components, libraries, and command-line tools to make it easy for frontend web and mobile developers to build AWS cloud backends for their apps. With this release, developers can create experiences where end users can reliably upload very large files, including raw video and large productivity documents. Being able to resume uploads is particularly useful for handling scenarios where a user experiences a network interruption during an upload.

    The Amplify JS library will now automatically segment large files into 5 MB chunks and upload them using the Amazon S3 multipart upload process. This method allows chunks to be uploaded in any order, and individual chunks can be re-transmitted if their upload fails or times out. Developers can provide callback logic to control how and when re-transmits should be attempted.
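
    The chunking arithmetic described above can be illustrated with a standalone sketch (Amplify performs this internally; nothing here is the Amplify API):

```python
CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB, the part size described above

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list:
    """Split a payload into fixed-size parts for a multipart upload."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# A 12 MB payload yields two 5 MB parts and one 2 MB part; each part can
# be retried independently if its upload fails or times out.
parts = split_into_chunks(b"x" * (12 * 1024 * 1024))
```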

    We also improved TypeScript type coverage for all of the Storage category functionality within Amplify. Previously, developers would not see auto-suggest options for the parameters that control interaction with Amazon S3 buckets, like file uploads and downloads. Now, web developers who use TypeScript with a modern code editor will see suggestions, as well as detailed warnings when they attempt to use invalid values for function parameters.

    Developers can get started with Resumable Uploads today by adding the Storage category to their Amplify projects.

    » Amazon RDS Proxy now supports Amazon RDS for MySQL Version 8.0

    Posted On: Oct 21, 2021

    Amazon RDS Proxy now supports Amazon RDS for MySQL major version 8.0. MySQL 8.0 is the latest Community Edition major version, and offers better performance, reliability, security, and manageability. To learn more about Amazon RDS for MySQL, please visit our details page or view our documentation.

    Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Aurora and RDS databases. RDS Proxy helps improve application scalability, resiliency, and security. To learn more, visit the RDS Proxy details page, see our tutorial, or view our documentation.

    » AWS Fault Injection Simulator now supports Spot Interruptions

    Posted On: Oct 21, 2021

    Starting today, you can trigger the interruption of an Amazon EC2 Spot Instance using AWS Fault Injection Simulator (FIS). Spot Instances use spare EC2 capacity that is available at up to a 90% discount compared to the On-Demand price. In exchange for the discount, Spot Instances can be interrupted by Amazon EC2 when it needs the capacity back. When using Spot Instances, you need to be prepared to be interrupted. With FIS, you can test the resiliency of your workload and validate that your application reacts to the interruption notices that EC2 sends before terminating your instances. You can target individual Spot Instances or a subset of instances in clusters managed by services that tag your instances, such as Auto Scaling groups, EC2 Fleet, and EMR.

    With a few clicks in the console, you can set up an experiment that triggers interruptions on your Spot Instances. Additionally, you can leverage FIS experiments to run more complex scenarios with additional distributed system failures happening in parallel or building sequentially over time, enabling you to create the real world conditions necessary to find hidden weaknesses in your workloads.

    FIS is available in all of the commercial AWS Regions today except Asia Pacific (Osaka) and the two China regions. 

    To learn more about Spot Instances, visit the Amazon EC2 Spot Instances page.

    » Amazon Chime SDK now supports video background blur

    Posted On: Oct 21, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, and screen share to their web applications. Developers can now use video background blur to obfuscate their users’ surroundings, which can help increase visual privacy.

    Video background blur runs locally in each user's browser, transforming video before it is shared into the meeting. Users can confirm their background is blurred prior to joining a meeting through video preview. The blur strength is adjustable, from a low-strength bokeh effect to a high-strength obfuscation.

    The segmentation model used to separate the users from their background uses WebAssembly (WASM) and single instruction multiple data (SIMD) for efficient processing on most modern computers and browsers. The segmentation model can be substituted with other compatible models optimized for specific use cases or performance goals.

    Video background blur is a turnkey feature built on top of video processing APIs in the Amazon Chime SDK for JavaScript and Amazon Chime SDK React component library. To learn more about the Amazon Chime SDK, video background blur, or other ways to process video, review the following resources:

  • Amazon Chime SDK
  • Amazon Chime SDK Developer Guide
  • Amazon Chime SDK for JavaScript - Using background blur

    » Amazon Transcribe now supports custom language models for streaming transcription

    Posted On: Oct 20, 2021

    We are pleased to announce that Amazon Transcribe now supports custom language models (CLM) for streaming transcription. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. CLM allows you to leverage pre-existing data to build a custom speech engine tailored for your transcription use case. No prior machine learning experience is required.

    Live streaming transcription is used across industries in contact center applications, broadcast events, and e-learning. CLM enables you to improve transcription accuracy by leveraging text data, such as website content or instruction manuals, that covers your industry’s unique lexicon and vocabulary. To get started, upload your training data set to train your CLM, then run transcription jobs using your new CLM.

    CLM for streaming transcriptions is available in US English in the AWS Regions where Amazon Transcribe streaming is supported, including US East (N. Virginia), US East (Ohio), US West (Oregon), South America (São Paulo), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), and EU (London). For more details about the CLM feature, read our blog post “Building custom language models to supercharge speech-to-text performance for Amazon Transcribe” or visit the Amazon Transcribe documentation page.

    » Amazon RDS for MySQL on Outposts supports new minor versions

    Posted On: Oct 20, 2021

    We have updated Amazon Relational Database Service (Amazon RDS) for MySQL on Outposts to support MySQL minor versions 8.0.23 and 8.0.25. We recommend that customers upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MySQL community.

    Learn more about upgrading your database instances in the Amazon RDS User Guide; and create or update a fully managed Amazon RDS database using the latest available minor versions in the Amazon RDS Management Console. Amazon RDS for MySQL on Outposts makes it easy to set up, operate, and scale MySQL deployments on premises and in the cloud. See Amazon RDS for MySQL on Outposts Pricing for pricing details and regional availability.

    » Introducing support for AWS KMS customer managed keys for encrypting artifacts by Amazon CloudWatch Synthetics

    Posted On: Oct 20, 2021

    CloudWatch Synthetics now supports using an AWS Key Management Service (AWS KMS) key that you provide to encrypt the canary run data that CloudWatch Synthetics stores in your Amazon Simple Storage Service (Amazon S3) bucket. By default, these artifacts are encrypted at rest using an AWS managed key.

    Canaries are modular, lightweight scripts that you can configure to run on a schedule to monitor your endpoints and APIs from the outside in. Canaries simulate the same actions as a user, which makes it possible for you to monitor your user experience nearly continuously. With the new runtime version syn-nodejs-3.3, you can choose to provide CloudWatch Synthetics with your own KMS key. Alternatively, you can choose SSE-S3 encryption mode when creating or updating the canary to encrypt the canary run data at rest. Then, CloudWatch Synthetics uses the specified encryption option instead of the default AWS managed key to encrypt the artifacts. CloudWatch Synthetics now also supports updating the S3 bucket location used for storing artifacts for a canary.
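
    As a sketch, the artifact encryption choices map to an ArtifactConfig structure on the canary. The field names below follow the CloudWatch Synthetics API reference; verify them there before use.

```python
# Customer managed KMS key:
artifact_config_kms = {
    "S3Encryption": {
        "EncryptionMode": "SSE_KMS",
        "KmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
    }
}

# SSE-S3 mode (no key ARN needed):
artifact_config_sse_s3 = {"S3Encryption": {"EncryptionMode": "SSE_S3"}}
```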

    This feature is available in all Regions where CloudWatch Synthetics is available, except China Regions.

    To learn more about this feature, see the CloudWatch Synthetics documentation. For pricing, refer to Amazon CloudWatch pricing.

    » AWS Security Hub adds support for cross-Region aggregation of findings to simplify how you evaluate and improve your AWS security posture

    Posted On: Oct 20, 2021

    AWS Security Hub now allows you to designate an aggregation Region and link some or all Regions to that aggregation Region. This gives you a centralized view of all your findings across all of your accounts and all of your linked Regions. After you link a Region to the aggregation Region, your findings are continuously synchronized between the Regions. Any update to a finding in a linked Region is replicated to the aggregation Region, and any update to a finding in the aggregation Region is replicated to the linked Region where the finding originated. To learn more about this feature, you can read about it in our documentation here or watch a demo video.
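
    As a sketch, designating an aggregation Region amounts to calling the CreateFindingAggregator API from that Region with a linking mode. The parameter names below follow the Security Hub API reference; verify them before use.

```python
# Called from the intended aggregation Region (e.g., via boto3's
# securityhub.create_finding_aggregator(**aggregator_request)).
aggregator_request = {
    "RegionLinkingMode": "SPECIFIED_REGIONS",  # or "ALL_REGIONS"
    "Regions": ["us-west-2", "eu-west-1"],     # the Regions to link
}
```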

    Previously, you needed to have a separate Security Hub tab open for each Region. Now, your Security Hub administrator or delegated administrator account can view and manage all of your findings in the aggregation Region. Individual Security Hub member accounts in the aggregation Region can also view and manage all of their findings across all linked Regions.

    Your Amazon EventBridge feed in your administrator account and aggregation Region also now includes all of your findings across all member accounts and linked Regions. This allows you to simplify integrations with ticketing, chat, incident management, logging, and auto-remediation tools by consolidating those integrations into your aggregation Region. There is no additional cost to use this feature.

    AWS Security Hub is available globally and is designed to give you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Systems Manager Patch Manager, AWS Chatbot, AWS Config, and AWS IAM Access Analyzer. You can also receive and manage findings from over 60 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks that are based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard.

    You can take action on these findings by investigating findings in Amazon Detective or sending them to AWS Audit Manager. You can also use Amazon EventBridge rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), response and remediation workflows, and incident management tools.

    You can enable your 30-day free trial of AWS Security Hub with a single click in the AWS Management console. To learn more about AWS Security Hub capabilities, see the AWS Security Hub documentation, and to start your 30-day free trial, see the AWS Security Hub free trial page.

    » Announcing General Availability of the AWS Panorama Appliance

    Posted On: Oct 20, 2021

    Today, the AWS Panorama Appliance is generally available. The AWS Panorama Appliance is a new device that enables customers to improve their operations and reduce costs by using existing on-premises cameras and analyzing video streams locally with computer vision.

    Customers in industrial, hospitality, logistics, retail and other industries want to use computer vision to make decisions faster and optimize their operations. For example, these organizations often need to perform physical inspections of production lines to spot defects in manufacturing products, monitor drive-through queues at quick-service restaurants to enhance customer experience, or optimize the layout of their physical locations by improving product placement. These customers typically have cameras installed onsite to support their businesses, but they often resort to manual processes like watching video feeds in real time to extract value from their network of cameras, which is tedious, expensive, and difficult to scale. While some smart cameras can provide real-time visual inspection, replacing existing cameras with new smart cameras can be cost prohibitive. Even then, smart cameras are often ineffective because they are limited to specific use cases and require additional effort to fine-tune. For example, updating a smart camera due to a simple change in the environment (e.g. lighting, camera placement, or production line speed) means that a customer often has to contact their vendor for support which can be costly and time-consuming. Alternatively, some customers send video feeds from existing on-premises cameras to third party servers, but often the required internet bandwidth is costly or facilities are in remote locations where internet connectivity can be slow, all of which degrades the usefulness and practicality of the analysis. Consequently, most customers are stuck using slow, expensive, error-prone, or manual processes for visual monitoring and inspection tasks that do not scale and can lead to missed defects or operational inefficiencies.

    The AWS Panorama Appliance helps customers to solve these challenges by enabling them to improve operations and reduce costs by using existing on-premises cameras and analyzing video streams locally with computer vision. Customers can get started in minutes by connecting the AWS Panorama Appliance to their network and identifying the video feeds for analysis. Because the computer vision processing happens locally on the AWS Panorama Appliance at the edge, customers can save on bandwidth costs and use it in locations with limited internet bandwidth. Additionally, the AWS Panorama Appliance is integrated with Amazon SageMaker (an AWS service that makes it easy for data scientists and developers to build, train, and deploy machine learning models), so customers can update their computer vision application in Amazon SageMaker and deploy the model to the AWS Panorama Appliance themselves.

    For customers that do not want to build their own computer vision applications, AWS Panorama Partners like Deloitte, TaskWatch, Vistry, Sony, and Accenture provide a wide range of solutions that can address unique use cases across manufacturing, construction, hospitality, retail, and other industries. For example, customers in the retail industry have used AWS Panorama Partners to develop computer vision applications that can analyze foot traffic to help optimize store layout and product placement, analyze peak times when additional staffing is needed to assist customers, and quantify inventory levels.

    Customers like The Vancouver Fraser Port Authority, Tyson Foods, Inc., and The Cincinnati/Northern Kentucky International Airport are working with AWS Panorama Partners to use AWS Panorama to improve quality control, optimize supply chains, and enhance consumer experiences at the edge.

    The AWS Panorama Appliance is available for sale on AWS Elemental in the United States, Canada, United Kingdom, and most of the European Union. The AWS Panorama service is available today in US East (N. Virginia), US West (Oregon), Canada (Central), and Europe (Ireland), with availability in additional AWS Regions in the coming months. To learn more and get started, visit the AWS Panorama product page.

    » Amazon Chime SDK announces messaging channel flows

    Posted On: Oct 20, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, screen share, and messaging capabilities to their web or mobile applications. Starting today, channel flows let developers execute business logic on in-flight messages before they are delivered to members of a messaging channel. Using channel flows, you can remove sensitive data such as government ID numbers, phone numbers, or profanity from messages before they are delivered, which may be helpful for implementing corporate communications policies or other communication guidelines. Channel flows can also be used to perform functions such as aggregating responses to a poll before sending the results back to participants.

    To get started, developers create channel processors in AWS Lambda, building in desired business logic. Channel flows can then be created using up to three different channel processors. Developers can apply channel flow functionality to channels, or channel moderators and administrators can add channel flows to channels. In channels where a flow has been applied, users that send messages will see sent messages briefly in a Pending state before they are delivered to the intended recipients. If a message fails to process, it will go into the Denied state and will not be sent. Recipients only see messages that have been successfully processed, helping ensure that critical business processing is completed on all delivered messages.
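
    A processor's business logic might look like the hypothetical Lambda handler below, which redacts phone-number-like digits from a message. The event and response shapes here are simplified assumptions; see the Amazon Chime SDK messaging documentation for the actual processor contract.

```python
import re

# Matches US-style phone numbers such as 555-123-4567.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def handler(event, context=None):
    """Hypothetical channel flow processor: redact phone numbers."""
    message = event["ChannelMessage"]  # assumed event shape
    message["Content"] = PHONE_RE.sub("[REDACTED]", message.get("Content", ""))
    return event

out = handler({"ChannelMessage": {"Content": "Call me at 555-123-4567"}})
# out["ChannelMessage"]["Content"] == "Call me at [REDACTED]"
```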

    To learn more about the Amazon Chime SDK and channel flows, review the following resources:

  • Amazon Chime SDK 
  • Amazon Chime SDK Developer Guide  
  • Amazon Chime SDK for JavaScript 
  • Blog - Use channel flows to remove profanity and sensitive content from messages in the Amazon Chime SDK
    » Amazon QuickSight launches new Table and Pivot table enhancements

    Posted On: Oct 20, 2021

    Amazon QuickSight now supports advanced styling for your tables and pivot tables. Authors can create beautiful tables, follow a design pattern, or apply a standardized corporate identity to their tabular visuals with the newly launched options to customize borders and colors. They can also apply custom borders and styling to their totals and subtotals, letting them create financial reports such as income statements. See here for more details.

    QuickSight now also supports hyperlinks on table visuals. Authors can create hyperlinks to external resources by formatting fields containing URLs as hyperlinks. Authors can customize the “Open in” behavior of these links to open in the same tab, a new tab, or a new window. They can also customize the display style of these links to render as a hyperlink, an icon, plain text, or even custom text. See here for more details.

    Tables also support images within table cells. Authors can add images to their table cells by formatting fields containing image URLs to display as images. Authors can size images to fit the cell height or cell width, or choose not to scale the image. See here for more details.

    Lastly, QuickSight now supports content alignment and wrapping customizations in tables and pivot tables. Authors can wrap text to display content without increasing the column width, and can vertically align text at the top, middle, or bottom of the cell. Additionally, authors can customize row height to control the presentation of the data. See here for more details.

    New Table and Pivot table enhancements are now available in all supported Amazon QuickSight regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), South America (São Paulo), and AWS GovCloud (US-West). See here for QuickSight regional endpoints.

    » Announcing Amazon WorkSpaces API to create new updated images with latest AWS drivers

    Posted On: Oct 19, 2021

    Amazon WorkSpaces now offers APIs that you can use to keep your WorkSpaces images up to date with the latest AWS drivers. Previously, WorkSpaces images were kept up to date by manually launching a WorkSpaces instance, installing driver updates, and creating a new image. With this launch, you can use WorkSpaces APIs to determine whether the latest AWS drivers are available for your images, install those updates, and create updated images. After the new image is created, you can test it before updating your production bundles or sharing the image with other AWS accounts. Keeping your WorkSpaces up to date with the latest AWS drivers lets you take advantage of the latest instance types and other infrastructure components offered by AWS.

    This API is now available in all AWS Regions where Amazon WorkSpaces is available.

    There is no additional cost to use this API, and it is automatically enabled for you. For more information, see CreateUpdatedWorkspaceImage in the Amazon WorkSpaces API Reference.
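
    As a minimal sketch, the CreateUpdatedWorkspaceImage action takes a source image ID plus a name and description for the updated image. The snippet below (the image ID, name, and description are hypothetical examples) assembles the request you would pass, for instance, to boto3's WorkSpaces client:

```python
# Sketch: build the request for the CreateUpdatedWorkspaceImage action.
# The image ID, name, and description below are hypothetical examples.

def build_update_image_request(source_image_id, name, description):
    """Assemble a CreateUpdatedWorkspaceImage request payload."""
    return {
        "SourceImageId": source_image_id,   # existing WorkSpaces image to refresh
        "Name": name,                       # name for the new, updated image
        "Description": description,
    }

request = build_update_image_request(
    "wsi-0123456789abcdef0",
    "win10-base-updated",
    "Base image refreshed with the latest AWS drivers",
)

# With boto3 (requires AWS credentials) you would then call:
# boto3.client("workspaces").create_updated_workspace_image(**request)
print(request["SourceImageId"])
```

    After the call returns a new image ID, you can test that image before updating production bundles, as described above.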

    » Bulk Editing of OpsItems in AWS Systems Manager OpsCenter

    Posted On: Oct 19, 2021

    AWS Systems Manager now supports bulk editing of work items within OpsCenter.

    Today, AWS announces an additional capability of AWS Systems Manager OpsCenter. The new Bulk Edit feature gives IT professionals the ability to perform actions against multiple operational work items (OpsItems) at the same time.

    This feature removes the heavy lifting of configuring and managing identical edits across multiple OpsItems. For example, if you need to change the status of multiple work items, you can now modify the status of several OpsItems at once using a configurable interface. Bulk edits can be performed either from the OpsCenter console under the OpsItems tab, or behind the scenes with an automation document (AWS-BulkEditOpsItems), which can be executed from the Automation console, the AWS Command Line Interface, or the AWS SDKs. With this launch, users can reach their desired state faster while saving operational time and reducing risk through automation.
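
    Conceptually, a bulk status change amounts to one UpdateOpsItem request per work item, which is roughly what the automation document batches for you. The sketch below (the OpsItem IDs are hypothetical) builds those per-item requests:

```python
# Sketch: apply the same status change to several OpsItems.
# The OpsItem IDs are hypothetical; each request would be sent via
# boto3: boto3.client("ssm").update_ops_item(**req)

def build_bulk_status_edits(ops_item_ids, new_status):
    """One UpdateOpsItem request per work item, all with the same status."""
    assert new_status in {"Open", "InProgress", "Resolved"}
    return [{"OpsItemId": item_id, "Status": new_status}
            for item_id in ops_item_ids]

requests = build_bulk_status_edits(
    ["oi-11111111aaaa", "oi-22222222bbbb", "oi-33333333cccc"],
    "Resolved",
)
print(len(requests))
```

    Running the AWS-BulkEditOpsItems document instead lets Systems Manager track the whole batch as a single automation execution.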

    This new feature is available in all AWS Regions where Systems Manager is offered. For more information about OpsCenter, see our documentation. To learn more about AWS Systems Manager, visit our product page.

    » AWS Systems Manager Fleet Manager now offers advanced filtering for Managed Instances

    Posted On: Oct 19, 2021

    Fleet Manager, a feature in AWS Systems Manager (SSM) that helps IT admins streamline and scale their remote server management processes, now enhances the reporting and filtering experience for Managed Instances. This new feature presents filtering options applicable to your data, taking out the guesswork. You no longer need to memorize and manually enter values for filtering; Fleet Manager automatically populates applicable filtering criteria such as instance IDs and IP addresses.

    With Fleet Manager, you can generate reports for your entire inventory of Managed Instances and download them for future use. This new feature presents at-a-glance information about key instance properties such as Image ID and patch status, including patch installed count, patch failed count, patch critical noncompliant count, and patch group. You can filter instances based on one or more criteria, allowing you to focus on a subset of instances. Fleet Manager also enables adding tags to single or multiple instances directly from the service console. The new filtering adds the ability to sort rows with a single click and to rearrange the sequence of columns through an intuitive drag-and-drop mechanism.

    Fleet Manager is a console-based experience in Systems Manager that provides you with visual tools to manage your Windows, Linux, and macOS servers. With it, you can easily perform common administrative tasks such as file system exploration, log management, Windows Registry operations, performance counters, and user management from a single console. Fleet Manager manages instances running both on AWS and on premises, without needing to remotely connect to the servers.

    Fleet Manager is available in all AWS Regions where Systems Manager is offered (excluding AWS China Regions). To learn more about Fleet Manager, visit our web page, read our blog post, or see our documentation and the AWS Systems Manager FAQs. To get started, choose Fleet Manager from the Systems Manager left navigation pane.

    » AWS Elemental MediaConvert now supports rich text rendering of IMSC 1.1 and TTML subtitle text

    Posted On: Oct 19, 2021

    AWS Elemental MediaConvert now supports rich text rendering of IMSC 1.1 text profile subtitles and the TTML subtitle format. Both of these formats allow detailed formatting that includes text size, position, justification, color, styling, and shadowing. For many viewers, on screen subtitles are an important part of the viewing experience and this feature gives subtitle authors more creative control of how text is rendered on screen. Additionally, IMSC and TTML allow greater text localization options including right-to-left text, rubies, and vertical text.

    For more information about configuring subtitles in MediaConvert, including the new style-passthrough feature for IMSC and TTML, please see the documentation.

    With AWS Elemental MediaConvert, audio and video providers with any size content library can easily and reliably transcode on-demand content for broadcast and multiscreen delivery. MediaConvert functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows and offer the capabilities needed to transport, transcode, package, and deliver video.

    Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaConvert is available. To learn more about MediaConvert, please visit https://aws.amazon.com/mediaconvert/.

    » PostgreSQL 14 RC 1 now available in Amazon RDS Database Preview Environment

    Posted On: Oct 19, 2021

    PostgreSQL 14 RC 1 is now available in the Amazon RDS Database Preview Environment, allowing you to test the release candidate version of PostgreSQL 14 on Amazon Relational Database Service (Amazon RDS).

    You can now deploy PostgreSQL 14 RC 1 for development and testing in the Amazon RDS Database Preview Environment. This release includes support for a number of extensions, including updates to PostGIS.

    The PostgreSQL community released PostgreSQL 14 RC 1 on September 23, 2021. PostgreSQL 14 includes improved functionality and performance such as larger connection counts, faster and smaller compression of large columns, support for timeout of idle sessions, and finer groupings of time-based data.

    The Amazon RDS Database Preview Environment supports both Single-AZ and Multi-AZ deployments on the latest generation of instance classes, and can be encrypted at rest using AWS Key Management Service (KMS) keys. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use standard PostgreSQL dump and load functionality to import or export your databases from the preview environment.

    Amazon RDS Database Preview Environment database instances are priced the same as production RDS instances created in the US East (Ohio) Region. The Amazon RDS Database Preview Environment Forum is available for you and the Amazon RDS team to share information and concerns about both the candidate versions of PostgreSQL 14 and the Amazon RDS Database Preview Environment.

    » Announcing AWS Data Exchange for Amazon Redshift (Preview)

    Posted On: Oct 19, 2021

    We are announcing the public preview of AWS Data Exchange for Amazon Redshift, a new feature that enables customers to find and subscribe to third-party data in AWS Data Exchange that they can query in an Amazon Redshift data warehouse in minutes. Data providers can list and offer products containing Amazon Redshift data sets in the AWS Data Exchange catalog, granting subscribers direct, read-only access to the data stored in Amazon Redshift. This feature empowers customers to quickly query, analyze, and build applications with these third-party data sets.

    With AWS Data Exchange for Amazon Redshift, customers can combine third-party data found on AWS Data Exchange with their own first-party data in their Amazon Redshift cloud data warehouse, no ETL required. Since customers are directly querying provider data warehouses, they can be certain they are using the latest data being offered. Additionally, entitlement, billing, and payment management are all automated: access to Amazon Redshift data is granted when a data subscription starts and removed when it ends, invoices are automatically generated, and payments are automatically collected and disbursed through AWS Marketplace.

    AWS Data Exchange for Amazon Redshift is available for preview in all regions where AWS Data Exchange and Amazon Redshift RA3 instances are available including: US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London).

    To explore available Amazon Redshift data products see the AWS Data Exchange data catalog. If you’re a registered data provider you can learn more about licensing data in Amazon Redshift. If you’re not already a registered data provider you can see our documentation on how to become a data provider.

    » Amazon Corretto October Quarterly Updates

    Posted On: Oct 19, 2021

    On October 19th, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions. Corretto 11.0.13 and 8.312 are now available for download. Amazon Corretto 17 updates will be available shortly after the release is tagged in the OpenJDK 17 repository. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.

    Click Corretto 8, Corretto 11 or Corretto 17 to download Corretto. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo.

    Feedback is welcome!

    » AWS Pricing Calculator now supports Amazon CloudFront

    Posted On: Oct 19, 2021

    Amazon CloudFront is now supported by the AWS Pricing Calculator. You can estimate the cost of CloudFront workloads, which primarily consists of charges for data transfer and requests. Apart from providing tips to estimate the number of requests based on your data transfer volume, the calculator gives you a granular view of costs across different usage tiers and CloudFront regions.

    Using the estimate produced by AWS Pricing Calculator, you can determine the optimal monthly spend commitment towards the CloudFront Security Savings Bundle, a flexible self-service pricing plan that helps you save up to 30% on your CloudFront bill in exchange for a 1-year commitment. You can easily enable the CloudFront Security Savings Bundle from the CloudFront console. Finally, the calculator also determines whether you qualify for additional discounts based on your total monthly traffic levels.

    To get started, choose Amazon CloudFront in AWS Pricing Calculator. To learn more about how to save, share, and export cost estimates, see the AWS Pricing Calculator User Guide.

    » Amazon AppFlow is now available in the AWS Africa (Cape Town) Region

    Posted On: Oct 18, 2021

    Amazon AppFlow, a fully managed integration service that helps customers securely transfer data between AWS services and cloud applications, is now available in the AWS Africa (Cape Town) Region. With AppFlow, you can run data flows at enterprise scale between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. See where Amazon AppFlow is available by using the AWS Region Table.

    To learn more and get started, visit the Amazon AppFlow product page and our documentation.

    » Porting Assistant for .NET adds support for WCF, OWIN, and System.Web.Mvc application assessment and porting

    Posted On: Oct 18, 2021

    Porting Assistant for .NET now supports assessment and porting of Windows Communication Foundation (WCF), Open Web Interface for .NET (OWIN), and ASP.NET System.Web.Mvc namespaces to .NET Core 3.1 or .NET 5. Following the GA release of Core WCF project in February 2021, Porting Assistant can now assess and provide recommendations to port WCF applications to Core WCF. It also supports assessment and porting of OWIN and System.Web.Mvc namespace configurations to .NET Core 3.1 or .NET 5. Developers can use the existing Porting Assistant for .NET tool or Porting Assistant for .NET Visual Studio IDE extension to get started. 

    Porting Assistant for .NET is an open source analysis tool that reduces the manual effort and guesswork involved in porting .NET Framework applications to .NET Core or .NET 5, helping customers move to Linux faster. It identifies incompatibilities with .NET Core or .NET 5, generates an assessment report with known replacement suggestions, and assists with porting. By modernizing .NET applications to Linux, customers can take advantage of the improved performance, increased security, reduced cost, and robust ecosystem of Linux.

    Learn more about Porting Assistant for .NET in our documentation here.

    » Amazon Keyspaces (for Apache Cassandra) now supports automatic data expiration by using Time to Live (TTL) settings

    Posted On: Oct 18, 2021

    Amazon Keyspaces (for Apache Cassandra), a scalable, highly available, and fully managed Apache Cassandra–compatible database service, now supports automatic data expiration by using Time to Live (TTL) settings. With TTL, you set expiration times on attributes or rows in your tables, and Keyspaces automatically deletes those expired attributes or rows.

    TTL is useful if your data is growing rapidly and you want to control costs by deleting data that you don’t want to retain indefinitely. If you have compliance requirements related to data retention, you can use TTL to delete data at the desired time. You can specify TTL settings with either INSERT or UPDATE commands, or you can set a default TTL value at the table level. When data expires, Amazon Keyspaces immediately filters it out of application reads.
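
    For illustration, TTL can be set as a table-level default or per row with CQL's USING TTL clause, as in standard Cassandra. The statements below are held as strings for clarity; the keyspace, table, and column names are hypothetical:

```python
# Sketch: the CQL statements involved in Keyspaces TTL, held as strings
# for illustration. Keyspace, table, and column names are hypothetical.

# Table-level default: every row expires 30 days after it is written.
create_table = """
CREATE TABLE shop.sessions (
    session_id text PRIMARY KEY,
    user_id text
) WITH default_time_to_live = 2592000;
"""

# Per-row TTL: this row expires after one hour, overriding the default.
insert_row = """
INSERT INTO shop.sessions (session_id, user_id)
VALUES ('s-123', 'u-456')
USING TTL 3600;
"""

print("USING TTL" in insert_row)
```

    A TTL of 0 on an insert or update disables expiration for that row, which is handy for records that must outlive the table default.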

    Amazon Keyspaces deletes expired data automatically, typically within 10 days after it expires. Because Keyspaces TTL is fully managed, you are not responsible for running the data cleanup process, and Keyspaces TTL does not impact the performance of your application.

    TTL pricing is based on the number and the size of rows and attributes deleted by using TTL. TTL is available in all AWS Regions where Amazon Keyspaces is offered. To learn more, see Announcing Amazon Keyspaces Time to Live (TTL) general availability on the AWS Database Blog. 

    » Introducing the AWS Networking Competency for Consulting Partners

    Posted On: Oct 18, 2021

    Today, we announced the AWS Networking Competency for Consulting Partners. These partners have deep domain expertise in developing a consistent network and security policy, as well as solutions that offer a new way of routing traffic through private backbones and cloud cores. AWS Networking Competency Consulting Partners can help customers provide secure ingresses and convenient on-ramps into clouds to mitigate latency, improve availability, enhance application experience, and provide visibility and control in cloud networking.

    Networking is fundamental to cloud adoption and critical for infrastructure expansion, redundancy, and resiliency. As companies migrate applications to public infrastructure-as-a-service clouds and software-as-a-service (SaaS) environments, both the datacenter and the datacenter network become distributed. Networking teams are not always equipped to evolve at the pace of innovation in automation, orchestration, monitoring, analytics, and cost optimization. Company decision makers look to strategic networking partners to provide end-to-end solutions, templates, best practices, training, and support.

    To make it easier for customers to find validated AWS networking partners, we are excited to introduce the new AWS Networking Competency for Consulting Partners.

    AWS Networking Competency Consulting Partners have demonstrated technical proficiency and proven customer success in practice areas ranging across migration, modernization, hybrid cloud, security at the edge, and network visibility, as well as all types of industries such as healthcare, retail, and financial services.

    Find an AWS Networking Competency Consulting Partner Today >>

    Learn more here.

    » FreeRTOS adds support for symmetric multiprocessing (SMP)

    Posted On: Oct 18, 2021

    FreeRTOS adds symmetric multiprocessing (SMP) support in the kernel, enabling developers designing FreeRTOS-based applications to utilize the SMP capabilities of multi-core microcontrollers. Multi-core microcontrollers, in which two or more identical processor cores share the same memory, allow the operating system to distribute tasks between cores to balance processor load as desired by the application. This allows applications to optimize the resource utilization of multi-core microcontrollers.

    The FreeRTOS SMP kernel has a consistent set of configuration options, APIs, and behaviors for systems with multiple compute cores, so developers can transition between multi-core and single-core systems with minimal effort. There are reference implementations for the xcore platform from XMOS and for Raspberry Pi Pico; for more details on the FreeRTOS SMP kernel and how to port to other platforms, see Porting to FreeRTOS SMP Kernel.

    Get started by downloading FreeRTOS SMP kernel source code from GitHub, and find more information on the FreeRTOS kernel page.

    » Amazon WorkSpaces Windows Server 2019 Bundles Now Available in the AWS GovCloud (US-West) Region

    Posted On: Oct 18, 2021

    Amazon WorkSpaces now offers new bundles powered by Windows Server 2019, providing a Windows 10 desktop experience along with a 64-bit Microsoft Office 2019 Professional Plus bundle option in the AWS GovCloud (US-West) Region. The feature brings a refreshed Windows 10 desktop experience, and enables customers to run applications that require recent Windows versions.

    While Windows Server 2016 powered WorkSpaces bundles are still available, customers can now choose to run WorkSpaces powered by Windows Server 2019 and benefit from new features like Windows Subsystem for Linux (WSL 1) and OneDrive Files On-Demand. In addition, the Plus bundle for Windows Server 2019 powered WorkSpaces comes with 64-bit Microsoft Office 2019 Professional Plus, boosting Office performance on WorkSpaces.

    The new WorkSpaces Windows Server 2019 bundles and the Plus bundles with Office 2019 are now available in all AWS Regions where Amazon WorkSpaces is available. For pricing information, visit our pricing page.

    » Amazon EC2 now offers Microsoft SQL Server on Microsoft Windows Server 2022 AMIs

    Posted On: Oct 15, 2021

    Amazon EC2 now adds 8 new Amazon Machine Images (AMIs) with SQL Server 2019 and 2017 on Windows Server 2022. With these AWS managed AMIs, customers can launch SQL Server on Windows Server 2022 and take full advantage of the latest Windows features on AWS. The AMIs are available in four editions – Enterprise, Standard, Web, and Express. See the list below.

    * Windows_Server-2022-English-Full-SQL_2019_Enterprise
    * Windows_Server-2022-English-Full-SQL_2019_Standard
    * Windows_Server-2022-English-Full-SQL_2019_Web
    * Windows_Server-2022-English-Full-SQL_2019_Express
    * Windows_Server-2022-English-Full-SQL_2017_Enterprise
    * Windows_Server-2022-English-Full-SQL_2017_Standard
    * Windows_Server-2022-English-Full-SQL_2017_Web
    * Windows_Server-2022-English-Full-SQL_2017_Express
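
    To locate one of the AMIs above programmatically, you can filter the EC2 DescribeImages action by name. The sketch below builds such a request; the name pattern mirrors the list above, and the actual API call is shown as a comment because it needs AWS credentials:

```python
# Sketch: build an EC2 DescribeImages request that finds the AWS-managed
# SQL Server 2019 Standard on Windows Server 2022 AMIs.

def build_describe_images_request(name_pattern):
    """Filter Amazon-owned images by name; the pattern mirrors the AMI list above."""
    return {
        "Owners": ["amazon"],
        "Filters": [{"Name": "name", "Values": [name_pattern]}],
    }

request = build_describe_images_request(
    "Windows_Server-2022-English-Full-SQL_2019_Standard-*"
)

# With boto3 (requires AWS credentials), you would then pick the newest image:
# images = boto3.client("ec2").describe_images(**request)["Images"]
# latest = max(images, key=lambda i: i["CreationDate"])
print(request["Owners"])
```

    Sorting by CreationDate matters because AWS periodically republishes these managed AMIs with updated patches.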

    Amazon EC2 is the proven, reliable, secure cloud for your SQL Server workloads. By running SQL Server on Windows Server 2022 on EC2, you can experience the improved security, performance, and reliability of Windows Server 2022. In addition, all SQL Server AMIs come with pre-installed software such as AWS Tools for Windows PowerShell, AWS Systems Manager, AWS CloudFormation, and various network and storage drivers to make your management easier.

    You can launch your SQL Server workloads on a broad range of instance types that best meet your needs by selecting the new SQL Server AMIs either directly from your AWS Management Console, or through API or CLI commands. These AMIs are available across all Public, AWS GovCloud (US) and China regions of AWS. For more information, visit the Microsoft SQL Server on AWS page.

    » Introducing Distributed Load Testing on AWS v2.0.0

    Posted On: Oct 15, 2021

    Distributed Load Testing on AWS is a solution that automates testing software applications at scale and under load to help you identify potential performance issues before release. It creates and simulates thousands of connected users generating transactional records at a constant pace, without the need to provision servers.

    The updated solution includes support for viewing data from previous test runs, including test configuration, test data, and results history visualized via Amazon CloudWatch dashboards. The update also adds support for using your existing Amazon Virtual Private Cloud (Amazon VPC) and launches AWS Fargate tasks in multiple Availability Zones. Lastly, the update includes AWS Cloud Development Kit (AWS CDK) source code to generate the AWS CloudFormation template, and supports accessing the solution container image from a public Amazon Elastic Container Registry repository managed by AWS.

    Additional AWS Solutions Implementations offerings are available on the AWS Solutions page, where customers can browse common questions by category to find answers in the form of succinct Solution Briefs or comprehensive Solution Implementations, which are AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » New datasets available on the Registry of Open Data from University of Sydney, International Brain Laboratory, Taiwanese Central Weather Bureau, and others

    Posted On: Oct 15, 2021

    Twenty-six new or updated datasets from the University of Sydney, the International Brain Laboratory, the Taiwanese Central Weather Bureau, and others are now available on the Registry of Open Data in the following categories.

    Climate and weather:

  • CAFE60 reanalysis from the Commonwealth Scientific and Industrial Research Organisation (CSIRO)
  • Fundamental Climate Data Records, Oceanic Climate Data Records, Terrestrial Climate Data Records, and Atmospheric Climate Data Records from the National Oceanic and Atmospheric Administration (NOAA)
  • Global Mosaic of Geostationary Satellite Imagery (GMGSI) from the National Oceanic and Atmospheric Administration (NOAA)
  • U.S. Climate Normals from the National Oceanic and Atmospheric Administration (NOAA)
  • U.S. Climate Gridded Dataset (NClimGrid) from the National Oceanic and Atmospheric Administration (NOAA)
  • Ocean Climate Stations Moorings from the National Oceanic and Atmospheric Administration (NOAA)
  • Central Weather Bureau Open Data from the Taiwanese Central Weather Bureau
  • North American Mesoscale (NAM) Forecast System from the National Oceanic and Atmospheric Administration (NOAA)
    Cybersecurity:

  • NapierOne Mixed File Dataset from the School of Computing at Edinburgh Napier University

    Geospatial:

  • Natural Earth managed by the North American Cartographic Information Society
  • Global Seasonal Sentinel-1 Interferometric Coherence and Backscatter from Earth Big Data
  • Scottish Public Sector LiDAR from the Joint Nature Conservation Committee
  • Updated: Sentinel-2 L2A 120m Mosaic from Sinergise

    Life sciences:

  • PubMed Central Article Datasets from the National Library of Medicine (NLM)
  • OpenCell from the Chan Zuckerberg Biohub
  • Google Brain Genomics Sequencing Dataset managed by Amazon Web Services (AWS)
  • 1000 Genomes Phase 3 Reanalysis with DRAGEN 3.5 - Data Lakehouse Ready managed by AWS
  • Genome Aggregation Database (gnomAD) - Data Lakehouse Ready managed by AWS
  • Australasian Genomes from the Australasian Wildlife Genomics Group at the University of Sydney
  • Pacific Ocean Sound Recordings from the Monterey Bay Aquarium Research Institute
  • IBL Neuropixels Brainwide Map from the International Brain Laboratory

    Machine Learning:

  • VoiSeR: Voice-based refinements of product search from Amazon

    Regulatory:

  • Legal Entity Identifier and Legal Entity Reference Data from the Global Legal Entity Identifier Foundation

    Looking to make your data available? The AWS Open Data Sponsorship Program covers the cost of storage for publicly available, high-value, cloud-optimized datasets. We work with data providers who seek to:

  • Democratize access to data by making it available for analysis on AWS
  • Develop new cloud-native techniques, formats, and tools that lower the cost of working with data
  • Encourage the development of communities that benefit from access to shared datasets

    Learn how to propose your dataset to the AWS Open Data Sponsorship Program. Learn more about open data on AWS.

    » AWS Glue Crawlers support Amazon S3 event notifications

    Posted On: Oct 15, 2021

    AWS Glue includes crawlers, a capability that makes discovering datasets simpler by scanning data in Amazon S3 and relational databases, extracting their schemas, and automatically populating the AWS Glue Data Catalog, which keeps the metadata current. This reduces time to insight by making newly ingested data quickly available for analysis with your favorite analytics and machine learning tools.

    When configuring the AWS Glue crawler to discover data in Amazon S3, you can choose between a full scan, where all objects in a given path are processed every time the crawler runs, and an incremental scan, where only the objects in newly added folders are processed. A full scan is useful when changes to the table are non-deterministic and can affect any object or partition. An incremental crawl is useful when new partitions, or folders, are added to the table. For large, frequently changing tables, the incremental crawling mode can be enhanced to reduce the time it takes the crawler to determine which objects have changed.

    Today we are launching support for Amazon S3 Event Notifications as a source for AWS Glue crawlers to incrementally update AWS Glue Data Catalog tables. Customers can configure Amazon S3 Event Notifications to be sent to an Amazon Simple Queue Service (SQS) queue, which the crawler uses to identify newly added or deleted objects. With each run of the crawler, the SQS queue is inspected for new events; if none are found, the crawler stops. If events are found in the queue, the crawler inspects their respective folders and processes the new objects. This new mode reduces the cost and time a crawler needs to update large and frequently changing tables.
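
    Conceptually, each crawler run drains the queue and reduces the S3 event notifications to the set of folders it must revisit. The sketch below shows that reduction; the bucket layout and keys are hypothetical, and a literal JSON string stands in for a received SQS message body:

```python
import json
import posixpath

# Sketch: reduce S3 event notifications (as delivered to an SQS queue)
# to the set of folders a crawler would revisit. Keys are hypothetical.

def changed_folders(message_bodies):
    """Collect the parent folder of every object mentioned in the events."""
    folders = set()
    for body in message_bodies:
        event = json.loads(body)
        for record in event.get("Records", []):
            key = record["s3"]["object"]["key"]
            folders.add(posixpath.dirname(key))
    return folders

# A simplified S3 event notification body, standing in for a real SQS message.
sample_body = json.dumps({
    "Records": [
        {"s3": {"object": {"key": "sales/year=2021/month=10/part-0.parquet"}}},
        {"s3": {"object": {"key": "sales/year=2021/month=11/part-0.parquet"}}},
    ]
})

print(sorted(changed_folders([sample_body])))
```

    Because only the affected partition folders are listed again, the crawler's work scales with the change volume rather than the table size.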

    AWS Glue crawler support for Amazon S3 Event Notifications is available in all regions where AWS Glue is available, see the AWS Region Table. To learn more, visit the AWS Glue crawler documentation.

    » Announcing Amazon Forecast Weather Index for Central America, Middle East and Africa

    Posted On: Oct 15, 2021

    We’re excited to announce that the Amazon Forecast Weather Index is now also available for the Central America, Middle East, and Africa regions. The Weather Index can increase your forecasting accuracy by automatically including the latest local weather information in your demand forecasts, with one click and at no extra cost. Weather conditions influence consumer demand patterns, product merchandizing decisions, staffing requirements, and energy consumption needs. However, acquiring, cleaning, and effectively using live weather information for demand forecasting is challenging and requires ongoing maintenance. With this launch, customers who have been using the Weather Index in North America, South America, Europe, and Asia-Pacific can now also include 14-day weather forecasts for Central America, the Middle East, and Africa in their demand forecasts with one click.

    The Amazon Forecast Weather Index combines multiple weather metrics from historical weather events and current forecasts at a given location to increase your demand forecast model accuracy. Amazon Forecast uses machine learning to generate more accurate demand forecasts, without requiring any prior ML experience. Amazon Forecast brings the same technology used at Amazon.com to developers as a fully managed service, removing the need for developers to manage resources or re-build their systems. 

    To get started with this capability, see the details in our blog and go through the notebook in our GitHub repo that walks you through how to use the Amazon Forecast APIs to enable the Weather Index. You can use this capability in all Regions where Amazon Forecast is publicly available. For more information about Region availability, see Region Table.  

    » Amazon EMR 6.4 release version now supports Apache Spark 3.1.2

    Posted On: Oct 14, 2021

    Amazon EMR 6.4 release version now supports Apache Spark 3.1.2 and provides runtime improvements with Amazon EMR Runtime for Apache Spark. Amazon EMR 6.4 provides Presto runtime improvements for PrestoDB 0.254, and runtime improvements for Apache Hive 3.1.2 when you use AWS Glue Data Catalog for your metastore.

    Amazon EMR 6.4 supports Apache Hudi 0.8.0, Trino 359, PrestoDB 0.254, Apache HBase 2.4.4, Apache Phoenix 5.1.2, Apache Flink 1.13.1, Apache Livy 0.7.1, JupyterHub 1.4.1, Apache Zookeeper 3.5.7 and Apache MXNet 1.8.0. Please see our release guide to learn more.

    Starting with Amazon EMR release versions 5.30 and 6.1, you can now automatically terminate idle Amazon EMR clusters. This helps you minimize costs without monitoring cluster activity. To get started, read our documentation here.
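
    One way to enable this is to attach an auto-termination policy with an idle timeout to a running cluster, via EMR's PutAutoTerminationPolicy action. The sketch below builds the request payload; the cluster ID is hypothetical and the timeout is in seconds:

```python
# Sketch: attach an auto-termination policy to a running EMR cluster.
# The cluster ID is hypothetical; the idle timeout is in seconds.

def build_auto_termination_request(cluster_id, idle_seconds):
    """Request payload for EMR's PutAutoTerminationPolicy action."""
    return {
        "ClusterId": cluster_id,
        "AutoTerminationPolicy": {"IdleTimeout": idle_seconds},
    }

# Terminate the cluster after one hour of inactivity.
request = build_auto_termination_request("j-1ABCDEFGHIJKL", 3600)

# With boto3 (requires AWS credentials):
# boto3.client("emr").put_auto_termination_policy(**request)
print(request["AutoTerminationPolicy"]["IdleTimeout"])
```

    The same policy can also be supplied when the cluster is created, so short-lived job clusters clean themselves up.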

    Amazon EMR Studio now supports multiple languages in the same Jupyter-based notebook for Spark workloads. Please see our documentation to learn more. You can now authenticate Amazon EMR Studio users using IAM-based authentication or IAM Federation, in addition to AWS Single Sign-On. You can learn more here.

    Amazon EMR 6.4 includes Hudi 0.8.0, which allows you to use multiple applications to concurrently write to the same Hudi table. You can find more details on Hudi 0.8.0 features here. You can now report Hudi Metrics to Amazon CloudWatch, and set Hudi configurations at the cluster level using EMR Configurations API and the Reconfiguration feature.

    Amazon EMR 6.4 now supports Spark SQL to write to and update Apache Hive metadata tables on Apache Ranger enabled Amazon EMR clusters. Please see our documentation to learn more.

    Starting with Amazon EMR release versions 5.7 and later, you can now create clusters with multiple custom Amazon Machine Images (AMIs). You can include both AWS Graviton and non-AWS Graviton instances in the same cluster. For more information, please read our documentation. Amazon EMR 6.4 is generally available in all Regions where Amazon EMR is available. Please see Regional Availability of Amazon EMR and our release notes for more details.

    » Amazon Kendra now available in AWS GovCloud (US-West) Region

    Posted On: Oct 14, 2021

    AWS customers can now use Amazon Kendra to build intelligent search applications in the AWS GovCloud (US-West) Region.

    Amazon Kendra is a highly accurate intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.

    Visit the AWS Region Table to see where Amazon Kendra is available. To get started, visit the Kendra documentation page.

    » AWS RoboMaker now supports expanded configuration for any robot and simulation software

    Posted On: Oct 14, 2021

    AWS RoboMaker, a service that allows customers to simulate robotics applications at cloud scale, now supports expanded configuration for any robot and simulation software. Previously, Robot Operating System (ROS) and Gazebo were the only supported robot and simulation software configurations in RoboMaker. This new feature enables customers to use and configure any robot and simulation software of their choice while running simulations in RoboMaker.

    To use this feature, select General software suite for your robot application and Simulation runtime for your simulation application. With the Simulation runtime configuration, RoboMaker bypasses validations for any specific robot or simulation software, and provides generic simulation features such as sourcing files to the simulation environment, logging, launching simulation tools, and streaming tool GUIs.

    AWS RoboMaker is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore). To get started, please visit the RoboMaker webpage or run a sample simulation job in the RoboMaker console.

    » Amazon MemoryDB for Redis is now available in 11 additional AWS Regions

    Posted On: Oct 14, 2021

    Starting today, Amazon MemoryDB for Redis is generally available in 11 additional AWS Regions: US East (Ohio), US West (N. California, Oregon), Canada (Central), Europe (London, Stockholm), and Asia Pacific (Hong Kong, Seoul, Singapore, Sydney, Tokyo).

    Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It is purpose built for modern applications with microservices architectures. Amazon MemoryDB is compatible with Redis, a popular open source data store, so customers can quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that they already use today. With Amazon MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. Amazon MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB can be used as a high-performance primary database for your microservices applications eliminating the need to separately manage both a cache and durable database.

    With these additional regions, Amazon MemoryDB for Redis is now available in 15 AWS Regions. To get started, you can create an Amazon MemoryDB cluster in minutes through the AWS Management Console, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK). To learn more, visit the Amazon MemoryDB product page or documentation. Have questions or feature requests? Email us at: memorydb-help@amazon.com.
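    As a rough sketch of getting started from the AWS CLI, a minimal cluster can be created in one of the newly added Regions (the cluster name is a placeholder, and the default "open-access" ACL is used only for illustration; production clusters should use a restrictive ACL):

```shell
# Create a minimal MemoryDB cluster in US East (Ohio), one of the
# newly supported Regions. Node type and name are illustrative.
aws memorydb create-cluster \
    --cluster-name demo-cluster \
    --node-type db.r6g.large \
    --acl-name open-access \
    --region us-east-2

# Poll cluster status until it becomes "available".
aws memorydb describe-clusters \
    --cluster-name demo-cluster \
    --region us-east-2
```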

    » AWS Batch adds console support for visualizing AWS Step Functions workflows

    Posted On: Oct 14, 2021

    You can now manage AWS Step Functions workflows in the AWS Batch console, where you can automate Batch jobs to help build long-running business-critical workflows that require machine learning, data analysis, or overnight batch processing.

    AWS Batch is a cloud-native batch scheduler that anyone (scientists, financial analysts, or developers, from enterprises to startups) can use to efficiently run batch jobs on AWS. AWS Step Functions is a low-code visual workflow service used to orchestrate AWS services, automate business processes, and build serverless applications.

    Organizations use AWS Batch and AWS Step Functions together to build scalable, distributed batch computing workflows. AWS Batch plans, schedules, and executes your batch computing workloads across AWS compute services and features, such as AWS Fargate, Amazon EC2, and Spot Instances. With AWS Step Functions, you can compose workflows that integrate with multiple services, handle errors, and automatically scale to meet your business needs. Together, you can use AWS Step Functions to orchestrate preprocessing of data as part of your workflow, then use AWS Batch to handle the large compute executions, providing an automated, scalable, and managed batch computing workflow.

    Now you can visualize where and how your Batch jobs are composed into workflows without leaving the Batch console. You can navigate more easily between your Batch jobs, the workflows they are involved in, and their workflow executions, bringing together two core AWS services to streamline management of your business-critical workflows. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.

    To get started, open the Workflow Orchestration page in the Batch console. If you are new to Step Functions, select Orchestrate Batch jobs with Step Functions Workflows to deploy a sample project.

    This feature is available in all regions where both AWS Step Functions and AWS Batch are available. View the AWS Regions table for details.

    To learn more, read Visualizing AWS Step Functions workflows from the AWS Batch console or see the Orchestrating Batch jobs section in the Batch developer guide.

    » AWS RoboMaker now supports Graphics Processing Unit (GPU) based simulation jobs

    Posted On: Oct 14, 2021

    AWS RoboMaker, a service that allows customers to simulate robotics applications at cloud scale, now supports GPU based simulation jobs for compute-intensive simulation workloads such as high fidelity simulation, vision processing, and machine learning (ML). Previously, AWS RoboMaker simulation jobs ran only on central processing unit (CPU) instances; now you can choose between a CPU based or GPU based simulation job.

    Using AWS RoboMaker, developers can run, scale, and automate GPU based simulations. GPU based simulations support higher frames-per-second, higher resolutions, lower sensor latencies, and faster simulation job completion times than CPU based simulation jobs. These capabilities enable improved sensing by cameras and realistic rendering needed for use cases such as ML model training, reinforcement learning, and testing use cases that require high fidelity simulations. When running a GPU based simulation job, the AWS RoboMaker GUI tool viewer now supports higher resolutions, enabling you to see simulated objects in greater detail.

    AWS RoboMaker is available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore) regions. Learn more about AWS RoboMaker GPU pricing on our webpage or get started by running a sample GPU application in the AWS RoboMaker console.

    » Amazon EC2 Auto Scaling now supports describing Auto Scaling groups using tags

    Posted On: Oct 14, 2021

    Today, Amazon EC2 Auto Scaling announced the ability to describe Auto Scaling groups using tags. Tag-based filtering makes it easier for you to view and manage your Auto Scaling groups based on the tags that you are interested in. Each tag is a simple label consisting of a customer-defined key and an optional value.

    For example, you may have multiple Auto Scaling groups where you tag them to indicate what environment they are a part of (e.g., using the key "environment" whose value might be "dev", "test" or "prod"). Previously, you could determine which groups had the "environment:prod" tag by calling the describe-tags API first, and then describing the Auto Scaling groups that were returned. Now you can directly call the describe-auto-scaling-groups API and filter for the "environment:prod" tag.
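    The difference can be sketched with the AWS CLI, assuming groups tagged with the key "environment" (the tag key and value here are placeholders):

```shell
# Previously: two steps. First find which groups carry the tag...
aws autoscaling describe-tags \
    --filters "Name=key,Values=environment" "Name=value,Values=prod"
# ...then describe those groups by name in a second call.

# Now: one step. Filter describe-auto-scaling-groups directly by tag.
aws autoscaling describe-auto-scaling-groups \
    --filters "Name=tag:environment,Values=prod"
```

The `tag-key` and `tag-value` filter names can also be used when you only care about the key or the value in isolation.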

    This feature is available through the AWS SDKs, and the AWS Command Line Interface (CLI). Amazon EC2 Auto Scaling is available in all public AWS Regions and AWS GovCloud (US) Regions. To learn more about this feature, visit this AWS documentation.

    » Network Load Balancer now supports TLS 1.3

    Posted On: Oct 14, 2021

    Network Load Balancer (NLB) now supports version 1.3 of the Transport Layer Security (TLS) protocol, enabling you to optimize the performance of your backend application servers while helping to keep your workloads secure. TLS 1.3 on NLB works by offloading encryption and decryption of TLS traffic from your application servers to the load balancer, and provides encryption all the way to your targets. TLS 1.3 is optimized for performance and security by using one round trip (1-RTT) TLS handshakes and only supporting ciphers that provide perfect forward secrecy. As with other versions of TLS, NLB preserves the source IP of the clients to the back-end applications while terminating TLS on the load balancer.

    NLB with TLS 1.3 provides you with the tools to more easily manage your application security, enabling you to improve the security posture of your applications. Using TLS for NLB, you can centralize the deployment of SSL certificates using NLB’s integration with AWS Certificate Manager (ACM) and AWS Identity and Access Management (IAM). You can also analyze TLS traffic patterns and troubleshoot issues. NLB also allows you to use predefined security policies, which control the ciphers and protocols that your NLB presents to your clients.

    TLS 1.3 is available on NLBs in all commercial AWS Regions and AWS GovCloud (US) Regions. Please visit the NLB documentation to learn more.

    » Amazon SageMaker Data Wrangler now supports Amazon Athena Workgroups, feature correlation, and customer managed keys

    Posted On: Oct 14, 2021

    Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface.

    Starting today, you can query data on Amazon Athena using workgroups, enable multi-key joins for datasets, visualize correlation and duplicate rows, and provide customer managed keys when exporting your data flows, which make it easier and faster to prepare data for ML. Below is a detailed description of these features:

  • Support for Athena Workgroups. Amazon Athena Workgroups are a resource type that can be used to separate query execution and query history between users, teams, or applications running under the same AWS account. Starting today, you can query data with Athena from SageMaker Data Wrangler using the workgroup of your choice.
  • Two new visualizations to help with data preparation:
  • With SageMaker Data Wrangler’s feature correlation visualization you can easily calculate the correlation of features in your data set and visualize them as a correlation matrix.
  • With the new duplicate row detection visualization, you can quickly detect if your data set has any duplicate rows.
  • Multi-key joins. You can now specify multiple columns when joining together two data sets in SageMaker Data Wrangler and delete intermediate steps inside of SageMaker Data Wrangler flows.
  • Support for customer managed keys (CMKs) using AWS Key Management Service (AWS KMS). Starting today, you can specify the KMS key both when using the “Export to S3” feature and in the notebooks exported from within SageMaker Data Wrangler.
    To get started with the new capabilities of Amazon SageMaker Data Wrangler, open Amazon SageMaker Studio after upgrading to the latest release and click File > New > Flow from the menu or “new data flow” from the SageMaker Studio launcher. To learn more about the new features, view the documentation.

    » Amazon SageMaker Projects now supports Image Building CI/CD templates

    Posted On: Oct 13, 2021

    Amazon SageMaker Projects, the first purpose-built service that manages continuous integration and continuous delivery (CI/CD) resources for machine learning (ML) projects, now has CI/CD templates for building Docker images used in training, processing, and inference.

    SageMaker Projects already provides templates that enable customers to easily provision CI/CD resources for training and deploying ML models; this allows customers to incorporate engineering best practices in their ML projects. Now, customers can use image building CI/CD templates and leverage the same best practices to build Docker images used in ML projects.

    Using the 1P image building CI/CD templates, customers can maintain a repository of dependencies used to build Docker images and build new Docker images upon changes in the repository. They can create and update the Docker containers that drive each step of the ML process in an automated and source-controlled fashion. In addition, customers can trigger model training/deployment pipelines which use newly built images, enabling CI/CD across image building, training, and deployment.

    To get started, create a new SageMaker Project from the SageMaker Studio or the command-line interface using the new 1P image building CI/CD template. To learn more visit our documentation page and read our blog on Image Building CI/CD templates.

    » Amazon Kinesis Data Analytics now supports Apache Flink v1.13

    Posted On: Oct 13, 2021

    You can now build and run stream processing applications using Apache Flink version 1.13 in Amazon Kinesis Data Analytics. Apache Flink v1.13 provides enhancements to the Table/SQL API, improved interoperability between the Table and DataStream APIs, stateful operations using the Python DataStream API, features to analyze application performance, an exactly-once JDBC sink, and more. With this launch, you also get an Apache Kafka connector that works with AWS IAM authentication when you’re using Amazon Managed Streaming for Apache Kafka (Amazon MSK) as your application’s data source.

    Kinesis Data Analytics now supports Apache Flink applications built using JDK 11, Scala 2.12, Python 3.8, and Apache Beam v2.32 Java applications. With Amazon Kinesis Data Analytics Studio, you can interactively query data streams and rapidly develop stream processing applications using an interactive development environment powered by Apache Zeppelin notebooks. Kinesis Data Analytics Studio now supports Apache Flink 1.13 and Apache Zeppelin 0.9.
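    As an illustrative sketch from the AWS CLI, a new application can target the Flink 1.13 runtime at creation time (the application name and IAM role below are placeholder assumptions; application code and other configuration are omitted for brevity):

```shell
# Create a Kinesis Data Analytics application on the Flink 1.13 runtime.
# The service execution role must grant access to your sources and sinks.
aws kinesisanalyticsv2 create-application \
    --application-name my-flink-app \
    --runtime-environment FLINK-1_13 \
    --service-execution-role arn:aws:iam::111122223333:role/kda-app-role
```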

    Kinesis Data Analytics makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Kinesis Data Analytics reduces the complexity of building and managing Apache Flink applications. Kinesis Data Analytics for Apache Flink integrates with Amazon MSK, Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.

    You can learn more about Kinesis Data Analytics for Apache Flink in our documentation and what Flink 1.13 offers by visiting the official website.

    For Kinesis Data Analytics region availability, refer to the AWS Region Table.

    » AWS Outposts adds new CloudWatch dimension for capacity monitoring

    Posted On: Oct 13, 2021

    Today we are announcing the availability of a new Amazon CloudWatch dimension for metrics in the AWS Outposts namespace. CloudWatch dimensions are unique identifiers for metrics that allow customers to search and filter results.

    In order to help Outposts customers gain visibility into what accounts and services are using the capacity in their Outposts deployment, we’ve added the Account dimension to the following Outposts metrics:

  • InstanceFamilyCapacityUtilization
  • InstanceTypeCapacityUtilization
  • UsedInstanceType_Count
    Outposts customers can now create alarms that are triggered when utilization of a specific instance type or family reaches conditions they specify on a per-account or per-service basis.
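    As a sketch of how the new dimension can be used in an alarm from the AWS CLI (the Outpost ID, account ID, SNS topic, and threshold below are placeholder assumptions):

```shell
# Alarm when account 111122223333's utilization of c5.xlarge capacity
# on a given Outpost averages above 80% for three 5-minute periods.
aws cloudwatch put-metric-alarm \
    --alarm-name outpost-c5xlarge-util-high \
    --namespace AWS/Outposts \
    --metric-name InstanceTypeCapacityUtilization \
    --dimensions Name=OutpostId,Value=op-0123456789abcdef0 \
                 Name=InstanceType,Value=c5.xlarge \
                 Name=Account,Value=111122223333 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 3 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:ops-alerts
```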

    CloudWatch metrics for AWS Outposts are available to customers in all public AWS Regions and AWS GovCloud (US) at no additional cost. You can start using these metrics through the AWS Management Console, AWS CLI, or AWS SDK. To learn more, visit the AWS Outposts documentation. To learn best practices for planning, monitoring, and managing capacity with AWS Outposts, join this webinar.

    » AWS Elemental MediaTailor adds prefetch ad support for personalized ad insertion

    Posted On: Oct 13, 2021

    AWS Elemental MediaTailor now supports prefetch ad requests for personalized ad insertion. Prefetching manages the request of ads in advance of ad breaks, increasing the time an ad decision server (ADS) has to respond.

    Live events with spikes in audience viewership benefit from prefetch ad requests as they provide additional time for real-time bidding for ad impression transactions and for new ad assets to transcode which can increase revenue and ad-fill rates. To learn more, visit the MediaTailor documentation pages.

    AWS Elemental MediaTailor is a channel assembly and ad-insertion service for creating linear OTT channels using existing video content and monetizing those channels, other live streams, or video-on-demand (VOD) content with personalized advertising.

    Visit the AWS global region table for a full list of AWS Regions where AWS Elemental MediaTailor is available.

    » Amazon VPC Flow Logs now supports Apache Parquet, Hive-compatible prefixes and Hourly partitioned files

    Posted On: Oct 13, 2021

    Amazon Virtual Private Cloud (VPC) is introducing three new features to make it faster, easier, and more cost-efficient to store and run analytics on your Amazon VPC Flow Logs. First, VPC Flow Logs can now be delivered to Amazon S3 in the Apache Parquet file format. Second, they can be stored in S3 with Hive-compatible prefixes. And third, your VPC Flow Logs can be delivered as hourly partitioned files. All of these features are available when you choose S3 as the destination for your VPC Flow Logs.

    Queries on VPC Flow Logs stored in Apache Parquet format are more efficient as a result of the compact, columnar format of the Parquet files. In addition, you can save on query costs using tools such as Amazon Athena and Amazon EMR, as your queries run faster and need to scan a smaller volume of data using Parquet files. You can save up to 25% in S3 storage costs due to the better compression on the Parquet formatted files, and eliminate the need to build and manage an Apache Parquet conversion application. The Hive-compatible prefixes make it easier to discover and load new data into your Hive tools, and log files partitioned by the hour make it more efficient to query logs over specific time intervals.

    To get started, create a new VPC Flow Log subscription with S3 as the destination and specify delivery options of Parquet format, Hive-compatible prefixes, and/or hourly partitioned files. This functionality is available through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the AWS Software Development Kit (AWS SDK). To learn more, please refer to the documentation and read the blog post. See the CloudWatch Logs pricing page for pricing of log delivery in Apache Parquet format for VPC Flow Logs.
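    As a sketch, such a subscription can be created with a single AWS CLI call (the VPC ID and bucket below are placeholders):

```shell
# Deliver flow logs for one VPC to S3 as hourly-partitioned
# Parquet files with Hive-compatible prefixes.
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-destination-type s3 \
    --log-destination arn:aws:s3:::my-flow-log-bucket/flow-logs/ \
    --destination-options \
        FileFormat=parquet,HiveCompatiblePartitions=true,PerHourPartition=true
```

All three delivery options are independent; any combination can be enabled when S3 is the destination.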

    » Amazon QuickSight doubles SPICE capacity limit to 500M rows

    Posted On: Oct 13, 2021

    Amazon QuickSight now supports larger SPICE datasets on the Enterprise Edition. Previously, each SPICE dataset could hold up to 250 million rows and 500GB of data. Now, all new SPICE datasets can accommodate up to 500 million rows (or 500GB) of data in the Enterprise Edition and 25 million rows (or 25GB) for Standard Edition. This raises the limit for your datasets, letting you accelerate dashboards with more data. See here for details.

    If you have an existing dataset that you've filtered to stay below the prior 250M row maximum, you can use SPICE's new capacity by removing or relaxing that filter in Data Prep. If you're creating a new dataset, there are no additional requirements or steps you need to take. You can just set up your dataset and watch it import all 500 million rows.

    SPICE is Amazon QuickSight's Super-fast, Parallel, In-memory Calculation Engine. SPICE is a query acceleration layer for QuickSight customers to analyze their data. It's engineered to rapidly perform advanced calculations and serve data. By using SPICE, you save time because you don't need to retrieve the data every time you change an analysis or update a visual. For more information about using SPICE, see here.

    SPICE’s new 500M data set maximum is available in Enterprise Edition in all supported QuickSight regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo). For more details, visit here.

    » Amazon CodeGuru Reviewer adds detectors for AWS Java SDK v2’s best practices and features

    Posted On: Oct 13, 2021

    Amazon CodeGuru Reviewer is a developer tool that leverages automated reasoning and machine learning to detect potential code defects that are difficult to find and offers suggestions for improvements. Today, we are building on our set of detectors for the AWS SDKs with the addition of detectors for the AWS Java SDK v2. These new detectors help to ensure customers are following the Java SDK v2’s best practices, such as using client builders over client constructors, waiters over custom polling, or auto-pagination over manual pagination. The detectors can also find bugs customers create while using the new SDK’s AWS service clients, such as identifying data loss in the Amazon Kinesis v2 client. After detecting an issue or bug, CodeGuru Reviewer provides recommendations for how the developer can remediate it.

    You can get started with Amazon CodeGuru Reviewer by heading to the AWS console page or by integrating CodeGuru Reviewer into your CI pipeline via the GitHub Action.

    To learn more about CodeGuru Reviewer, take a look at the Amazon CodeGuru page. To contact the team visit the Amazon CodeGuru developer forum. For more information about automating code reviews and application profiling with Amazon CodeGuru check out the AWS ML Blog. For more details on how to get started visit the documentation.

    » Amazon MQ now supports ActiveMQ version 5.16.3

    Posted On: Oct 13, 2021

    You can now launch Apache ActiveMQ 5.16.3 brokers on Amazon MQ. This version update to ActiveMQ contains several fixes and improvements compared to the previously supported version, ActiveMQ 5.16.2.

    Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

    We encourage you to consider upgrading ActiveMQ with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled, it will be automatically upgraded during your next maintenance window. To learn more about upgrading, please see: Editing Broker Engine Version, CloudWatch Logs, and Maintenance Preferences in the Amazon MQ Developer Guide.
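    As a sketch, the same upgrade can be performed from the AWS CLI (the broker ID below is a placeholder); the new engine version is typically applied during your next maintenance window:

```shell
# Request the engine upgrade to ActiveMQ 5.16.3 for an existing broker.
aws mq update-broker \
    --broker-id b-0123abcd-4567-89ef-0123-456789abcdef \
    --engine-version 5.16.3

# Inspect the broker to confirm the pending engine version.
aws mq describe-broker \
    --broker-id b-0123abcd-4567-89ef-0123-456789abcdef
```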

    Apache ActiveMQ 5.16.3 includes the fixes and features of all previous releases of ActiveMQ. To learn more, read the ActiveMQ 5.16.3 Release Notes.

    » AWS FPGA developer kit now supports Jumbo frames in virtual ethernet frameworks for Amazon EC2 F1 instances

    Posted On: Oct 12, 2021

    Today we are announcing support for jumbo frames via the virtual ethernet framework in the AWS FPGA Developer Kit. With this support, developers using Amazon EC2 F1 instances can use jumbo frames to get the maximum allowed networking bandwidth for the instance, delivering up to double the networking performance.

    The AWS FPGA Developer Kit is a set of development and runtime tools to develop, simulate, debug, compile, and run hardware-accelerated applications on F1 instances. The kit is available on GitHub and includes all documentation on F1, internal FPGA interfaces, and compiler scripts for generating Amazon FPGA Images (AFIs). The Virtual Ethernet framework, with support for shell versions F1.X.1.4 and F1.S.10, facilitates streaming ethernet frames from a network interface to the FPGA on F1 instances for processing and back. With support for jumbo frames, customers can now use Ethernet frames with more than 1500 bytes of payload. With this update, customers can take advantage of the full networking bandwidth available to the F1 instance running their workloads, resulting in up to 2x higher networking performance than with the previous version of the Virtual Ethernet framework.

    Customers can upgrade to the latest virtual ethernet solution using the step-by-step instructions in the application guide, and join the discussion on the FPGA Development forum.

    » AWS Console Mobile Application adds support for Amazon Elastic Container Service

    Posted On: Oct 12, 2021

    AWS Console Mobile Application users can now use Amazon Elastic Container Service (Amazon ECS) on both the iOS and Android applications. The Console Mobile Application provides a secure and easy-to-use on-the-go solution for monitoring ECS clusters, services, configurations, tasks, and container workloads. Customers can also stop ECS tasks and launch the desired number of tasks for an ECS service.

    The Console Mobile Application lets customers view and manage a select set of resources to support incident response while on-the-go. Customers can view ongoing issues and follow through to the relevant CloudWatch alarm screen for a detailed view with graphs and configuration options. In addition, customers can check on the status of specific AWS services, view detailed resource screens, and perform select actions.

    Visit the product page for more information about the Console Mobile Application.

    » AWS CloudFormation customers can now manage their applications in AWS Systems Manager

    Posted On: Oct 12, 2021

    AWS CloudFormation customers can now view operational data and quickly take action to resolve issues involving CloudFormation stack resources through Application Manager, a capability of AWS Systems Manager. Using this feature, customers can obtain an application view of resources provisioned via a CloudFormation stack. With the operational metrics, logs, alerts, and cost information obtained from the Application Manager Dashboard, developers can manage their stack resources efficiently throughout their lifecycle.

    To get started in the AWS CloudFormation console, you can select “View in Application Manager” from the stack actions for a specific stack which will guide you to the Application Manager dashboard for the selected stack. Using contextual stack operational data, you can diagnose and resolve issues by initiating remediation actions such as restarting an Amazon Elastic Compute Cloud (Amazon EC2) instance or taking a snapshot of an Amazon Elastic Block Store (Amazon EBS) volume.

    This new capability in the CloudFormation console is now available for no additional charge in 23 AWS Regions including US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-East, US-West), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain), Africa (Cape Town), and South America (São Paulo). To learn more, refer to the CloudFormation user guide.

    » CDK for Kubernetes (CDK8s) now Generally Available

    Posted On: Oct 12, 2021

    Cloud Development Kit for Kubernetes (CDK8s) is now Generally Available and ready for production usage with any conformant Kubernetes cluster. To ensure continued community involvement, cdk8s is also now an official CNCF Sandbox project and has moved from the AWS Labs GitHub organization to a dedicated home on GitHub, cdk8s-team.

    CDK8s is a software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and object-oriented APIs. Developers using cdk8s can write and share Kubernetes applications and API resources using the languages of their choice, and synthesize their configuration into standard Kubernetes YAML manifests which can be applied to any Kubernetes cluster.

    cdk8s was first developed by AWS and announced in May 2020. Since then, the project has incubated on GitHub as part of AWS Labs and has grown to include over 30 community contributors with hundreds of community contributions, bug fixes, and improvements.

    To learn more, read our blog or visit cdk8s.io.

    » New AWS Solutions Implementation: Automated Account Configuration

    Posted On: Oct 12, 2021

    Automated Account Configuration helps you automate operational processes in an efficient, error-free, standardized, and consistent way to ensure that your AWS accounts are set up properly and with the necessary resources to meet your business and production needs. You can use the solutions implementation to configure and deploy the following business critical services:

  • AWS Backup to centrally manage the backups of AWS services including Amazon EC2 instances, Amazon RDS, and Amazon EFS.
  • AWS Systems Manager Patch Manager to automate the patching of managed instances such as EC2 instances.
    Additionally, you can extend the solution to add additional operational processes, including set up and maintenance updates for IAM roles and AWS Key Management Service (AWS KMS). You can use this solution as a template or framework to set up the operational tasks that are essential to your organization.

    The key features that Automated Account Configuration offers are:

  • An automated process to install core operational capabilities including backup and patching in all AWS accounts.
  • A customizable configuration file that lets you control and manage the operational services that you want deployed in your AWS accounts.
  • Support for AWS Managed Services (AMS) accounts, including creation of the request-for-change forms.
  • The ability to extend the solution by adding additional configuration steps to meet your business requirements.
    To learn more and get started, please visit the solutions implementation web page.

    Additional AWS Solutions Implementations are available on the AWS Solutions Implementations webpage, where you can browse technical reference implementations that are vetted by AWS architects, offering detailed architecture and instructions for deployment to help build faster to solve common problems.

    » AWS CDK releases v1.121.0 - v1.125.0 with features for faster development cycles using hotswap deployments and rollback control

    Posted On: Oct 12, 2021

    During September 2021, five new versions of the AWS Cloud Development Kit (CDK) for JavaScript, TypeScript, Java, Python, .NET and Go were released (v1.121.0 through v1.125.0). With these releases, the CDK CLI now has support for hotswap deployments for faster inner-loop development iterations on the application code in your CDK project. Hotswap initially supports AWS Lambda handler code, but support is planned for additional resource types and a “watch” mode which continually watches for changes and deploys any updates. Additionally, users can preserve successfully provisioned resources by disabling automatic stack rollbacks, further reducing deployment and iteration time. These releases also resolve 21 issues and introduce 40 new features that span over 30 different modules across the library. Many of these changes were contributed by the developer community.

    The AWS CDK is a software development framework for defining cloud applications using familiar programming languages. The AWS CDK simplifies cloud development on AWS by hiding infrastructure and application complexity behind intent-based, object-oriented APIs for each AWS service.

    To get started, see the following resources:

  • Read the full release notes for 1.121.0, 1.122.0, 1.123.0, 1.124.0, 1.125.0
  • Get started with the AWS CDK in all supported languages by taking the CDK Workshop.
  • Read our Developer Guide and API Reference.
  • Find useful constructs published by AWS, partners and the community in Construct Hub.
  • Connect with the community in the cdk.dev Slack workspace.
  • Follow our Contribution Guide to learn how to contribute fixes and features to the CDK.

    » Amazon Connect Tasks is now HIPAA eligible

    Posted On: Oct 11, 2021

    Amazon Connect Tasks is now HIPAA (Health Insurance Portability and Accountability Act) eligible. Connect Tasks empowers contact center managers to prioritize, assign, track, and automate customer service tasks across the disparate applications used by agents. HIPAA eligibility means you can prioritize and automate tasks with Protected Health Information (PHI), and even provide agents with the information they need to resolve your customers’ inquiries or service requests. You can prioritize or automate tasks from customer relationship management (CRM) applications such as Salesforce or Zendesk, electronic health records (EHR) systems such as Epic or Cerner, or with your homegrown and business-specific applications. Amazon Connect has been HIPAA eligible since 2017.

    If you have a HIPAA Business Associate Addendum (BAA) in place with AWS, you can now start using Amazon Connect Tasks for HIPAA eligible workloads or use cases. If you don't have a BAA in place with AWS, or if you have any other questions about running HIPAA-regulated workloads on AWS, please contact us. For information and best practices about configuring AWS HIPAA Eligible Services to store, process, and transmit PHI, see the Architecting for HIPAA Security and Compliance on Amazon Web Services Whitepaper.

    Amazon Connect Tasks is available in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Tokyo). To learn more, see the API reference guide, help documentation, visit our webpage, or read this blog post that provides instructions on how to setup Amazon Connect Tasks for your contact center.

    » Amazon RDS supports T3 instance type for MySQL and MariaDB databases in AWS GovCloud (US) Regions

    Posted On: Oct 11, 2021

    You can now launch the T3 database instance type when using Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon RDS for MariaDB in AWS GovCloud (US) Regions.

    T3 database instances are the latest generation of x86-based burstable general-purpose instances that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances offer a balance of compute, memory, and network resources and are ideal for database workloads with moderate CPU usage that experience temporary spikes in use.

    Amazon RDS database instances running MySQL versions 5.7.16 and higher, MySQL versions 8.0.17 and higher, MariaDB versions 10.4.8 and higher, and MariaDB versions 10.5.8 and higher are supported with T3. You can easily upgrade to the T3 instance class by modifying your existing DB instance in the Amazon RDS Management Console.
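
    The console modification above can also be done through the API. The sketch below builds the request parameters for the RDS ModifyDBInstance call; the instance identifier and target class are placeholder assumptions.

```python
# Sketch: request parameters for the RDS ModifyDBInstance API call that
# moves an existing MySQL or MariaDB instance to a T3 instance class.
# The identifier and class below are placeholder values.
modify_params = {
    "DBInstanceIdentifier": "my-mysql-instance",  # hypothetical instance name
    "DBInstanceClass": "db.t3.medium",            # target T3 class
    "ApplyImmediately": False,  # defer the change to the next maintenance window
}

# With boto3, the call would look like:
#   boto3.client("rds").modify_db_instance(**modify_params)
```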

    Please refer to the Amazon RDS User Guide for more details and refer to Amazon RDS Pricing for pricing details.

    » Amazon Fraud Detector launches new ML model for online transaction fraud detection

    Posted On: Oct 11, 2021

    Amazon Fraud Detector is excited to announce the Transaction Fraud Insights model, a low-latency fraud detection machine learning (ML) model specifically designed to detect online card-not-present transaction fraud. Like other Amazon Fraud Detector models, Transaction Fraud Insights leverages more than 20 years of fraud detection expertise from Amazon and AWS. The new Transaction Fraud Insights model type detects up to 30% more fraudulent transactions and maintains its performance up to six times longer than Amazon Fraud Detector’s previous model type, Online Fraud Insights.

    Online transaction fraud is on the rise. As more merchants transition from brick-and-mortar to online, bad actors are following suit using increasingly sophisticated attacks. Merchants eventually bear the cost of fraudulent charges in the form of chargeback fees, non-refundable transaction fees, lost merchandise, and operational costs. The Transaction Fraud Insights model detects more fraudulent transactions by automatically computing risk patterns such as whether a buyer is making a repeat purchase and how frequently the buyer makes purchases. Amazon Fraud Detector handles the heavy lifting of calculating these values related to a buyer’s history, keeping them updated, and ensuring they are available for fraud predictions and model re-trainings. Transaction Fraud Insights models maintain their performance longer than previous Amazon Fraud Detector models because these auto-computed values are updated in near real-time and used in each low-latency fraud prediction.

    To get started, define your transaction event and create your event dataset by uploading your historical transactions using Amazon Fraud Detector’s batch import feature. Next, train your Transaction Fraud Insights model in a few clicks. Once your model is trained, you can embed the Fraud Detector API into your checkout flow to generate fraud predictions in real-time, or use Fraud Detector’s batch prediction API to perform offline or scheduled predictions. For each transaction, Amazon Fraud Detector automatically updates your stored event dataset, which means you can start a model retraining in seconds.
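
    As a rough illustration of the checkout-flow integration, the sketch below builds a GetEventPrediction request. The detector, event type, entity, and variable names are hypothetical; your own event type defines its variables.

```python
from datetime import datetime, timezone

# Sketch: request parameters for Amazon Fraud Detector's GetEventPrediction
# API, called in real time from a checkout flow. All IDs and names are
# placeholder assumptions.
prediction_request = {
    "detectorId": "transaction_fraud_detector",  # hypothetical detector
    "eventId": "order-12345",                    # unique per transaction
    "eventTypeName": "transaction",              # hypothetical event type
    "eventTimestamp": datetime.now(timezone.utc).isoformat(),
    "entities": [{"entityType": "customer", "entityId": "cust-789"}],
    "eventVariables": {
        "order_price": "129.99",   # event variables are passed as strings
        "payment_currency": "USD",
    },
}
# With boto3:
#   boto3.client("frauddetector").get_event_prediction(**prediction_request)
```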

    The Transaction Fraud Insights model type is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore) and Asia Pacific (Sydney) regions. For additional details, see our documentation and pricing page.

    » AWS announces a price reduction of up to 56% for Amazon Fraud Detector machine learning fraud predictions

    Posted On: Oct 11, 2021

    We are excited to announce that we are lowering the price of Amazon Fraud Detector machine learning (ML) based fraud predictions. Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities, such as the creation of fake accounts or online payment fraud. Using ML under the hood and based on over 20 years of fraud detection expertise from Amazon, Fraud Detector automatically identifies potentially fraudulent activity in milliseconds—with no ML expertise required.

    With Fraud Detector, you are charged per fraud prediction. We are adjusting the usage tiers for fraud predictions that use Fraud Detector’s Online Fraud Insights ML model so that you reach the highest discount tier sooner. Previously, you needed to use 400,000 fraud predictions per month to reach the 50% discount tier and 1,200,000 fraud predictions per month to reach the 75% discount tier. Now, you will reach the 75% discount after just 100,000 fraud predictions per month.

    The new pricing for Online Fraud Insights fraud predictions is as follows:

  • $0.03 per prediction for the first 100,000 fraud predictions per month
  • $0.0075 per prediction for usage above 100,000 fraud predictions per month

    This represents significant savings for higher-volume ML-based fraud prediction workloads. For example, a customer generating 500,000 Online Fraud Insights predictions per month would previously have paid $13,500. With this price reduction, that same customer will pay $6,000, a 56% savings over the previous pricing.
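
    The worked example can be reproduced with a short calculation, using the tier boundary and rates listed above:

```python
# Sketch of the new Online Fraud Insights tier arithmetic described above.
def fraud_prediction_cost(predictions: int) -> float:
    """Monthly cost: $0.03 for the first 100,000 predictions, $0.0075 after."""
    first_tier = min(predictions, 100_000)
    discounted = max(predictions - 100_000, 0)
    return first_tier * 0.03 + discounted * 0.0075

# 500,000 predictions: 100,000 * $0.03 + 400,000 * $0.0075 = $6,000
print(fraud_prediction_cost(500_000))
```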

    Amazon Fraud Detector customers automatically benefit from this new reduced pricing, which takes effect October 11, 2021 in all regions where Amazon Fraud Detector is available. For more details, see the Amazon Fraud Detector pricing page.

    » Amazon WorkMail adds Mobile Device Access Override API and MDM integration capabilities

    Posted On: Oct 11, 2021

    Amazon WorkMail now offers an expanded capability around its Mobile Device Access Rules (MDARs). The new Mobile Device Access Override API (MDOA) allows customers to adjust existing MDARs, either manually through the CLI, or in an automated fashion when using a third-party Mobile Device Management (MDM) tool. Customers use trusted third-party MDM tools to perform security posture assessments before granting devices access to corporate resources. The new API simplifies the creation and management of exceptions to default MDARs, either because there is a need to permit an out-of-posture device to connect to WorkMail, or because a user has reported a specific device to be stolen or lost. In that case, the individual device can be blocked to reduce the risk of data leakage.

    To get started, you can create a new mobile device access override using the API or the AWS CLI. For more information, see Managing mobile device access overrides.
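
    As an illustration of the lost-device scenario described above, the sketch below builds the parameters for WorkMail's PutMobileDeviceAccessOverride API; the organization, user, and device IDs are placeholder values.

```python
# Sketch: parameters for WorkMail's PutMobileDeviceAccessOverride API,
# blocking a single device reported lost. IDs are placeholder values.
override_params = {
    "OrganizationId": "m-0123456789abcdef0",  # hypothetical WorkMail org ID
    "UserId": "user-id-or-email",             # hypothetical user
    "DeviceId": "device123",                  # the device to block
    "Effect": "DENY",                         # override the default rule outcome
    "Description": "Device reported lost on 2021-10-11",
}
# With boto3:
#   boto3.client("workmail").put_mobile_device_access_override(**override_params)
```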

    To learn more about MDM integration see Integrating with mobile device management solutions and AWS Sample application.

    This feature is available today in all Amazon WorkMail AWS Regions. To learn more about Amazon WorkMail, or to start your trial, please visit Amazon WorkMail.

    » NoSQL Workbench for Amazon DynamoDB now enables you to import and automatically populate sample data to help build and visualize your data models

    Posted On: Oct 11, 2021

    NoSQL Workbench for DynamoDB, a client-side tool that helps you design, visualize, and query nonrelational data models by using a point-and-click interface, now helps you import and automatically populate sample data to help build and visualize your data models. Now, you can import sample data from .csv files into new and existing data models. You also can export your query results in .csv format from the NoSQL Workbench operation builder.

    Designing scalable data models is essential to building massive-scale, operational, nonrelational databases. However, designing data models can be challenging, particularly when you are designing data models for new applications that have data access patterns you are still developing. With NoSQL Workbench, you can design and visualize nonrelational data models more easily by using a point-and-click interface, and help ensure that the data models can support your application’s queries and access patterns. Now you can import as many as 150 records from .csv files to new and existing data models in NoSQL Workbench to test and adapt them for your applications. You also can save and export your query results in .csv files from the operation builder for collaboration, documentation, and presentations.

    NoSQL Workbench is available for macOS, Linux, or Windows. For more information, see Adding Sample Data to a Data Model.

    » AWS Marketplace now supports viewing agreements and canceling and extending offers for Professional Services

    Posted On: Oct 11, 2021

    AWS Marketplace sellers, including Independent Software Vendors (ISVs) and consulting partners, can now view agreements, cancel offers, and extend offer expiration dates for Professional Services from the AWS Marketplace Management Portal (AMMP). Professional Services in AWS Marketplace enables ISVs and consulting partners to create new Professional Services listings in AWS Marketplace and extend Private Offers to AWS customers. 

    From the “Agreements” tab in AMMP, you can now see a list of accepted Professional Service offers and navigate into individual agreements to view more details such as dates, product dimensions, and service agreement. You can also now navigate into your individual offers from the “Offers” tab to cancel or extend an offer’s expiration date to provide a customer more time to respond to your offer.

    » Amazon Fraud Detector now supports event datasets

    Posted On: Oct 11, 2021

    We are excited to announce event dataset storage for Amazon Fraud Detector. The new capability enables customers to easily send and store their production fraud data directly within Amazon Fraud Detector. Customers can use their event datasets to train machine learning (ML) models with higher predictive performance since the models can apply historical context to new events by automatically calculating values such as account age and purchase frequency. Customers can also move faster by retraining models without needing to upload a new training dataset to S3, and they can close the feedback loop from offline fraud investigations by updating their fraud labels for stored events.

    Prior to this launch, customers could only train models on data stored in S3. To retrain a model, customers would need to manually update their dataset, upload the latest dataset to S3, and then point Amazon Fraud Detector to it. These data preparation steps made model retraining time-consuming, increasing the chances that a model could go “stale”.

    Using the newly launched event datasets, customers can upload their historical event data directly into Amazon Fraud Detector for training models. The event dataset is also automatically updated with each new prediction so there is no need to upload new datasets for each model retraining. Event dataset metrics, such as the number of events and size of the dataset, are updated automatically and can also be refreshed on-demand. Customers can update event labels (e.g., fraud, legitimate) based on offline reviews to close the ML feedback loop. With their event dataset stored in Amazon Fraud Detector, customers can now train a new model or retrain an existing model in even fewer clicks.

    To get started, create a new event type or select an existing one, and then navigate to the ‘Stored events’ tab in the Fraud Detector console. In this tab, you can enable real-time event storage for predictions. To store historic data, you can upload a CSV file of event data or use the new SendEvent API to stream the events to Amazon Fraud Detector. Once you have a stored dataset, you can quickly train or retrain model versions by selecting ‘stored events’ as your model training data source. Event data storage costs $0.10 per GB per month and is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore) and Asia Pacific (Sydney) regions. For additional details about event data storage, see our documentation.
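
    A rough sketch of streaming one historical event into the stored dataset with the SendEvent API follows; the event type, variables, and label values are hypothetical and must match your own event type's schema.

```python
# Sketch: a SendEvent request that streams a historical event into
# Amazon Fraud Detector's stored event dataset, including a fraud label
# assigned after an offline review. All values are placeholders.
event = {
    "eventId": "order-98765",
    "eventTypeName": "transaction",          # hypothetical event type
    "eventTimestamp": "2021-09-15T12:30:00Z",
    "eventVariables": {"order_price": "59.00", "payment_currency": "USD"},
    "entities": [{"entityType": "customer", "entityId": "cust-123"}],
    "assignedLabel": "legit",                # closes the ML feedback loop
    "labelTimestamp": "2021-09-20T09:00:00Z",
}
# With boto3:
#   boto3.client("frauddetector").send_event(**event)
```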

    » AWS Backup adds an additional layer for backup protection with the availability of AWS Backup Vault Lock

    Posted On: Oct 8, 2021

    Today, AWS Backup announced the availability of AWS Backup Vault Lock. This new feature enhances customers’ ability to protect backups from inadvertent or malicious actions. It helps customers implement safeguards that ensure they are storing their backups using a Write-Once-Read-Many (WORM) model*.

    Customers can set up multiple layers of data protection in AWS Backup, including independent copies of backups across multiple AWS Regions and accounts, separate resource access policies, and long-term data retention. Using a simple setting, they can now prevent any user from deleting their backups or changing their specified retention periods, and have an additional layer of data protection. 

    AWS Backup Vault Lock is available through the AWS CLI and API, in all AWS Regions where AWS Backup is available except AWS China Regions, at no additional charge. AWS Backup enables customers to centralize and automate data protection across AWS services through a fully managed and cost-effective solution. 
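
    A minimal sketch of the lock configuration follows, using the PutBackupVaultLockConfiguration API; the vault name and retention bounds are placeholder assumptions.

```python
# Sketch: parameters for AWS Backup's PutBackupVaultLockConfiguration API.
# Vault name and retention values are hypothetical.
lock_config = {
    "BackupVaultName": "my-backup-vault",  # hypothetical vault
    "MinRetentionDays": 7,     # recovery points cannot be deleted sooner
    "MaxRetentionDays": 365,   # nor retained longer than this
    "ChangeableForDays": 3,    # grace period before the lock becomes immutable
}
# With boto3:
#   boto3.client("backup").put_backup_vault_lock_configuration(**lock_config)
```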

    To learn more about AWS Backup Vault Lock, please visit our:

  • AWS Backup product page
  • AWS Backup documentation

    * The feature has not yet been assessed for compliance with Securities and Exchange Commission (SEC) rule 17a-4(f) or Commodity Futures Trading Commission (CFTC) regulation 17 C.F.R. 1.31(b)-(c).

    » Amazon ECS Anywhere now supports GPU-based workloads

    Posted On: Oct 8, 2021

    Amazon Elastic Container Service (Amazon ECS) now enables customers to manage containerized GPU-based workloads running on on-premises infrastructure using Amazon ECS Anywhere. With Amazon ECS Anywhere GPU support, customers can deploy GPU-based applications that need to remain on premises due to regulatory, network latency, data residency, or other requirements. Additionally, enterprises can make use of their existing investment in GPU compute capacity to run machine learning, 3D visualization, image processing and big data workloads without the need to transfer data to the cloud.

    Amazon ECS Anywhere, launched in May 2021, is a capability in Amazon ECS that enables customers to more easily run and manage container-based applications on-premises, including virtual machines (VMs), bare metal servers, and other customer-managed infrastructure. With this release, customers can add GPU instances by adding the --enable-gpu flag to the Amazon ECS Anywhere installation script. Once the agent is installed, customers will be able to assign a number of GPUs to particular containers in their task definition. Amazon ECS uses this as a scheduling mechanism to pin physical GPUs to the desired containers for workload isolation and optimal performance.
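
    The GPU assignment in a task definition can be sketched as follows; the container name and image are placeholders, and "resourceRequirements" is how a task definition reserves physical GPUs for a container.

```python
# Sketch: the GPU-related portion of an ECS task definition for an
# ECS Anywhere (EXTERNAL launch type) task. Names and image are placeholders.
container_definition = {
    "name": "gpu-inference",                  # hypothetical container name
    "image": "registry.example.com/inference:latest",
    "resourceRequirements": [
        {"type": "GPU", "value": "1"},        # pin one physical GPU
    ],
}
task_definition = {
    "family": "gpu-anywhere-task",
    "requiresCompatibilities": ["EXTERNAL"],  # ECS Anywhere launch type
    "containerDefinitions": [container_definition],
}
```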

    Customers can use Nvidia and CUDA drivers with Amazon ECS Anywhere by following the steps to install the drivers as provided here. To learn more, read our blog or check out our documentation. This feature is now available across all regions globally.

    » Amazon Neptune now supports Auto Scaling for Read Replicas

    Posted On: Oct 8, 2021

    You can now use Amazon Neptune Auto Scaling to automatically add or remove Read Replicas in response to changes in performance metrics you specify. Neptune Read Replicas share the same underlying volume as the primary instance and are well suited for read scaling. With Neptune Auto Scaling, you can specify a desired value for CloudWatch (CW) metrics for your Replicas such as average CPU utilization. Neptune Auto Scaling adjusts the number of Read Replicas to keep the CW metric closest to the value you specify.

    For example, an increase in traffic could cause the average CPU utilization of your Replicas to go up and beyond your specified value. New Read Replicas are automatically added by Neptune Auto Scaling to support this increased traffic. Similarly, when CPU utilization goes below your set value, Read Replicas are terminated so that you don't pay for unused database instances.

    Neptune Auto Scaling works with Amazon CloudWatch to continuously monitor performance metrics of your Read Replicas. You can create a Neptune Auto Scaling policy for any of your existing or new Neptune clusters. To get started, download the latest CLI or SDKs to create Neptune Auto Scaling policies programmatically, or refer to our documentation for more information. There is no additional cost to use Neptune Auto Scaling beyond what you already pay for Neptune and CloudWatch alarms.
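
    Neptune Auto Scaling is driven through Application Auto Scaling; the sketch below shows the two requests involved, with a hypothetical cluster name and a placeholder 60% CPU target.

```python
# Sketch: register a Neptune cluster's Read Replica count with Application
# Auto Scaling, then attach a target-tracking policy on reader CPU.
scalable_target = {
    "ServiceNamespace": "neptune",
    "ResourceId": "cluster:my-neptune-cluster",  # hypothetical cluster
    "ScalableDimension": "neptune:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 8,
}
scaling_policy = {
    "PolicyName": "neptune-reader-cpu",
    "ServiceNamespace": "neptune",
    "ResourceId": scalable_target["ResourceId"],
    "ScalableDimension": scalable_target["ScalableDimension"],
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "NeptuneReaderAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # add replicas above ~60% average reader CPU
    },
}
# With boto3 ("application-autoscaling" client):
#   client.register_scalable_target(**scalable_target)
#   client.put_scaling_policy(**scaling_policy)
```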

    » New AWS Solution: Maintaining Personalized Experiences using Machine Learning

    Posted On: Oct 8, 2021

    We are pleased to announce the launch of Maintaining Personalized Experiences with Machine Learning, an AWS Solutions Implementation that provides end-to-end automation and scheduling for your Amazon Personalize resources. This solution keeps your item and user data current and manages re-training for your models to ensure that recommendations are kept up-to-date with recent user activity and retain their relevance for your users. This solution publishes Amazon Personalize model offline metrics to Amazon CloudWatch to provide a directional sense of the quality of your models over time.

    To learn more about the Maintaining Personalized Experiences with Machine Learning solution, see the AWS Solutions Implementation webpage.

    Additional AWS Solutions are available on the AWS Solutions Implementation webpage, where customers can browse solutions by product category or industry to find AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » Introducing New AWS Solution: AWS QnABot, a self-service conversational chatbot built on Amazon Lex

    Posted On: Oct 8, 2021

    We are pleased to announce that AWS QnABot has now been released as an official AWS Solutions Implementation. AWS QnABot is an open-source, multi-channel, multi-language conversational chatbot built on Amazon Lex that responds to your customers’ questions and feedback. Without programming, the AWS QnABot solution allows customers to quickly deploy self-service conversational AI on multiple channels including their contact centers, web sites, social media channels, SMS text messaging, or Amazon Alexa.

    Customers can configure curated answers to frequently asked questions using an integrated content management system, supporting rich text and rich voice responses optimized for each channel, or they can expand the solution's answer knowledge base to include unstructured documents, PDFs, and existing web page content (via an optional seamless integration with Amazon Kendra). The solution supports multiple languages with optional automatic translation of answers to the user’s local language. The AWS QnABot solution can also ask questions making it suitable for quickly building authentication flows, and data capture such as surveys, questionnaires and decision trees.

    For more advanced scenarios the AWS QnABot can be extended to integrate with backend systems, supporting data dips for personalized and dynamic responses, and for integrating with your Customer Relationship Management (CRM) and ticketing systems. Integrated user feedback and monitoring provides visibility into customer queries, concerns, sentiment, and facilitates tuning and enriching content. AWS QnABot uses Amazon Lex and can easily be added to existing or new bots to enrich customer experience and reduce call center loads.

    For details about this solution, visit the solution’s AWS Solutions Implementation webpage.

    Additional AWS Solutions are available on the AWS Solutions Implementation webpage, where customers can browse solutions by product category or industry to find AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » Amazon Personalize launches new recipe that increases the relevance of similar items recommendations

    Posted On: Oct 7, 2021

    We are excited to announce a new recipe in Amazon Personalize that, when given an item, will recommend similar items based on both user-item interaction data and item metadata. The combination of your users’ historical interactions and the information you have about your items increases the relevance of recommendations and ensures similar items capture your users’ attention. To assess the similarity of items, we measure how frequently the items are found together in users’ histories. As a benchmark, we found that the new recipe is 10.2% more accurate in identifying similar items than recipes that use interactions data alone. This means your users will be more likely to find the items most related to what they are viewing.

    Using the new recipe is simple. Provide metadata about your items along with users’ interactions with your items and Amazon Personalize automatically identifies the most relevant data to recommend similar items in your application.
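
    A minimal sketch of training with the new recipe follows; the solution name and dataset group ARN are placeholders, and the recipe ARN refers to Amazon Personalize's aws-similar-items recipe.

```python
# Sketch: parameters for Amazon Personalize's CreateSolution API using the
# similar-items recipe. Name and dataset group ARN are placeholders.
solution_params = {
    "name": "similar-items-solution",  # hypothetical solution name
    "datasetGroupArn": "arn:aws:personalize:us-east-1:123456789012:dataset-group/my-group",
    "recipeArn": "arn:aws:personalize:::recipe/aws-similar-items",
}
# With boto3:
#   boto3.client("personalize").create_solution(**solution_params)
```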

    Amazon Personalize enables you to personalize your website, app, ads, emails, and more, using the same machine learning technology as used by Amazon, without requiring any prior machine learning experience. To get started with Amazon Personalize, visit our documentation.

    » Amazon EC2 Mac instances are now available in seven additional AWS Regions

    Posted On: Oct 7, 2021

    Starting today, Amazon EC2 Mac instances are available in Europe (Stockholm), Europe (London), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Sydney) Regions. Built on Apple Mac mini computers, EC2 Mac instances enable customers to run on-demand macOS workloads in the AWS cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. With EC2 Mac instances, developers building apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari can now provision and access macOS environments within minutes. EC2 Mac enables developers to dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing to develop, build, test, sign, and publish their apps.

    Today, millions of Apple developers rely on macOS and its innovative tools, frameworks, and APIs to develop, build, test, and sign apps for Apple’s industry-leading platforms that serve more than a billion customers globally. With EC2 Mac instances, Apple developers are now able to leverage the flexibility, elasticity, and scale of AWS so they can increase their focus on core innovation such as developing creative and useful apps and spend less time on managing infrastructure. Customers can also consolidate development of Apple, Windows, and Android apps onto AWS, leading to increased developer productivity and accelerated time to market. Similar to other EC2 instances, customers can easily use EC2 Mac instances together with AWS services and features like Amazon Virtual Private Cloud (VPC) for network security, Amazon Elastic Block Storage (EBS) for fast and expandable storage, Amazon Elastic Load Balancer (ELB) for distributing build queues, Amazon FSx for scalable file storage, and AWS Systems Manager (SSM) for configuring, managing, and patching macOS environments. The availability of EC2 Mac instances also offloads the heavy lifting that comes with managing infrastructure to AWS, which means Apple developers can focus entirely on building great apps.

    EC2 Mac instances are powered by a combination of Mac mini computers (featuring Intel's 8th generation 3.2 GHz, 4.6 GHz turbo, Core i7 processors with 6 physical/12 logical cores and 32 GiB of memory) and the AWS Nitro System, providing up to 10 Gbps of VPC network bandwidth and 8 Gbps of EBS storage bandwidth through high-speed Thunderbolt 3 connections. Amazon EC2 Mac instances are uniquely enabled by the AWS Nitro System, which makes it possible to offer Mac mini computers as fully integrated and managed compute instances with Amazon VPC networking and Amazon EBS storage, just like any other Amazon EC2 instance. EC2 Mac instances are available in a bare metal instance size (mac1.metal), and support macOS Mojave 10.14, macOS Catalina 10.15, and macOS Big Sur 11. Customers can connect to Mac instances via SSH for a command line interface, or via active remote screen sharing using a VNC client for a graphical interface.

    With this expansion, EC2 Mac instances are now available across US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland, Frankfurt, London, Stockholm) and Asia Pacific (Singapore, Seoul, Tokyo, Mumbai, and Sydney) regions. EC2 Mac instances are available for purchase On-Demand or as part of a Savings Plan (1 year and 3 year). To learn more, visit our EC2 Mac page, refer to the EC2 Mac documentation, or get started by spinning up a Mac instance in the AWS console.
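
    Because EC2 Mac instances run on Dedicated Hosts, launching one is a two-step flow: allocate a mac1.metal host, then run an instance on it. The sketch below illustrates this; the Availability Zone, AMI ID, and key name are placeholder assumptions.

```python
# Sketch: allocate a Dedicated Host for mac1.metal, then launch a macOS
# instance onto it. All concrete values are placeholders.
allocate_params = {
    "InstanceType": "mac1.metal",
    "AvailabilityZone": "us-east-1a",    # hypothetical AZ
    "Quantity": 1,
}
run_params = {
    "InstanceType": "mac1.metal",
    "ImageId": "ami-0123456789abcdef0",  # a macOS AMI (placeholder ID)
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "my-key",                 # for SSH access (placeholder)
    "Placement": {"Tenancy": "host"},    # place onto the allocated host
}
# With boto3 ("ec2" client):
#   client.allocate_hosts(**allocate_params)
#   client.run_instances(**run_params)
```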

    » Amazon Lex launches progress updates for fulfillment

    Posted On: Oct 7, 2021

    Starting today, you can configure your Amazon Lex bots to provide periodic updates to users while their requests are processed. Customer support conversations often require execution of business logic that can take some time to complete. For example, updating an itinerary on an airline reservation system may take a couple of minutes during peak hours. Typically, support agents put the call on hold and provide periodic updates (e.g., “We are still processing your request; thank you for your patience”) until the request is fulfilled. Now, you can easily configure your bot to automatically provide such periodic updates in a conversation. With progress updates capability, bot builders can quickly enhance the ability of virtual contact center agents and smart assistants.

    Previously, you had to manage attributes and implement Lambda code to handle fulfillment of tasks that took longer than 30 seconds. You can now directly configure the bot to inform the user about the progress with messages such as “We are working on your request. Thank you for your patience.” In addition, you can configure messages to indicate the fulfillment start (e.g., “We have issued the itinerary update; It may take a couple of minutes to process”) and the fulfillment completion (e.g., “The itinerary is now updated. Your new confirmation code is ABC123”). Native support for fulfillment progress updates enables a simplified bot design and an improved conversational experience.
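
    The configuration described above can be sketched as the fulfillment-updates portion of a Lex V2 intent; field names follow our reading of the Lex V2 FulfillmentUpdatesSpecification model, and the message text and timings are illustrative assumptions.

```python
# Sketch: fulfillment progress updates for a Lex V2 intent, mirroring the
# itinerary example above. Timings and messages are placeholder values.
fulfillment_updates = {
    "active": True,
    "startResponse": {
        "delayInSeconds": 2,
        "messageGroups": [{"message": {"plainTextMessage": {
            "value": "We have issued the itinerary update; it may take "
                     "a couple of minutes to process."}}}],
    },
    "updateResponse": {
        "frequencyInSeconds": 30,  # repeat a hold message every 30 seconds
        "messageGroups": [{"message": {"plainTextMessage": {
            "value": "We are working on your request. "
                     "Thank you for your patience."}}}],
    },
    "timeoutInSeconds": 120,  # stop waiting for fulfillment after two minutes
}
```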

    You can use the fulfillment progress updates in all the AWS regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation page.

    » Amazon QuickSight adds support for Pixel-Perfect dashboards

    Posted On: Oct 7, 2021

    Amazon QuickSight now supports pixel-perfect dashboards with the new free-form layout mode. Free-form layouts provide authors with precise, pixel-level control over the size and placement of visual elements on QuickSight dashboards, including support for overlapping content. In addition, authors can also set additional attributes for QuickSight visuals in free-form layout, including background color, transparency, border color, selection color as well as visibility of the loading animation, visual context menu and on-visual menu. Free-form layout also supports conditional rendering of visual elements, which allows authors to show or hide content based on QuickSight parameter values, enabling context sensitive display of text, visuals and images. The combination of these options allows QuickSight authors to showcase their creativity by creating complex, interactive dashboards that allow end-users to understand key insights from their data.

    You can learn more about creating dashboards with free-form layout and conditional rendering in this blog, or in the QuickSight user guide.

    Free-form layout and conditional rendering are now available in all supported Amazon QuickSight regions - US East (N. Virginia and Ohio), US West (Oregon), EU (Frankfurt, Ireland and London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), South America (São Paulo), and AWS GovCloud (US-West).

    » AWS IoT SiteWise is now available in Asia Pacific (Mumbai), Asia Pacific (Seoul), and Asia Pacific (Tokyo) AWS regions

    Posted On: Oct 7, 2021

    AWS IoT SiteWise is now available in the Mumbai, Seoul, and Tokyo AWS Regions, extending the footprint to 11 AWS Regions.

    AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize and monitor data from industrial equipment at scale to help you make better, data-driven decisions. You can use AWS IoT SiteWise to monitor operations across facilities, quickly compute common industrial performance metrics, and create applications that analyze industrial equipment data to prevent costly equipment issues and reduce gaps in production. This allows you to collect data consistently across devices, identify issues with remote monitoring more quickly, and improve multi-site processes with centralized data. With AWS IoT SiteWise, you can focus on understanding and optimizing your operations, rather than building costly in-house data collection and management applications.

    To get started, log into the AWS Management Console and navigate to AWS IoT SiteWise console and check out a demo to see what you can achieve with AWS IoT SiteWise. For a full list of AWS Regions where AWS IoT SiteWise is available, visit the AWS Region table. To learn more, please visit the AWS IoT SiteWise website or the developer guide.

    Visit the AWS IoT website to learn more about other AWS IoT services.

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now comes with an improved management console

    Posted On: Oct 7, 2021

    We have updated the Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) management console to improve your overall experience with configuring and managing your OpenSearch and Elasticsearch clusters on the service. The new console makes it easier to create and update domains, and to get information about your domains.

    A new dashboard allows you to grasp the overall health of your domains and helps you troubleshoot issues. You can now create a domain through a single page, making it more intuitive to configure inter-dependent settings, such as the availability of certain instance types or features for certain software versions. Viewing and modifying a domain is now organized into Cluster and Security sections, helping you update configuration such as instance count, instance type, and access policies faster. The updated console also improves searching, sorting, and filtering capabilities for domains and notifications, making the overall experience of managing your Amazon OpenSearch Service domains better.

    We would love to hear your feedback. Please use the Feedback option at the bottom of the management console pages to share any feedback that you may have.

    The updated Amazon OpenSearch Service management console is now available across 25 regions globally. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    To learn more about Amazon OpenSearch Service, please visit the product page.

    » Amazon Chime SDK media capture pipelines adds the ability to configure APIs for customizable media capture

    Posted On: Oct 7, 2021

    The Amazon Chime SDK lets developers add real-time audio, video, screen share, and messaging capabilities to their web or mobile applications. With media capture pipelines, developers can capture the contents of their Amazon Chime SDK meeting and save them to an Amazon Simple Storage Service (Amazon S3) bucket of their choice. Starting today, developers can use APIs to customize the media capture experience for their applications by easily switching the way they capture audio, video, and content streams.

    Media capture pipelines in the Amazon Chime SDK can now be configured using APIs, without requiring additional support. Developers can switch the audio stream capture between AudioOnly and AudioWithActiveSpeakerVideo modes, enable or disable capture of individual video streams and content share streams, and use SourceConfiguration to select specific attendee video streams to capture, rather than capturing all individual video streams.
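
As a rough sketch of what such a configuration might look like, the request below follows the shape of the CreateMediaCapturePipeline API: mixed audio with the active speaker's video, individual video capture limited to two attendees, and content-share capture disabled. The ARNs and attendee IDs are placeholders, not real resources.

```python
# Hypothetical CreateMediaCapturePipeline request. Field names follow the
# Amazon Chime API; ARNs and attendee IDs are placeholders.

capture_request = {
    "SourceType": "ChimeSdkMeeting",
    "SourceArn": "arn:aws:chime::111122223333:meeting/meeting-id",  # placeholder
    "SinkType": "S3Bucket",
    "SinkArn": "arn:aws:s3:::my-capture-bucket",                    # placeholder
    "ChimeSdkMeetingConfiguration": {
        # Capture only the listed attendees' individual video streams.
        "SourceConfiguration": {
            "SelectedVideoStreams": {"AttendeeIds": ["attendee-1", "attendee-2"]}
        },
        "ArtifactsConfiguration": {
            "Audio": {"MuxType": "AudioWithActiveSpeakerVideo"},
            "Video": {"State": "Enabled", "MuxType": "VideoOnly"},
            "Content": {"State": "Disabled", "MuxType": "ContentOnly"},
        },
    },
}

# With boto3 this would be sent as:
#   boto3.client("chime").create_media_capture_pipeline(**capture_request)
```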

    To learn more about the Amazon Chime SDK, and the media capture pipeline feature, review the following resources:

  • Amazon Chime SDK
  • Amazon Chime SDK API Reference 
  • Amazon Chime SDK for JavaScript 
  • Blog - Capture Amazon Chime SDK Meeting Using Media Capture Pipelines

    » Announcing Fast File Mode for Amazon SageMaker

    Posted On: Oct 7, 2021

    Amazon SageMaker now supports Fast File Mode for accessing data in training jobs. This enables high performance data access by streaming directly from Amazon S3 with no code changes from the existing File Mode. For example, training a K-Means clustering model on a 100GB dataset took 28 minutes with File Mode but only 5 minutes with Fast File Mode (82% decrease).

    Training machine learning models often requires large amounts of data. Efficiently accessing that data helps improve model training performance. Until now, SageMaker offered two modes for reading data directly from Amazon S3: File Mode and Pipe Mode. File Mode downloads training data to an encrypted Amazon EBS volume attached to the training instance. This download needs to finish before model training starts. Pipe Mode streams the data directly to the training algorithm, which can lead to better performance, but requires code changes.

    Fast File Mode combines the ease of use of the existing File Mode with the performance of Pipe Mode. This provides convenient access to data as if it was downloaded locally, while offering the performance benefit of streaming the data directly from Amazon S3. As a result, training can start without waiting for the entire dataset to be downloaded to the training instances. Fast File Mode is available to use without additional charges.
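
Since no code changes are needed, switching an existing training job to Fast File Mode amounts to changing one field in the input channel. The sketch below shows an InputDataConfig channel for the CreateTrainingJob API; the bucket and channel name are placeholders.

```python
# Sketch of a CreateTrainingJob input channel using Fast File Mode.
# Compared with File Mode, only InputMode changes; the S3 URI here is a
# placeholder.

train_channel = {
    "ChannelName": "train",
    "InputMode": "FastFile",  # was "File"; "Pipe" is the third option
    "DataSource": {
        "S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-training-data/kmeans/",  # placeholder
            "S3DataDistributionType": "FullyReplicated",
        }
    },
}

# Passed as one element of InputDataConfig in
#   boto3.client("sagemaker").create_training_job(...)
```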

    To learn more, please view the documentation for accessing training data in SageMaker. To get started, log into the Amazon SageMaker console.

    » AWS Lambda now supports IAM authentication for Amazon MSK as an event source

    Posted On: Oct 7, 2021

    AWS Lambda functions that are triggered from Amazon MSK topics can now access MSK clusters secured by IAM Access Control. This is in addition to SASL/SCRAM, which is already supported on Lambda. To get started, customers who select MSK as the event source for their Lambda function can configure their function's execution role to allow Lambda to connect to their clusters and read from their topics. This feature requires no additional charge to use, and is available in all AWS Regions where Amazon MSK is supported as an event source for AWS Lambda.
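
The execution-role change mentioned above boils down to granting the function kafka-cluster permissions. The policy sketch below uses action names from MSK IAM access control; the ARNs are placeholders, and your cluster may need a narrower or different action set.

```python
# Sketch of an IAM policy for a Lambda execution role reading from an
# IAM-secured MSK topic. Action names follow MSK IAM access control;
# the ARNs are placeholders.

msk_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kafka-cluster:Connect",
                "kafka-cluster:DescribeGroup",
                "kafka-cluster:AlterGroup",
                "kafka-cluster:DescribeTopic",
                "kafka-cluster:ReadData",
            ],
            "Resource": [
                # Cluster, topic, and consumer-group ARNs (placeholders).
                "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/*",
                "arn:aws:kafka:us-east-1:111122223333:topic/my-cluster/*/my-topic",
                "arn:aws:kafka:us-east-1:111122223333:group/my-cluster/*/*",
            ],
        }
    ],
}
```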

    To learn more about using IAM authentication for your Lambda functions triggered from Amazon MSK topics, read the Lambda Developer Guide.

    » Amazon Kendra launches support for 34 additional languages

    Posted On: Oct 7, 2021

    We are excited to announce that Amazon Kendra is adding support for 34 languages for keyword-based search over documents and FAQs. Amazon Kendra is an intelligent search service powered by machine learning. Customers with content in one or more of the supported languages can now use Amazon Kendra to index and search their content with native language support.
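
For illustration, a query can be scoped to one of the supported languages using Kendra's reserved _language_code attribute. The request sketch below assumes Spanish ("es") content; the index ID is a placeholder.

```python
# Sketch of a Kendra Query request filtered to Spanish-language content
# via the reserved _language_code attribute. The index ID is a placeholder.

query_request = {
    "IndexId": "00000000-0000-0000-0000-000000000000",  # placeholder
    "QueryText": "política de vacaciones",
    "AttributeFilter": {
        "EqualsTo": {
            "Key": "_language_code",
            "Value": {"StringValue": "es"},
        }
    },
}

# Sent with boto3:
#   boto3.client("kendra").query(**query_request)
```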

    The new languages are available in all Amazon Kendra regions - US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Singapore), Canada (Central). For more information about the new languages and supported features, please visit our documentation.

    » AWS Elastic Beanstalk supports Database Decoupling in an Elastic Beanstalk Environment

    Posted On: Oct 6, 2021

    AWS Elastic Beanstalk now supports decoupling a database running in an Elastic Beanstalk environment. Previously, a database instance created by Elastic Beanstalk was tied to the lifecycle of the environment. With this launch, the lifecycle of your database instance will not be tied to your application’s environment lifecycle, and you can decouple a database managed by Elastic Beanstalk from a Beanstalk environment. The environment’s health is not affected by the decoupling operation and you can keep the database operational as an external database, available for multiple environments to connect to it. You also have the option to terminate an Elastic Beanstalk environment while leaving the database operational.

    You can configure the database lifecycle using the Elastic Beanstalk Console or with HasCoupledDatabase and DBDeletionPolicy options in the aws:rds:dbinstance namespace. For more information, see Adding a database to your Elastic Beanstalk environment in the AWS Elastic Beanstalk Developer Guide.
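
The two options named above can be sketched as an OptionSettings payload for an environment update. The environment name is a placeholder, and the DBDeletionPolicy value shown is one assumed option among those the namespace supports.

```python
# Sketch of Elastic Beanstalk option settings that decouple the database
# and retain it when the environment is terminated. Namespace and option
# names come from the announcement; "Retain" is an assumed allowed value.

decouple_settings = [
    {
        "Namespace": "aws:rds:dbinstance",
        "OptionName": "HasCoupledDatabase",
        "Value": "false",  # "true" keeps the database coupled
    },
    {
        "Namespace": "aws:rds:dbinstance",
        "OptionName": "DBDeletionPolicy",
        "Value": "Retain",
    },
]

# Applied with boto3:
#   boto3.client("elasticbeanstalk").update_environment(
#       EnvironmentName="my-env", OptionSettings=decouple_settings)
```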

    » Amazon EMR now supports Apache Spark SQL to insert data into and update Apache Hive metadata tables when Apache Ranger integration is enabled

    Posted On: Oct 6, 2021

    We are announcing support for using Apache Spark SQL to insert data into and update Apache Hive metadata tables when using Amazon EMR integration with Apache Ranger.

    This January, we launched Amazon EMR integration with Apache Ranger, a feature that allows you to define and enforce database, table, and column-level permissions when Apache Spark users access data in Amazon S3 through the Hive Metastore. Previously, with Apache Ranger enabled, you could only read data using Spark SQL statements such as SHOW DATABASES and DESCRIBE TABLE. Now, you can also insert data into, or update, the Apache Hive metadata tables with these statements: INSERT INTO, INSERT OVERWRITE, and ALTER TABLE.

    This feature is enabled on Amazon EMR 6.4 in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Milan), Europe (Stockholm), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Sydney), South America (São Paulo), Middle East (Bahrain), and Africa (Cape Town).

    To get started, see the following resources:

    AWS Big Data Blog posts:

  • Authorize SparkSQL data manipulation on Amazon EMR using Apache Ranger
  • Introducing Amazon EMR integration with Apache Ranger

    Amazon EMR Management Guide:

  • Using Apache Spark SQL with Apache Ranger plugin

    » AWS Network Firewall Adds New Configuration Options for Rule Ordering and Default Drop

    Posted On: Oct 6, 2021

    AWS Network Firewall now offers new configuration options for rule ordering and default drop, making it easier to write and process rules to monitor your virtual private cloud (VPC) traffic.

    AWS Network Firewall enables you to create pass, drop, and alert rules based on their action type. Before today, AWS Network Firewall would evaluate all pass rules before evaluating any drop or alert rules and would evaluate all drop rules before evaluating any alert rules. Starting today, you can configure AWS Network Firewall to evaluate rules in the precise order you specify, regardless of their action type. For example, you can choose to evaluate a drop rule before a pass rule, or you can choose to evaluate an alert rule followed by a drop rule, followed by another alert rule. Strict rule ordering is an optional feature that can be applied to both stateful firewall rule groups and firewall policies. Additionally, you can now configure AWS Network Firewall to drop all non-matching traffic by default without having to write additional rules.
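
The two new options can be sketched as a FirewallPolicy document for the CreateFirewallPolicy API: strict rule ordering plus a default drop for unmatched stateful traffic. The rule group ARN is a placeholder, and the exact action names should be checked against the Network Firewall API reference.

```python
# Sketch of a Network Firewall policy using strict rule ordering and a
# default drop. Option and action names follow the Network Firewall API;
# the rule group ARN is a placeholder.

firewall_policy = {
    "StatelessDefaultActions": ["aws:forward_to_sfe"],
    "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
    # Evaluate stateful rules in exactly the order you define them.
    "StatefulEngineOptions": {"RuleOrder": "STRICT_ORDER"},
    # Drop any traffic no stateful rule explicitly passes.
    "StatefulDefaultActions": ["aws:drop_strict"],
    "StatefulRuleGroupReferences": [
        {
            "ResourceArn": "arn:aws:network-firewall:us-east-1:111122223333:stateful-rulegroup/my-rules",  # placeholder
            "Priority": 1,  # this group's position in the strict order
        }
    ],
}

# Created with boto3:
#   boto3.client("network-firewall").create_firewall_policy(
#       FirewallPolicyName="strict-policy", FirewallPolicy=firewall_policy)
```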

    You can access the new configuration options for rule ordering and default drop from the Amazon VPC console or the Network Firewall API. Now available in 23 AWS Regions, AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. AWS Network Firewall is integrated with AWS Firewall Manager to provide you with central visibility and control of your firewall policies across multiple AWS accounts. To get started with AWS Network Firewall, please see the AWS Network Firewall product page and service documentation.

    » Prepare and visualize time series datasets in Amazon SageMaker Data Wrangler

    Posted On: Oct 6, 2021

    Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface.

    Starting today, you can use new capabilities of Amazon SageMaker Data Wrangler that help make it easier and faster to prepare data for ML including a new collection of time series transformations and two new time series visualizations to quickly generate insights from your time series data. The new time series transformations support missing value imputations, featurization of time series (e.g. Fourier coefficients, autocorrelation statistics, entropy, etc.), resampling operators to downsample or upsample data sets to a uniform frequency, time lag features, and rolling window functions. The new transformations also support more general operations such as grouping, unifying length, flattening, and exporting of vector-valued columns.

    Additionally, you can now visualize seasonality and trends in your data and identify anomalies with new time series visualizations in Amazon SageMaker Data Wrangler. For example, with the seasonality and trend visualization, you can separate seasonal effects from trends in your sales data. Additionally, with the outlier detection visualization, you can identify outliers within your customer purchase datasets to detect changes in customer purchase behavior.

    To get started with new capabilities of Amazon SageMaker Data Wrangler, you can open Amazon SageMaker Studio after upgrading to the latest release and click File > New > Flow from the menu or “new data flow” from the SageMaker Studio launcher. To learn more about the new time series transformations and visualizations, view the documentation.

    » AWS Application Migration Service is now available in the Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), and Europe (London) Regions

    Posted On: Oct 6, 2021

    AWS Application Migration Service (AWS MGN) is now available in four additional AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), and Europe (London). 

    AWS Application Migration Service is the primary service that we recommend for lift-and-shift migrations to AWS. The service minimizes time-intensive, error-prone manual processes by automatically converting your source servers from physical, virtual, and cloud infrastructure to run natively on AWS. You can use the same automated process to migrate a wide range of applications to AWS without making changes to applications, their architecture, or the migrated servers. 

    By using AWS Application Migration Service, you can more quickly realize the benefits of the AWS Cloud - and leverage additional AWS services to further modernize your applications.

    With this launch, AWS Application Migration Service is now available in 17 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo). Access the AWS Regional Services List for the most up-to-date availability information.

    For more information about AWS Application Migration Service, visit the product page or get started for free in the AWS Console.

    » AWS Backup Audit Manager adds compliance reports

    Posted On: Oct 5, 2021

    AWS Backup Audit Manager now allows you to generate reports to track the compliance of your defined data protection policies in AWS Backup. You can create a report plan in AWS Backup Audit Manager to deliver compliance reports in your designated Amazon S3 bucket. You can use these reports to identify violations of your data protection policies, perform remediation, and demonstrate compliance of your data protection policies to meet regulatory requirements.

    AWS Backup enables you to centralize and automate data protection policies across AWS services based on organizational best practices and regulatory standards, and AWS Backup Audit Manager helps you maintain and demonstrate compliance with those policies.

    AWS Backup Audit Manager compliance reports are available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions. For more information on AWS Backup availability and pricing, see the AWS Regional Services List and pricing page. To learn more about AWS Backup Audit Manager, visit the product page, documentation, and AWS News launch blog. To get started, visit the AWS Backup console.

    » Amazon Braket offers D-Wave’s Advantage 4.1 system for quantum annealing

    Posted On: Oct 5, 2021

    You can now access D-Wave’s Advantage 4.1 quantum annealing system on Amazon Braket, the AWS quantum computing service. According to D-Wave, the new Advantage quantum processing unit (QPU) has more than 5,000 active qubits with 15-way connectivity to enable researchers and developers to explore larger and more complex optimization problems.

    With today’s launch, Amazon Braket now offers customers access to three D-Wave QPUs: the new Advantage 4.1 QPU, the existing Advantage 1.1 QPU, and the 2000Q QPU. Based on D-Wave specifications, the Advantage systems have two-and-a-half times more qubits than the D-Wave 2000Q, with more than twice the connectivity of the 2000Q. Furthermore, in comparison to the Advantage 1.1, the latest Advantage 4.1 QPU offers an increased yield for available qubits and couplers, enabling customers to run larger programs more compactly. Additional details on these QPUs can be found on the Amazon Braket console devices page. You can use any of the three D-Wave QPUs and switch between them using the Amazon Braket SDK or the Amazon Braket plugin for the D-Wave Ocean SDK.

    You can access D-Wave systems in the AWS US West (Oregon) Region. Pricing information for the Advantage 4.1 QPU is available here. To learn more and get started, visit the Amazon Braket D-Wave page, console, and documentation.

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) announces support for Cross-Cluster Replication

    Posted On: Oct 5, 2021

    Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports cross-cluster replication, enabling you to automate copying and synchronizing of indices from one domain to another at low latency, in the same or different AWS accounts or Regions. With cross-cluster replication, you can achieve high availability for your mission-critical applications with sequential data consistency.

    Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. To ensure redundancy and availability, customers configure replicas and deploy their domains across multiple Availability Zones, protecting them against instance failures and Availability Zone outages. However, the domain itself can still be a single point of failure. To protect against such failures, customers previously had to create a second domain, fork their input data streams to the two clusters, and place a load balancer in front of the two domains to balance incoming search requests. This setup adds complexity and cost, as it requires additional technologies like Apache Kafka or AWS Lambda to monitor and correct data inconsistencies between the domains.

    With cross-cluster replication for Amazon OpenSearch Service, you can replicate indices at low latency from one domain to another in the same or different AWS Regions without needing additional technologies. Cross-cluster replication provides sequential consistency while continuously copying data from the leader index to the follower index. Sequential consistency ensures the leader and the follower index return the same result set after operations are applied on the indices in the same order. Cross-cluster replication is designed to minimize delivery lag between the leader and the follower index. You can continuously monitor the replication status via APIs. Additionally, if you have indices that follow an index pattern, you can create automatic follow rules so that matching indices are replicated automatically.
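
As a rough sketch of starting replication: on Elasticsearch 7.10 domains the replication plugin's start endpoint takes a request body naming the cross-cluster connection and leader index. The alias, index name, and role names below are assumptions for illustration; check the cross-cluster replication documentation for the exact endpoint and roles for your domain version.

```python
# Assumed request body for starting replication of one index, e.g. sent as
#   PUT <follower-domain>/_opendistro/_replication/<follower-index>/_start
# on an Elasticsearch 7.10 domain. All names below are placeholders.

start_replication = {
    "leader_alias": "leader-connection",   # cross-cluster connection name
    "leader_index": "logs-2021.10",        # index on the leader domain
    "use_roles": {
        # Assumed security-plugin roles; verify against your domain's setup.
        "leader_cluster_role": "cross_cluster_replication_leader_full_access",
        "follower_cluster_role": "cross_cluster_replication_follower_full_access",
    },
}
```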

    Cross-cluster replication is available on the service today for domains running Elasticsearch 7.10. Cross-cluster replication is also available as an open-source feature in OpenSearch 1.1, which is planned to be available on the service soon. 

    Cross-cluster replication is available for Amazon OpenSearch Service across 25 regions globally. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability. To learn more about cross-cluster replication, please see the documentation. To learn more about Amazon OpenSearch Service, please visit the product page.

    » RDS Performance Insights now available in four more regions

    Posted On: Oct 5, 2021

    Amazon Relational Database Service (Amazon RDS) Performance Insights is now available in the Middle East (Bahrain), Africa (Cape Town), Europe (Milan), and Asia Pacific (Osaka) Regions. Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS and Aurora that helps you quickly assess the load on your database and determine when and where to take action.

    RDS Performance Insights allows non-experts to measure database performance with an easy-to-understand dashboard that visualizes database load. With one click, you can add a fully managed performance monitoring solution to your Amazon Aurora clusters and Amazon RDS instances. RDS Performance Insights automatically gathers all necessary performance metrics and visualizes them in a dynamic dashboard on the RDS console. You can identify your database’s top performance bottlenecks from a single graph.

    To get started, log into the Amazon RDS Management Console and enable RDS Performance Insights when creating or modifying an instance of a supported RDS engine. Then go to the RDS Performance Insights dashboard to start monitoring performance.
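
Enabling Performance Insights on an existing instance can also be done through the ModifyDBInstance API. The sketch below uses real boto3 parameter names; the instance identifier is a placeholder, and 7 days matches the no-cost retention window described below.

```python
# Sketch of enabling Performance Insights on an existing RDS instance.
# Parameter names follow the RDS ModifyDBInstance API; the instance
# identifier is a placeholder.

modify_args = {
    "DBInstanceIdentifier": "my-database",    # placeholder
    "EnablePerformanceInsights": True,
    "PerformanceInsightsRetentionPeriod": 7,  # days; longer retention is paid
}

# Applied with boto3:
#   boto3.client("rds").modify_db_instance(**modify_args)
```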

    RDS Performance Insights is included with supported Amazon Aurora clusters and Amazon RDS instances and stores seven days of performance history in a rolling window at no additional cost. If you need longer-term retention, you can choose to pay for up to two years of performance history retention. For a complete list of regions where RDS Performance Insights is offered, see AWS Regions. To learn more about RDS Performance Insights and supported database engines, read the Amazon RDS User Guide.

    » Amazon Location Service adds change detection to tracking

    Posted On: Oct 5, 2021

    Today, Amazon Location Service is adding distance-based filtering of device position updates.

    Amazon Location Service is a fully managed service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data security, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Amazon Location Service provides a consistent API across high-quality LBS data providers (Esri and HERE), all managed through one AWS console.

    With the new distance-based filtering enabled, each position update from a device is compared to the previous position update, and position changes of less than 30 m are ignored. These new positions are neither stored nor evaluated against associated geofences. This reduces a customer's cost of implementing a tracking application because only significant position changes are saved or trigger geofence evaluations. The feature also reduces the effect of jitter caused by inaccurate positioning systems (such as mobile phones), reducing false geofence entry and exit events, and improving map visualization of position updates. 
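
The filtering mode is set on the tracker resource. The sketch below shows a CreateTracker request with distance-based filtering; the tracker name is a placeholder.

```python
# Sketch of creating an Amazon Location tracker with distance-based
# filtering, so updates within roughly 30 m of the previous position are
# ignored. The tracker name is a placeholder.

tracker_args = {
    "TrackerName": "delivery-fleet",       # placeholder
    "PositionFiltering": "DistanceBased",  # alternative: "TimeBased"
}

# Created with boto3:
#   boto3.client("location").create_tracker(**tracker_args)
```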

    Amazon Location Service is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

    To learn more, visit the Amazon Location Service start tracking page.

    » Announcing Amazon EC2 Capacity Reservation Fleet, a way to easily migrate Amazon EC2 Capacity Reservations across instance types

    Posted On: Oct 5, 2021

    Using Amazon EC2 Capacity Reservation Fleet, you can easily migrate your reserved Amazon EC2 capacity to new-generation instance types. Capacity Reservations allow you to reserve capacity for your immediate use in a specific instance type and Availability Zone, and can be cancelled by you at any time. With Capacity Reservation Fleet, you can reserve capacity across a prioritized list of instance types. When your reservations for lower-priority instance types are unused, Capacity Reservation Fleet automatically converts them to capacity reservations for higher-priority instance types.

    For example, when Amazon EC2 launches a new instance type, you can include that new instance type as your highest-priority instance type in your Capacity Reservation Fleet configuration. As Amazon EC2 adds more capacity for the new instance type across different Availability Zones, Capacity Reservation Fleet will automatically shift your reserved capacity footprint to your preferred instance type, allowing you to maintain a single global configuration and seamlessly migrate your reservations.
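
The prioritized list described above can be sketched as a CreateCapacityReservationFleet request that prefers a newer instance type and falls back to an older one. Instance types, Availability Zone, and capacity are placeholders; per the EC2 API, a lower Priority value indicates higher priority.

```python
# Sketch of a Capacity Reservation Fleet preferring a newer instance
# generation with an older fallback. All concrete values are placeholders.

fleet_args = {
    "InstanceTypeSpecifications": [
        {
            "InstanceType": "m6i.xlarge",     # preferred, newer generation
            "InstancePlatform": "Linux/UNIX",
            "AvailabilityZone": "us-east-1a",
            "Weight": 1.0,
            "Priority": 1,                    # lower value = higher priority
        },
        {
            "InstanceType": "m5.xlarge",      # fallback
            "InstancePlatform": "Linux/UNIX",
            "AvailabilityZone": "us-east-1a",
            "Weight": 1.0,
            "Priority": 2,
        },
    ],
    "TotalTargetCapacity": 10,
    "InstanceMatchCriteria": "open",
    "Tenancy": "default",
}

# Created with boto3:
#   boto3.client("ec2").create_capacity_reservation_fleet(**fleet_args)
```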

    Capacity Reservation Fleet is available in all public AWS regions.

    To learn more about using Capacity Reservation Fleet, please visit this page.

    » AWS Transfer Family customers can now use Amazon S3 Access Point aliases for granular and simplified data access controls

    Posted On: Oct 5, 2021

    AWS Transfer Family now supports Amazon S3 Access Points, a feature of Amazon S3 that allows you to easily manage granular access to shared data sets. Now, you can use S3 Access Point aliases anywhere an S3 bucket name is used today for shared datasets that are utilized by hundreds of SFTP, FTP, and FTPS users and groups.

    AWS Transfer Family provides fully managed file transfers over SFTP, FTPS, and FTP for Amazon S3 and Amazon EFS. You can create hundreds of access points in Amazon S3 for users who have different permissions to access shared data in an Amazon S3 bucket. AWS Transfer Family’s logical directories allow you to map multiple S3 buckets and files to a unified logical namespace for your users. Using AWS Transfer Family’s logical directories combined with S3 Access Points, you can provide granular access for a large set of data without having to manage a single bucket policy that spans hundreds of use cases.
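
Combining logical directories with access points might look like the CreateUser request sketched below, where the mapping target uses an S3 Access Point alias in place of a bucket name. The server ID, role ARN, and alias are placeholders.

```python
# Sketch of a Transfer Family user whose logical home directory maps to an
# S3 Access Point alias instead of a bucket name. All IDs, ARNs, and the
# alias are placeholders.

user_args = {
    "ServerId": "s-1234567890abcdef0",  # placeholder
    "UserName": "partner-a",
    "Role": "arn:aws:iam::111122223333:role/transfer-access-role",  # placeholder
    "HomeDirectoryType": "LOGICAL",
    "HomeDirectoryMappings": [
        {
            # Path the user sees : actual location behind the access point alias
            "Entry": "/uploads",
            "Target": "/my-access-point-alias-s3alias/partner-a/uploads",  # placeholder
        }
    ],
}

# Created with boto3:
#   boto3.client("transfer").create_user(**user_args)
```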

    AWS Transfer Family support for S3 Access Points is available in all AWS Regions where AWS Transfer Family is available. To learn more, visit this blog post or read the documentation on S3 Access Points and Access Points aliases.  

    » AWS Glue DataBrew is now available in the AWS Africa (Cape Town) Region

    Posted On: Oct 5, 2021

    AWS Glue DataBrew, a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data for analytics and machine learning, is now available in the AWS Africa (Cape Town) Region. See where DataBrew is available by using the AWS Region Table.

    With over 250 pre-built transformations, DataBrew lets you clean and normalize data without writing any code. You can automate filtering anomalies, converting data to standard formats, correcting invalid values, and other tasks. 

    To learn more, view this getting started video and refer to the DataBrew documentation. To start using DataBrew, visit the AWS Management Console or install the DataBrew plugin in your Notebook environment.

    » Announcing general availability of VMware Cloud on AWS Outposts

    Posted On: Oct 5, 2021

    We are announcing the general availability of VMware Cloud on AWS Outposts, a jointly-engineered solution that delivers VMware Cloud on AWS as a fully managed experience to virtually any datacenter, co-location space, or on-premises facility with AWS Outposts. It runs VMware’s enterprise-class Software-Defined Data Center (SDDC) software on dedicated AWS Nitro System-based EC2 bare metal Outposts instances. VMware Cloud on AWS Outposts is built for VMware workloads that require low-latency access to on-premises systems, local data processing, or data residency.

    VMware Cloud on AWS Outposts simplifies IT operations. AWS delivers and installs the Outpost at your on-premises location, monitors, patches, and updates it, and handles all maintenance and replacement of the hardware. VMware provides continuous lifecycle management of VMware SDDC and serves as your first line of support.

    VMware Cloud on AWS and VMware Cloud on AWS Outposts share the same infrastructure, architecture, and operations, offering a truly consistent hybrid experience. With access to over 200 AWS services, VMware Cloud on AWS Outposts empowers you to innovate faster wherever your workloads need to be deployed.

    To get started with VMware Cloud on AWS Outposts, you can contact your AWS or VMware sales representative to place an order. VMware Cloud on AWS Outposts can be shipped to the US and connected to US East (N. Virginia) or US West (Oregon). If you want to deploy VMware Cloud on AWS Outposts outside of the US or connect VMware Cloud on AWS Outposts to other AWS regions, please contact your AWS or VMware sales representative. 

    For more information on VMware Cloud on AWS Outposts, visit our website or read our blog.

    » AWS Backup Audit Manager now supports AWS CloudFormation

    Posted On: Oct 5, 2021

    AWS Backup Audit Manager now supports AWS CloudFormation, allowing you to audit and report on the compliance of your data protection policies using AWS CloudFormation templates. You can now deploy AWS Backup Audit Manager's pre-built, customizable controls using AWS CloudFormation templates and evaluate whether all your backups are in compliance with your policies. You can also generate audit reports that help you monitor your operational posture and demonstrate compliance of your backups with regulatory requirements.
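
As an illustration, a minimal CloudFormation template (shown here in its JSON form as a Python dict) could deploy a compliance report plan using the AWS::Backup::ReportPlan resource type. The bucket name is a placeholder, and the property set should be verified against the CloudFormation resource reference.

```python
# Sketch of a CloudFormation template fragment (JSON form) deploying a
# Backup Audit Manager report plan. Property names follow the
# AWS::Backup::ReportPlan resource type; the bucket name is a placeholder.

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ComplianceReportPlan": {
            "Type": "AWS::Backup::ReportPlan",
            "Properties": {
                "ReportPlanName": "DailyResourceCompliance",
                "ReportDeliveryChannel": {
                    "S3BucketName": "my-backup-reports",  # placeholder
                    "Formats": ["CSV"],
                },
                "ReportSetting": {
                    "ReportTemplate": "RESOURCE_COMPLIANCE_REPORT"
                },
            },
        }
    },
}
```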

    AWS Backup Audit Manager’s support for AWS CloudFormation is available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Middle East (Bahrain) Regions. For more information on AWS Backup availability and pricing, see the AWS Regional Services List and pricing page. To learn more about AWS Backup Audit Manager, visit the product page, documentation, and AWS News launch blog. To get started, visit the AWS Backup console.

    » AWS Firewall Manager now supports centralized logging of AWS Network Firewall logs

    Posted On: Oct 5, 2021

    AWS Firewall Manager now enables you to configure logging for your AWS Network Firewalls provisioned using a Firewall Manager policy. When you set up a Firewall Manager policy for Network Firewall, you can now enable logging for all the accounts that are in scope of the policy and have the logs centralized under your Firewall Manager administrator account. This makes it easy to enable logging for AWS Network Firewall across multiple accounts and VPCs through a single Firewall Manager policy. 

    You can get started by enabling centralized logging through the Firewall Manager policy and selecting the type of logs - alert, flow, or both - along with the Amazon S3 bucket to send the logs to. After you enable centralized logging through the Firewall Manager policy, logs from each Network Firewall provisioned by Firewall Manager are delivered to a single Amazon S3 bucket for storage. Each log entry provides information such as the name of the firewall, the Availability Zone associated with the firewall endpoint, the timestamp the log was created, and detailed information about the event.
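
    Because all firewalls deliver to a single bucket, a small amount of parsing code can summarize events across the whole organization. The sketch below assumes the JSON field names described above (firewall name, Availability Zone, timestamp, event detail); the sample entry is hypothetical.

```python
import json

# Hypothetical centralized Network Firewall log entry, shaped after the
# fields described above. Exact field names are an assumption based on the
# Network Firewall logging documentation.
raw_entry = json.dumps({
    "firewall_name": "fms-managed-firewall",
    "availability_zone": "us-east-1a",
    "event_timestamp": "1633438800",
    "event": {"event_type": "alert", "alert": {"action": "blocked"}},
})

def summarize_log_entry(line: str) -> str:
    """Return a one-line summary of a centralized firewall log entry."""
    entry = json.loads(line)
    return "{} ({}): {}".format(
        entry["firewall_name"],
        entry["availability_zone"],
        entry["event"].get("event_type", "unknown"),
    )

summary = summarize_log_entry(raw_entry)
```

    In practice you would run logic like this over objects read from the central Amazon S3 bucket, for example from an S3-triggered Lambda function.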

    AWS Firewall Manager is a security management service which allows customers to centrally configure and manage firewall rules across their accounts and resources in AWS Organizations. With Firewall Manager, customers can configure and monitor rules for AWS WAF, AWS Shield Advanced, VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall across their entire organization, while ensuring that all security rules are consistently enforced, even as new accounts and resources are created. 

    To get started with AWS Firewall Manager, please see the product page and service documentation for more details. See the AWS Region Table for the list of regions where AWS Firewall Manager is currently available.

    » AWS License Manager now supports Delegated Administrator for Managed entitlements

    Posted On: Oct 4, 2021

    AWS License Manager announces Delegated Administrator support for Managed entitlements. This feature allows license administrators to manage and distribute licenses across their AWS accounts from a delegated account outside of the management account. Using delegated administrator, you can grant licenses from AWS Marketplace and Independent Software Vendors across your organization and benefit from the administrative capabilities previously afforded to the management account only.

    Delegated Administrator provides you the flexibility to separate license management activities from billing activities. You can configure AWS License Manager in your management account and select another AWS account as the delegated administrator. Once the delegated administrator account is configured, you can use this account to grant licenses to AWS account IDs and organization IDs, enable automatic license acceptance, and activate licenses on behalf of the accounts in your organization.
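
    As a sketch, the delegation itself could be registered from the management account with the AWS Organizations API, for example via boto3's organizations.register_delegated_administrator(**request). The account ID is a placeholder and the service principal string is an assumption; the call is not executed here.

```python
# Request parameters for delegating License Manager administration to a
# member account. Placeholder account ID; service principal is an assumption.
delegation_request = {
    "AccountId": "111122223333",  # member account to act as delegated administrator
    "ServicePrincipal": "license-manager.amazonaws.com",
}
```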

    Managed entitlements allows AWS customers to more easily distribute entitlement access to their AWS accounts or organization for licenses granted from Independent Software Vendors and from AWS Marketplace supported product types such as AMIs, Containers, Machine Learning, and Data Exchange products. Visit the Managed entitlements feature page, and delegated administration documentation to learn more. 

    » AWS IoT Events is available in the Asia Pacific (Mumbai) Region

    Posted On: Oct 4, 2021

    AWS IoT Events is now available in the Asia Pacific (Mumbai) Region, extending the footprint to 13 AWS regions.

    AWS IoT Events is a fully managed service that makes it easy to detect and respond to changes indicated by IoT sensors and applications. For example, you can use AWS IoT Events to detect malfunctioning machinery, a stuck conveyor belt, or a slowdown in production output. When an event is detected, AWS IoT Events automatically triggers actions or alerts so that you can resolve issues quickly, reduce maintenance costs, and increase operational efficiency.

    Detecting events based on data from thousands of devices requires companies to write code to evaluate the data, deploy infrastructure to host the code, and secure the architecture from end-to-end, which is undifferentiated heavy lifting that customers want to avoid. Using AWS IoT Events, customers can now easily detect events like this at scale by analyzing data from a single sensor or across thousands of IoT sensors and hundreds of equipment management applications in near real time. With AWS IoT Events, customers use a simple interface to create detectors that evaluate device data and trigger AWS Lambda functions or notifications via Amazon Simple Notification Service (SNS) in response to events. For example, when temperature changes indicate that a freezer door is not sealing properly, AWS IoT Events can automatically trigger a text message to a service technician to address the issue.
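
    The freezer-door scenario above can be sketched as a detector model payload of the kind passed to the IoT Events CreateDetectorModel API (for example via boto3's iotevents.create_detector_model(**detector_model)). The input name, temperature threshold, role ARN, and SNS topic ARN are all hypothetical placeholders.

```python
# Minimal one-state detector model: when the reported freezer temperature
# rises above -10 (suggesting the door is not sealing), publish to an SNS
# topic that texts a technician. All ARNs and names are placeholders.
detector_model = {
    "detectorModelName": "FreezerDoorMonitor",
    "roleArn": "arn:aws:iam::111122223333:role/IoTEventsRole",  # placeholder
    "detectorModelDefinition": {
        "initialStateName": "Normal",
        "states": [
            {
                "stateName": "Normal",
                "onInput": {
                    "events": [
                        {
                            "eventName": "TempTooHigh",
                            # Condition evaluated against incoming device data.
                            "condition": "$input.FreezerInput.temperature > -10",
                            "actions": [
                                {"sns": {"targetArn": "arn:aws:sns:us-east-1:111122223333:technician-alerts"}}
                            ],
                        }
                    ]
                },
            }
        ],
    },
}
```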

    To get started with AWS IoT Events, launch a sample detector model and test inputs to it from the AWS IoT Events console. For a full list of AWS Regions where AWS IoT Events is available, visit the AWS Region table. Visit our AWS IoT website to learn more about AWS IoT services.

    » Amazon CodeGuru announces Security detectors for Python applications and security analysis powered by Bandit

    Posted On: Oct 4, 2021

    Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code.

    Today we are announcing two new features for Amazon CodeGuru Reviewer that can help detect and prevent security vulnerabilities in Python applications. Security detectors for Python identify security risks from the top ten Open Web Application Security Project (OWASP) categories, security best practices for AWS APIs, and incorrect use of common crypto libraries. CodeGuru now also performs an analysis of your code using Bandit (https://github.com/PyCQA/bandit), an open source tool that specializes in scanning Python code for security issues.
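
    As a concrete illustration of the kind of finding Bandit surfaces, consider weak-hash usage: Bandit flags hashlib.md5 (e.g., under its insecure-hash rules), and switching to SHA-256 resolves the finding. The function names below are our own, not part of any CodeGuru or Bandit API.

```python
import hashlib

def fingerprint_weak(data: bytes) -> str:
    # Would be flagged by Bandit: MD5 is not collision-resistant.
    return hashlib.md5(data).hexdigest()

def fingerprint(data: bytes) -> str:
    # Preferred: SHA-256 avoids the weak-hash finding.
    return hashlib.sha256(data).hexdigest()

digest = fingerprint(b"example")
```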

    Amazon CodeGuru Reviewer makes it easy to add thorough security analysis—that combines CodeGuru’s machine learning-based detectors and the widely-used security analysis tool for Python—to your development workflow. There is nothing to deploy or configure, no infrastructure to maintain or updates to manage. Engineering and security teams can integrate the service with their pull request workflows or CI/CD pipelines to catch vulnerabilities before they go to production.

    You can get started from the CodeGuru console by running a full repository scan or integrating CodeGuru Reviewer with your CI/CD pipeline. Code analysis from Bandit is included as part of the CodeGuru Reviewer service at no additional cost. 

    To learn more about CodeGuru Reviewer, take a look at the Amazon CodeGuru page. To contact the team visit the Amazon CodeGuru developer forum. For more information about automating code reviews and application profiling with Amazon CodeGuru check out the AWS ML Blog. For more details on how to get started visit the documentation.

    » Introducing Amazon WorkSpaces Cost Optimizer v2.4

    Posted On: Oct 4, 2021

    The AWS Solutions team recently updated Amazon WorkSpaces Cost Optimizer, a solution that analyzes all of your Amazon WorkSpaces usage data and automatically converts each WorkSpace to the most cost-effective billing option (hourly or monthly), depending on its individual usage. This solution also helps you monitor your WorkSpaces usage and optimize costs.

    The updated solution adds a feature that reduces costs by terminating WorkSpaces that have not been used for a month. Customers can now deploy the solution in AWS GovCloud (US) to monitor their WorkSpaces, and can provide a list of Regions in which to monitor WorkSpaces usage. The release also generates reports aggregated across directories for easier analysis of WorkSpaces.
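
    The core hourly-versus-monthly decision the solution automates can be sketched as a simple breakeven comparison. The rates below are hypothetical; real WorkSpaces pricing varies by bundle and Region, and hourly billing also carries a small monthly infrastructure fee that a production calculation would include.

```python
# Simplified sketch of the billing-mode decision: pick monthly billing once
# metered hourly cost would exceed the flat monthly rate. Rates are invented.
def best_billing_mode(hours_used: int, hourly_rate: float, monthly_rate: float) -> str:
    """Return the cheaper billing option for one WorkSpace for one month."""
    hourly_cost = hours_used * hourly_rate
    return "MONTHLY" if hourly_cost > monthly_rate else "HOURLY"

light_user = best_billing_mode(hours_used=20, hourly_rate=0.30, monthly_rate=35.00)
heavy_user = best_billing_mode(hours_used=160, hourly_rate=0.30, monthly_rate=35.00)
```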

    Additional AWS Solutions Implementations offerings are available on the AWS Solutions page, where customers can browse common questions by category to find answers in the form of succinct Solution Briefs or comprehensive Solution Implementations, which are AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » Amazon EC2 Hibernation adds support for Ubuntu 20.04 LTS

    Posted On: Oct 4, 2021

    Amazon EC2 now supports Hibernation for Ubuntu 20.04 LTS operating system. Hibernation allows you to pause your EC2 Instances and resume them at a later time, rather than fully terminating and restarting them. Resuming your instance lets your applications continue from where they left off so that you don’t have to restart your OS and application from scratch. Hibernation is useful for cases where rebuilding application state is time-consuming (e.g., developer desktops) or an application’s start-up steps can be prepared in advance of a scale-out.

    For Ubuntu 20.04 LTS, Hibernation is supported for On-Demand Instances running on the C3, C4, C5, C5d, I3, M3, M4, M5, M5a, M5ad, M5d, R3, R4, R5, R5a, R5ad, R5d, T2, T3, and T3a instance families, with up to 150 GB of RAM.
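
    As a sketch, hibernation is enabled at launch time, for example via boto3's ec2.run_instances(**launch_params) (not executed here). The AMI ID is a placeholder; note that hibernation requires an encrypted EBS root volume large enough to hold the instance's RAM contents in addition to the OS and applications.

```python
# Launch parameters for a hibernation-enabled instance. The AMI ID is a
# placeholder for an Ubuntu 20.04 LTS image; the root volume must be
# EBS-backed and encrypted for hibernation to work.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder Ubuntu 20.04 LTS AMI
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "HibernationOptions": {"Configured": True},
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/sda1",
            "Ebs": {"VolumeSize": 30, "VolumeType": "gp3", "Encrypted": True},
        }
    ],
}
```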

    Hibernation is available in all commercial AWS Regions and AWS GovCloud (US) Regions except Asia Pacific (Osaka).

    Hibernation is available through AWS CloudFormation, AWS Management Console, the AWS SDKs, AWS Tools for Powershell, or the AWS Command Line Interface (CLI). To learn more about Hibernation, see our FAQs, technical documentation, and blog.

    » Amazon Textract extends support for AWS PrivateLink to AWS GovCloud (US) Regions

    Posted On: Oct 4, 2021

    Starting today, Amazon Textract extends support for AWS PrivateLink to both AWS GovCloud (US) Regions. Customers can now access Amazon Textract from their Amazon Virtual Private Cloud (Amazon VPC) in AWS GovCloud (US) without using public IPs and without requiring the traffic to traverse the Internet.

    Amazon Textract is a fully managed machine learning service that automatically extracts text and data from scanned documents and goes beyond simple optical character recognition (OCR), to identify the contents of fields in forms and information stored in tables.

    AWS PrivateLink provides private connectivity between VPCs and AWS services, without leaving the AWS network. Using AWS PrivateLink, you can access Amazon Textract securely by keeping your network traffic within the AWS network, while simplifying your internal network architecture. You do not need to use an Internet Gateway, Network Address Translation (NAT) devices, or firewall proxies to connect to Amazon Textract.
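
    As a sketch, the private connectivity is set up by creating an interface VPC endpoint for Textract, for example via boto3's ec2.create_vpc_endpoint(**endpoint_request) (not executed here). The resource IDs are placeholders, and the service name follows the usual com.amazonaws.<region>.<service> pattern.

```python
# Request parameters for an interface VPC endpoint to Amazon Textract in
# AWS GovCloud (US-West). All IDs below are placeholders.
endpoint_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc1234",
    "ServiceName": "com.amazonaws.us-gov-west-1.textract",
    "SubnetIds": ["subnet-0abc1234"],
    "SecurityGroupIds": ["sg-0abc1234"],
    # Resolve the standard Textract endpoint name to private IPs in the VPC.
    "PrivateDnsEnabled": True,
}
```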

    To learn more about using PrivateLink to access Amazon Textract without leaving the AWS network, click here.

    » Amazon RDS for PostgreSQL Supports PostGIS 3.1

    Posted On: Oct 4, 2021

    Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports PostGIS major version 3.1. This new version of PostGIS is available on PostgreSQL versions 13.4, 12.8, 11.13, 10.18, and higher.

    PostGIS allows you to store, query, and analyze geospatial data within a PostgreSQL database. PostGIS 3.1 significantly improves the performance of operations such as spatial joins, which now run up to 6.8x faster on PostgreSQL 13. As an example, you could use a spatial join to count the number of people living in an area defined by the reception of mobile phones from radio towers.
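
    A spatial join of that kind might look like the query below. The table and column names are invented for illustration; the SQL is held as a string rather than executed, since it needs a PostGIS-enabled PostgreSQL connection (e.g., via psycopg2).

```python
# Hypothetical spatial join: total population inside each radio tower's
# coverage polygon. ST_Intersects is the spatial join predicate that PostGIS
# 3.1 accelerates on PostgreSQL 13.
population_by_tower_sql = """
SELECT t.tower_id,
       SUM(b.population) AS people_covered
FROM   towers t
JOIN   census_blocks b
  ON   ST_Intersects(t.coverage_geom, b.geom)
GROUP BY t.tower_id;
"""
```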

    PostGIS 3.1 is the new default version on PostgreSQL 10 and higher starting with the new minor versions. However, you can still create older versions of PostGIS in your PostgreSQL database, e.g., if you require version stability. Learn more about working with PostGIS in the Amazon RDS User Guide.

    Amazon RDS for PostgreSQL makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

    » Amazon CodeGuru now includes recommendations powered by Infer

    Posted On: Oct 4, 2021

    Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code.

    Today we are announcing that, in addition to the existing code analysis performed by Amazon CodeGuru, customers can analyze Java code using Infer, an open source tool that specializes in finding concurrency defects, among other issues. Defects found by Infer are shown in the CodeGuru console, in pull request comments, or through CI/CD integrations, alongside code recommendations generated by CodeGuru’s code quality and security detectors.

    By combining the capabilities of Amazon CodeGuru Reviewer’s machine learning-based detectors with one of the most popular open source code analysis tools, CodeGuru makes it easy to integrate and maintain a thorough static analysis solution that detects and helps prevent code issues across the most important defect categories. There is nothing to deploy or configure, no infrastructure to maintain or updates to manage.

    You can get started from the CodeGuru console by running a full repository scan or integrating CodeGuru Reviewer with your CI/CD pipeline. Code analysis from Infer is included as part of the CodeGuru Reviewer service at no additional cost.

    To learn more about CodeGuru Reviewer, take a look at the Amazon CodeGuru page. To contact the team visit the Amazon CodeGuru developer forum. For more information about automating code reviews and application profiling with Amazon CodeGuru check out the AWS ML Blog. For more details on how to get started visit the documentation.

    » Now use Apache Spark, Hive, and Presto on Amazon EMR clusters directly from Amazon SageMaker Studio for large-scale data processing and machine learning

    Posted On: Oct 1, 2021

    You can now use open source frameworks such as Apache Spark, Apache Hive, and Presto running on Amazon EMR clusters directly from Amazon SageMaker Studio notebooks to run petabyte-scale data analytics and machine learning. Amazon EMR automatically installs and configures open source frameworks and provides a performance-optimized runtime that is compatible with, and faster than, standard open source. For example, Spark 3.0 on Amazon EMR is 1.7x faster than its open source equivalent. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps required to prepare data, as well as build, train, and deploy models. Analyzing, transforming, and preparing large amounts of data is a foundational step of any data science and ML workflow. This release makes it simple to use popular frameworks such as Apache Spark, Hive, and Presto running on EMR clusters directly from SageMaker Studio to help simplify data science and ML workflows.

    With this release, you can now visually browse a list of EMR clusters directly from SageMaker Studio and connect to them in a few clicks. Once connected to an EMR cluster, you can use Spark SQL, Scala, Python, and HiveQL to interactively query, explore, and visualize data, and run Apache Spark, Hive, and Presto jobs to process data. Jobs run fast because they use EMR’s performance-optimized versions of Spark, Hive, and Presto. Further, clusters can automatically scale up or down based on the workloads and integrate with Spot Instances and Graviton2-based processors to lower costs. Finally, SageMaker Studio users can authenticate when they connect to Amazon EMR clusters using LDAP-based credentials or Kerberos.

    These features are supported on EMR 5.9.0 and above, and are generally available in all AWS Regions where SageMaker Studio is available. To learn more, watch the demo Interactive data processing on Amazon EMR from Amazon SageMaker, read the blog Perform interactive data engineering and data science workflows from Amazon SageMaker Studio notebooks or the SageMaker Studio documentation here.
     

    » Amazon RDS for PostgreSQL Supports New Minor Versions 13.4, 12.8, 11.13, 10.18, and 9.6.23; Amazon RDS on Outposts Supports New PostgreSQL Minor Versions 13.4 and 12.8

    Posted On: Oct 1, 2021

    Following the announcement of updates to the PostgreSQL database, we have added support in Amazon Relational Database Service (Amazon RDS) for PostgreSQL minor versions 13.4, 12.8, 11.13, 10.18, and 9.6.23. We have also added support in Amazon RDS on Outposts for PostgreSQL minor versions 13.4 and 12.8. This release closes security vulnerabilities in PostgreSQL and contains bug fixes and improvements done by the PostgreSQL community.

    This release also adds support for PostGIS 3.1 and updates the pglogical extension to version 2.4.0. PostGIS allows you to store, query, and analyze geospatial data within a PostgreSQL database. pglogical replication provides fine-grained control over replicating and synchronizing parts of a database. For example, you can use logical replication to replicate an individual table of a database. Please see the list of supported extensions in the Amazon RDS User Guide for specific versions.

    Amazon RDS for PostgreSQL makes it easy to set up, operate, and scale PostgreSQL deployments in the cloud. Learn more about upgrading your database instances from the Amazon RDS User Guide. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

    » Now programmatically manage alternate contacts on AWS accounts

    Posted On: Oct 1, 2021

    Today, we are making it easier for customers to view and update the alternate contacts on their AWS accounts using the AWS Command Line Interface (CLI) and AWS SDK. Customers can now programmatically keep their billing, operations, and security contacts for their accounts up to date to ensure that they receive important notifications about their AWS accounts. Support for additional account settings will be available in future releases.

    For customers using AWS Organizations, organization administrators can now centrally manage alternate contacts for member accounts using the management account or a delegated administrator account without requiring credentials for each AWS account. For example, you can now set the same security alternate contact on all of your accounts so your Cloud Center of Excellence (CCoE) team can receive important security notifications about your AWS accounts. To learn how you can update the alternate contacts across your organization, please see our blog post, Programmatically managing alternate contacts on member accounts with AWS Organizations.
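
    As a sketch, setting the security alternate contact on a member account uses the Account API, for example via boto3's account.put_alternate_contact(**contact) (not executed here). All values below are placeholders; AccountId is only passed when calling from the management account or a delegated administrator account.

```python
# Parameters for setting a member account's security alternate contact so
# that the CCoE team receives security notifications. Placeholder values.
contact = {
    "AccountId": "111122223333",           # member account (omit for own account)
    "AlternateContactType": "SECURITY",    # one of BILLING, OPERATIONS, SECURITY
    "Name": "CCoE Security Team",
    "Title": "Security Contact",
    "EmailAddress": "ccoe-security@example.com",
    "PhoneNumber": "+1-555-0100",
}
```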

    The ability to update alternate contacts on AWS accounts is available at no additional charge in all commercial AWS Regions and AWS China Regions. To learn more about account management, see the documentation.
     

    » Amazon MSK adds support for Apache Kafka version 2.8.1

    Posted On: Oct 1, 2021

    Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 2.8.1 for new and existing clusters. Apache Kafka 2.8.1 includes several bug fixes. To learn more about these fixes you can review the Apache Kafka release notes for 2.8.1.
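
    An in-place upgrade of an existing cluster can be sketched with the MSK API, for example via boto3's kafka.update_cluster_kafka_version(**upgrade_request) (not executed here). The ARN is a placeholder, and note that CurrentVersion refers to the cluster's internal version identifier, not the Kafka version string.

```python
# Request parameters for upgrading an existing MSK cluster to Kafka 2.8.1.
# ClusterArn and CurrentVersion are placeholders; CurrentVersion is the
# cluster's current version identifier as returned by describe_cluster.
upgrade_request = {
    "ClusterArn": "arn:aws:kafka:us-east-1:111122223333:cluster/demo/abcd1234",
    "CurrentVersion": "K3AEGXETSR30VB",  # placeholder cluster version identifier
    "TargetKafkaVersion": "2.8.1",
}
```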

    Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easy for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on applications and less time managing clusters. To learn how to get started, see the Amazon MSK Developer Guide.

    Support for Apache Kafka version 2.8.1 is offered in all AWS regions where Amazon MSK is available.
