The contents of this page are copied directly from AWS blog sites to make them Kindle friendly. Some styles and sections from those pages have been removed so the content renders properly in the 'Article Mode' of the Kindle e-Reader browser. All content on this page is the property of AWS.


AWS Data Exchange now supports automatic exports of third-party data updates

Posted On: Sep 30, 2021

AWS Data Exchange subscribers can now use auto-export to automatically copy newly published revisions from their third-party data subscriptions to an Amazon S3 bucket of their choice in just a few clicks. With auto-export, subscribers no longer have to manually export new revisions or dedicate engineering resources to building ingestion pipelines that export new revisions as soon as they are published. For data subscribers that manage frequent updates to their file-based third-party data, auto-export saves significant time and effort.

Once you set up auto-export for a data set in the AWS Data Exchange console, any new data that providers publish is automatically copied directly to the Amazon S3 bucket of your choice. You can configure naming patterns to store exported revisions in a structured way; if there are multiple teams in your organization using the same data set, you can export the data to up to 5 separate Amazon S3 bucket locations.
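Setup is also scriptable. As a minimal sketch, the snippet below uses the AWS Data Exchange CreateEventAction API through boto3 to export every newly published revision of an entitled data set to S3; the data set ID, bucket name, and key pattern are placeholders.

```python
import boto3

dx = boto3.client("dataexchange")

# Auto-export every newly published revision of an entitled data set to S3.
# The data set ID and bucket name below are placeholders.
dx.create_event_action(
    Event={"RevisionPublished": {"DataSetId": "aae4c2cd145a48454f9369d4a4db5c66"}},
    Action={
        "ExportRevisionToS3": {
            "RevisionDestinations": [
                {
                    "Bucket": "my-data-bucket",
                    "KeyPattern": "${Revision.CreatedAt}/${Asset.Name}",
                }
            ]
        }
    },
)
```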

AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Customers use AWS Data Exchange to subscribe to a diverse selection of third-party data products. Once subscribed, customers can use the AWS Data Exchange API or the AWS Data Exchange console to export the data they’ve subscribed to into Amazon S3, making it directly available for analysis using AWS analytics and machine learning services.

To learn more, see Subscribing to Data Products in the AWS Data Exchange User Guide.

» Amazon SES now supports 2048-bit DKIM keys

Posted On: Sep 30, 2021

Amazon Simple Email Service (Amazon SES) customers can now use 2048-bit DomainKeys Identified Mail (DKIM) keys to enhance their email security. DKIM is an email security standard designed to ensure that an email that claims to have come from a specific domain was indeed authorized by the owner of that domain. It uses public-key cryptography to sign an email with a private key. Recipient servers can then use a public key published in the domain's DNS to verify that parts of the email were not modified in transit.

Until now, Amazon SES supported 1024-bit DKIM keys, the current industry standard. With this launch, customers can choose either 1024-bit or 2048-bit keys, regardless of whether they use the Amazon SES Easy DKIM feature or Bring Your Own DKIM. The longer 2048-bit keys are harder to break, providing customers with enhanced email protection and domain authentication. To learn more about using 2048-bit DKIM keys, see this page in the Amazon SES Developer Guide.
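For an Easy DKIM identity, the key length can be switched through the SES v2 API. A minimal boto3 sketch, assuming an already-verified domain ("example.com" is a placeholder):

```python
import boto3

sesv2 = boto3.client("sesv2")

# Rotate an existing Easy DKIM identity to a 2048-bit RSA signing key.
sesv2.put_email_identity_dkim_signing_attributes(
    EmailIdentity="example.com",
    SigningAttributesOrigin="AWS_SES",  # Easy DKIM (SES-managed keys)
    SigningAttributes={"NextSigningKeyLength": "RSA_2048_BIT"},
)
```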

Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page.

» Announcing General Availability of Amplify Geo for AWS Amplify

Posted On: Sep 30, 2021

Today, we are announcing that Amplify Geo for JavaScript is generally available, following our initial Developer Preview release in August. Amplify Geo enables frontend developers to quickly add location-aware features to their web applications. Extending existing Amplify use case categories like Auth, DataStore, and Storage, Amplify Geo includes a set of abstracted client libraries built on top of Amazon Location Service, as well as ready-to-use map UI components based on the popular MapLibre open-source library. Amplify Geo also updates the Amplify Command Line Interface (CLI) tool so that people who aren’t familiar with AWS can achieve common mapping use cases by provisioning all required cloud services.

With this release, developers can add modern, interactive maps with location markers to their production web apps, using geographical data sourced from Amazon Location Service. Developers can either add these maps to script tags in HTML pages with just a few lines of code, or bundle them into their React apps via an NPM package. Amplify Geo allows end-users to search for points of interest (POIs), business names, or street addresses, and have their results presented as both a list and as markers on a map. Amplify Geo also provides map styling capabilities, so developers can tweak embedded maps to complement their apps’ theme, or developers can choose from many community-developed MapLibre plugins for more flexibility and advanced visualization options.

Get started with Amplify Geo for JavaScript-based web frameworks like React through an NPM package or for a simple HTML page through script tags today. You can also get started by reading this blog post on how to add a map with markers to your React app.

» AWS announces the general availability of AWS Cloud Control API

Posted On: Sep 30, 2021

AWS announces the general availability of AWS Cloud Control API, a set of common application programming interfaces (APIs) that is designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster. Using Cloud Control API, developers can manage the lifecycle of hundreds of AWS resources and over a dozen third-party resources with five consistent APIs instead of using distinct service-specific APIs. With this launch, AWS Partner Network (APN) Partners can now automate how their solutions integrate with existing and future AWS features and services through a one-time integration, instead of spending weeks of custom development work as new resources become available. Terraform by HashiCorp and Pulumi have integrated their solutions as part of this launch.

Cloud Control API enables developers to create, read, update, delete, and list (CRUDL) AWS and third-party service resources with consistent APIs. Resources include schema (properties and handler permissions) and handlers that control API interactions with the underlying services. Using Cloud Control API, developers have a uniform method to manage supported services throughout their lifecycle, so there are fewer APIs to learn as developers add services to their infrastructure. For instance, developers can create supported cloud resources using Cloud Control API’s CreateResource API, be it an AWS Lambda function, an Amazon Elastic Container Service (ECS) cluster, or hundreds of other AWS resources, along with over a dozen third-party solutions available on the CloudFormation Registry spanning monitoring, database, and security management resources. Developers can move faster by removing the need to author, maintain, and set up custom code across distinct service-specific APIs. Furthermore, Cloud Control API is up to date with the latest AWS resources as soon as they are available on the CloudFormation Registry, enabling APN Partners to integrate their own solutions with Cloud Control API just once, and then automatically access new AWS resources without additional integration work.
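To make the CRUDL surface concrete, here is a minimal boto3 sketch that creates an S3 bucket through Cloud Control API and polls the asynchronous request for completion; the bucket name is a placeholder.

```python
import json
import boto3

cc = boto3.client("cloudcontrol")

# Create an S3 bucket through the uniform Cloud Control surface.
# The same call shape works for any supported resource type.
response = cc.create_resource(
    TypeName="AWS::S3::Bucket",
    DesiredState=json.dumps({"BucketName": "my-example-bucket"}),
)

# Creation is asynchronous; check the returned request token for status.
status = cc.get_resource_request_status(
    RequestToken=response["ProgressEvent"]["RequestToken"]
)
print(status["ProgressEvent"]["OperationStatus"])
```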

Cloud Control API is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Canada (Central), Europe (Ireland, Frankfurt, London, Stockholm, Paris, Milan), Asia Pacific (Hong Kong, Mumbai, Osaka, Singapore, Sydney, Seoul, Tokyo), South America (Sao Paulo), Middle East (Bahrain), Africa (Cape Town), and AWS GovCloud (US).

You can use the AWS CLI or AWS SDKs to get started with Cloud Control API. To learn more:

  • Visit the AWS Cloud Control API product page
  • Check out the AWS News Blog post 
  • Refer to the User Guide and API reference
» AWS Step Functions adds support for over 200 AWS Services with AWS SDK Integration

    Posted On: Sep 30, 2021

    AWS Step Functions now integrates with the AWS SDK, expanding the number of supported AWS Services from 17 to over 200 and AWS API Actions from 46 to over 9,000.

AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. Developers define workflows visually using Workflow Studio, in their programming language of choice using the CDK, or in Python using the AWS Step Functions Data Science SDK. These workflows use Step Functions Service Integrations to compose code (running in AWS Lambda or Amazon ECS) or AWS resources (including DynamoDB tables, AWS Glue jobs, and Amazon EventBridge event buses) into components of modern applications.

    Now, with the AWS SDK integration, it’s even simpler to build on AWS. SaaS developers can take data stored in Amazon S3, augment it with information stored in Amazon DynamoDB, then process with AWS machine learning services such as Amazon Textract or Amazon Comprehend to add new capabilities for their users. Security operations engineers can build reliable, observable, and auditable workflows that react to events from Amazon EventBridge then execute actions in Amazon EC2 to enforce IT controls. Mobile application developers can build a synchronous API that uses Amazon Personalize, Amazon Location Service, and Amazon Pinpoint to enrich the experience of their users. These solutions can be faster to build, easier to scale, and cheaper to maintain because Step Functions manages the complexity, so developers can focus on business logic. And developers gain this advantage with the full selection of AWS Services.
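As an illustration of the new integration pattern, the sketch below creates a single-state machine that calls Amazon Comprehend's DetectSentiment API directly through the aws-sdk service integration, with no Lambda function in between. The state machine name and IAM role ARN are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# One Task state calling Comprehend directly via the aws-sdk integration.
definition = {
    "StartAt": "DetectSentiment",
    "States": {
        "DetectSentiment": {
            "Type": "Task",
            # Pattern: arn:aws:states:::aws-sdk:<service>:<apiAction>
            "Resource": "arn:aws:states:::aws-sdk:comprehend:detectSentiment",
            "Parameters": {"Text.$": "$.review", "LanguageCode": "en"},
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="sentiment-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsComprehendRole",  # placeholder
)
```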

    The AWS SDK integration is generally available in the following regions: US East (Ohio and N. Virginia), US West (Oregon and N. California), Canada (Central), EU (Ireland and Frankfurt), and Asia Pacific (Tokyo). It will be generally available in all other commercial regions where Step Functions is available in the coming days. For a complete list of regions and service offerings, see AWS Regions.

    To learn more about these new integrations, read the launch blog, view the Developer Guide, and try building a state machine using our AWS SDK integration tutorial.

    » AWS IoT Core now makes it optional for customers to send the entire trust chain when provisioning devices using Just-in-Time Provisioning and Just-in-Time Registration

    Posted On: Sep 30, 2021

You can now provision devices using the AWS IoT Core Just-in-Time Provisioning and Just-in-Time Registration features without having to send the entire trust chain on devices’ first connection to IoT Core. Until now, customers were required to configure their devices to present both the registered CA certificate and the client certificate signed by that CA certificate as part of the TLS handshake on devices’ first connection to IoT Core. Effective today, AWS IoT Core makes it optional for customers to present the CA certificate on devices’ first connection when using Just-in-Time Provisioning and Just-in-Time Registration. This enhancement makes it easier for customers to migrate brownfield devices to AWS IoT Core, for example, from customers’ self-managed cloud solutions.

    AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. Before devices can securely connect and communicate with AWS IoT Core, customers need to provision their devices. Provisioning refers to the process of registering devices’ digital identities with the cloud service, attaching permissions for the devices to access cloud resources, and associating contextual information (like device serial numbers, location) with registered digital identities. With AWS IoT Core Just-in-Time Provisioning and Just-in-Time Registration features, customers can have their devices provisioned automatically when devices first attempt to connect to AWS IoT Core.
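In practice, this means a device's certificate file no longer needs the signing CA certificate concatenated onto it for the first connect. A hedged sketch using the common paho-mqtt client (endpoint and file names are placeholders):

```python
import ssl
import paho.mqtt.client as mqtt

# device.pem.crt can now contain just the client certificate; appending the
# registered signing CA to form the full trust chain is optional for JITP/JITR.
client = mqtt.Client(client_id="my-device")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",   # server verification root, unchanged
    certfile="device.pem.crt",      # previously: device cert + signing CA concatenated
    keyfile="private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
```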

You can visit the AWS IoT Core Just-in-Time Provisioning and Just-in-Time Registration documentation to learn more.

    » Amazon ECS Service Discovery Now Available in AWS GovCloud (US) Regions

    Posted On: Sep 30, 2021

    Today, Amazon Elastic Container Service (ECS) launches integrated service discovery in the AWS GovCloud (US) Regions.

    Amazon ECS Service Discovery makes it easy for your containerized services to discover and connect with each other. With Service Discovery, Amazon ECS creates and manages a registry of service names using AWS Cloud Map so you can refer to a service by name in your code and use DNS to resolve the service name to the service’s endpoint at runtime.
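A minimal boto3 sketch of attaching a Cloud Map service registry when creating an ECS service; the cluster, subnet, security group, and registry ARN are placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-gov-west-1")

# Register the service with an AWS Cloud Map service registry so peers
# can resolve it by DNS name at runtime.
ecs.create_service(
    cluster="my-cluster",
    serviceName="orders",
    taskDefinition="orders:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
    serviceRegistries=[
        {"registryArn": "arn:aws-us-gov:servicediscovery:us-gov-west-1:123456789012:service/srv-example"}
    ],
)
```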

    ECS Service Discovery is available for all networking modes for EC2 or AWS Fargate launch types. To learn more, visit the Amazon ECS Service Discovery documentation.

    » Amazon Monitron launches iOS app

    Posted On: Sep 30, 2021

Today, we are announcing the launch of the Amazon Monitron iOS app. The iOS app joins the existing Android app, giving customers more options for using Amazon Monitron. iPhone users can now use the Amazon Monitron iOS app to set up their sensors and gateway devices, and to receive reports on operating behavior and alerts about potential failures in their equipment.

    Amazon Monitron is an end-to-end system that uses machine learning (ML) to detect abnormal conditions in industrial equipment, enabling customers to implement predictive maintenance and reduce unplanned downtime. It includes sensors to capture vibration and temperature data from equipment, a gateway device to securely transfer data to AWS, the Amazon Monitron service that analyzes the data for abnormal equipment conditions using machine learning, and a companion mobile app for setup, analytics and alert notifications.

Amazon Monitron helps monitor and detect potential failures in a broad range of rotating equipment such as motors, gearboxes, pumps, fans, bearings, and compressors. Amazon Monitron Sensors and Gateways are available for purchase separately or bundled in starter packs on Amazon.com or with your Amazon Business account in the US, UK, Germany, Spain, France, Italy, and Canada. The Amazon Monitron service is available in the US East (N. Virginia) and Europe (Ireland) Regions, and you can download the Amazon Monitron app from the Google Play Store and the Apple App Store.

    » AWS Lambda now supports triggering Lambda functions from an Amazon SQS queue in a different account

    Posted On: Sep 30, 2021

    AWS Lambda now allows customers to trigger functions from Amazon Simple Queue Service (Amazon SQS) queues that are in a different AWS account. Previously, customers could trigger Lambda functions from SQS queues in the same account only. Starting today, customers can create Lambda functions in multiple AWS accounts without needing to replicate the event source in each account.

    To get started, customers can select Amazon SQS as their event source when adding a trigger for their Lambda function, and then provide the Amazon Resource Name (ARN) for their SQS queue in any AWS account. The Lambda function will need permissions to manage messages in the SQS queue, which can be handled by updating the function’s execution role permissions. The SQS queue will also need to grant cross-account permissions to Lambda in order to allow the function to process messages from the queue.
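As a sketch, creating the trigger programmatically is a single event source mapping call; the queue ARN (note the different account ID) and function name are placeholders, and the queue's resource policy must separately allow the function's execution role to receive and delete messages.

```python
import boto3

lambda_client = boto3.client("lambda")

# The SQS queue lives in account 111111111111; the function in this account.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111111111111:orders-queue",
    FunctionName="process-orders",
    BatchSize=10,
)
```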

This functionality is available in all AWS Regions where Amazon SQS is supported as an event source for AWS Lambda, at no additional charge. Both the Lambda function and the SQS queue must be in the same Region, though they can be in different accounts. To learn more about using SQS as an event source for Lambda across accounts, read the Lambda Developer Guide.

    » Amazon SageMaker JumpStart introduces new multimodal (long-form text, tabular) financial analysis tools

    Posted On: Sep 30, 2021

Amazon SageMaker JumpStart helps you quickly and easily get started with machine learning. SageMaker JumpStart provides a set of solutions for the most common use cases that can be deployed readily with just a few clicks, as well as one-click deployment and fine-tuning of popular open source models. Starting today, you can access a collection of multimodal financial text analysis tools, including example notebooks, text models, and a solution.

With this new release, you can use the new set of multimodal financial analysis tools within Amazon SageMaker JumpStart. With these new tools, you can enhance your tabular ML workflows with new insights from financial text documents and potentially save weeks of development time. Using the new SageMaker JumpStart Industry SDK, you can easily retrieve common public financial documents, including SEC filings, and further process financial text documents with features such as summarization and scoring for sentiment, litigiousness, risk, and readability. In addition, you can access pre-trained language models trained on financial text for transfer learning, and use example notebooks for data retrieval, text feature engineering, and multimodal classification and regression models. Lastly, you can access a solution for corporate credit scoring, which is fully customizable and showcases the use of AWS CloudFormation templates and reference architectures so you can accelerate your machine learning journey.

    Amazon SageMaker JumpStart is available in all regions where Amazon SageMaker Studio is available. To learn more about the new multimodal financial text analysis tools, view these two blogs:  "Use SEC text for ratings classification using multimodal ML in Amazon SageMaker JumpStart" and "Use pre-trained financial language models for transfer learning in Amazon SageMaker JumpStart." To get started with SageMaker JumpStart, refer to the documentation.

    » AQUA for Amazon Redshift launches in three additional AWS regions

    Posted On: Sep 30, 2021

    AQUA (Advanced Query Accelerator) for Amazon Redshift is now generally available in three additional AWS regions: Europe (Stockholm), Asia Pacific (Seoul), and US West (N. California).

    AQUA is a new distributed and hardware-accelerated cache that enables Amazon Redshift to run up to 10x faster than other enterprise cloud data warehouses by automatically boosting certain types of queries. AQUA uses AWS-designed processors with AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors, implemented in FPGAs, to accelerate operations such as scans, filtering, and aggregation.

AQUA is available with the RA3.16xlarge, RA3.4xlarge, or RA3.xlplus nodes at no additional charge and with no code changes. You can enable AQUA for your existing Redshift RA3 clusters or launch a new AQUA-enabled RA3 cluster via the AWS Management Console, API, or CLI. To learn more about AQUA, visit the documentation.
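For an existing RA3 cluster, enabling AQUA is a one-call change. A hedged boto3 sketch (the cluster identifier is a placeholder):

```python
import boto3

redshift = boto3.client("redshift")

# Turn AQUA on for an existing RA3 cluster; "auto" and "disabled"
# are the other accepted values.
redshift.modify_aqua_configuration(
    ClusterIdentifier="my-ra3-cluster",
    AquaConfigurationStatus="enabled",
)
```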

    AQUA is now generally available in US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) regions, and will expand to additional regions in the coming months.

    » Amazon Comprehend adds two Trusted Advisor checks

    Posted On: Sep 30, 2021

    Amazon Comprehend now supports two new AWS Trusted Advisor checks to help customers optimize the cost and security of Amazon Comprehend endpoints.

    AWS Trusted Advisor provides recommendations that help you follow AWS best practices. Trusted Advisor evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. Today, Amazon Comprehend checks are available in the AWS Business Support and AWS Enterprise Support plans. The new checks are:

    1. Underutilized endpoints: Checks the throughput configuration of your endpoints and generates a warning when they are not actively used for any real-time inference requests.
    2. Endpoint permissions: Checks the KMS key permissions for an endpoint whose underlying model was encrypted using customer managed keys. If the customer managed key has been disabled or the key policy has been changed to alter the granted permissions for Amazon Comprehend for any reason, the endpoint availability might be impacted.

The new checks are available to view in the Trusted Advisor section of the AWS Console and are also accessible via the AWS Support API. Customers can set up alerts based on the results of Trusted Advisor checks.
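A minimal sketch of reading these check results through the AWS Support API with boto3; note the Support API requires a Business or Enterprise Support plan and is served from the us-east-1 endpoint.

```python
import boto3

support = boto3.client("support", region_name="us-east-1")

# Find the Comprehend checks by name, then fetch each result's status.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if "Comprehend" in check["name"]:
        result = support.describe_trusted_advisor_check_result(checkId=check["id"])
        print(check["name"], "->", result["result"]["status"])
```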

    To learn more about setting up alarms using Amazon CloudWatch, see: Creating Trusted Advisor alarms using CloudWatch. For a full set of Trusted Advisor Best Practice Checks, see: AWS Trusted Advisor best practice checklist. Go here for Comprehend documentation.

    » AWS announces AWS Snowcone SSD

    Posted On: Sep 30, 2021

AWS Snowcone is now available with solid state drive (SSD) storage and 14 TB of capacity. AWS Snowcone is the smallest AWS Snow Family device equipped to handle edge computing, edge storage, and data transfers. With this launch, AWS Snowcone is available in both hard disk drive (HDD) and solid state drive (SSD) versions. Snowcone SSD has the same motherboard and industrial design as Snowcone, but it enables new data transfer and edge computing use cases that require (1) higher throughput performance, (2) stronger vibration resistance, (3) expanded durability, and (4) increased storage capacity (14 TB on Snowcone SSD vs. 8 TB on Snowcone).

    Like Snowcone, Snowcone SSD is also a portable and rugged device built with the security of your data in mind. It is compact enough to fit in a backpack and designed to withstand harsh environments. Customers can deploy a Snowcone device at the rugged edge or directly mount the device on autonomous vehicles (AV) to collect data, process it locally, and migrate it to AWS. With Snowcone and Snowcone SSD, data can be transferred either offline by shipping the device directly to AWS, or online by using AWS DataSync, pre-installed on all Snowcone devices, to send the data to AWS over the network.

    Rugged edge locations often lack the space, power, and cooling needed for data center IT equipment to run applications. With 2 vCPUs, 4 GB of memory, 14 TB of usable SSD storage, and wired networking, Snowcone SSD runs edge computing workloads with select Amazon EC2 instances or AWS IoT Greengrass. Snowcone SSD is small, approximately 9 inches x 6 inches x 3 inches, weighs 4 lbs., and supports an optional battery for extended mobility. 

    To setup and manage Snowcone SSD, you can use AWS OpsHub, a graphical user interface that enables you to rapidly deploy edge computing workloads and simplify data migration to the cloud. DataSync comes pre-installed on the device to move data online to and from Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server, as well as between AWS Storage services.

    Access the AWS Region services list to see where AWS Snowcone SSD is available. To learn more, visit the AWS Snowcone documentation and AWS Snowcone product page. To get started, order Snowcone SSD in the AWS Snow Family console.

    » Monitoring clock accuracy on AWS Fargate with Amazon ECS

    Posted On: Sep 30, 2021

    You can now monitor the system time accuracy for your Amazon ECS tasks running on AWS Fargate. For time-sensitive workloads running on Fargate, this gives customers the ability to monitor the clock error bound, which is used as a proxy for clock error, to know if the difference between reference time and system time exceeds a threshold. This capability leverages Amazon Time Sync Service to measure clock accuracy and provide the clock error bound for containers.

    The clock error bound information is available to query through the task metadata endpoint version 4 in all AWS Regions where AWS Fargate is available. To read more about clock accuracy and clock error bound, refer to this blog post.
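From inside a running task, the data is one HTTP GET away on the injected metadata endpoint. A hedged sketch follows; the ClockDrift field names follow the launch blog post and should be treated as an assumption to verify against the task metadata documentation.

```python
import json
import os
import urllib.request

# ECS injects this environment variable into Fargate containers.
uri = os.environ["ECS_CONTAINER_METADATA_URI_V4"] + "/task"
metadata = json.load(urllib.request.urlopen(uri))

# Field names per the launch blog post (assumed shape).
drift = metadata.get("ClockDrift", {})
print("Clock error bound (ms):", drift.get("ClockErrorBound"))
print("Synchronization status:", drift.get("ClockSynchronizationStatus"))
```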

    » Amazon Redshift launches RA3.xlplus in AWS GovCloud (US) Regions

    Posted On: Sep 30, 2021

Amazon Redshift RA3.xlplus nodes are now available in the AWS GovCloud (US) Regions. Amazon Redshift RA3 instances with managed storage allow you to scale compute and storage independently for fast query performance and lower costs. RA3 is available in three different node types so you can balance price and performance depending upon your workload requirements. RA3.xlplus nodes offer one-third of the compute (4 vCPU) and memory (32 GiB) of RA3.4xlarge nodes at one-third of the price. RA3 nodes are built on the AWS Nitro System and feature high-bandwidth networking and large high-performance SSDs as local caches.

    AWS GovCloud (US) Regions are designed to host sensitive data, regulated workloads, and address the most stringent U.S. government security and compliance requirements. Learn more in Introduction to AWS GovCloud (US) Regions.

    To upgrade your cluster to an RA3 cluster, you can take a snapshot of your existing cluster and restore it to an RA3 cluster, or do a resize from your existing cluster to a new RA3 cluster. To learn more about RA3 nodes, see the Amazon Redshift RA3 feature page. You can find more information on pricing by visiting the Amazon Redshift pricing page.
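The resize path can be driven through the API as well. A minimal boto3 sketch (cluster identifier and node count are placeholders):

```python
import boto3

redshift = boto3.client("redshift", region_name="us-gov-west-1")

# Elastic resize of an existing cluster onto RA3.xlplus nodes.
redshift.resize_cluster(
    ClusterIdentifier="my-cluster",
    NodeType="ra3.xlplus",
    NumberOfNodes=4,
)
```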

    » Amazon ECR Public adds the ability to launch containers directly to AWS App Runner

    Posted On: Sep 29, 2021

    Today, Amazon Elastic Container Registry Public (Amazon ECR Public) announced the ability to launch containers directly from the ECR Public Gallery to AWS App Runner to quickly test popular web application container images. AWS App Runner is a fully managed service that makes it easier for developers to quickly deploy web applications and APIs, at scale with no prior infrastructure experience required.

While browsing the gallery, you will see a 'Launch with App Runner' button for web application images compatible with App Runner. For example, you will see the button on the hello-app-runner image or Nginx. Clicking this button opens the ‘Deploy Service’ page on the App Runner Management Console, where you can change any of the default settings before running the container. By contrast, this button is not available on the AWS for Fluent Bit image, which is not a web application. To get started, you can also see our documentation here and App Runner’s regional availability here.

    » Announcing general availability of Amazon RDS for MySQL and Amazon Aurora MySQL databases as new data sources for federated querying

    Posted On: Sep 29, 2021

With the Amazon Redshift federated query capability, many customers have been able to combine live data from operational databases with the data in their Amazon Redshift data warehouse and Amazon S3 data lake to get a unified analytics view across all the data in the enterprise. Now Amazon Redshift federated query support is generally available for Amazon Aurora MySQL and Amazon RDS for MySQL databases, in addition to the existing Amazon Aurora PostgreSQL and Amazon RDS for PostgreSQL databases.

Amazon Redshift federated query allows you to incorporate live data from transactional databases as part of your business intelligence (BI) and reporting applications to enable operational analytics. The intelligent optimizer in Amazon Redshift pushes down and distributes a portion of the computation directly into the remote operational databases to help speed up performance by reducing the data moved over the network. Amazon Redshift complements subsequent execution of the query by leveraging its massively parallel processing capabilities for a further speedup. Federated query also makes it easier to ingest data into Amazon Redshift by letting you query operational databases directly, apply transformations on the fly, and load data into the target tables without requiring complex ETL pipelines.
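Setting up a MySQL federated source is a single CREATE EXTERNAL SCHEMA statement. A sketch submitting it through the Redshift Data API with boto3; the cluster, database endpoint, role, and secret ARNs are all placeholders.

```python
import boto3

rsd = boto3.client("redshift-data")

# Map a live Aurora MySQL database into Redshift as an external schema.
sql = """
CREATE EXTERNAL SCHEMA myaurora
FROM MYSQL
DATABASE 'sales'
URI 'aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds';
"""

rsd.execute_statement(
    ClusterIdentifier="my-redshift-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=sql,
)
```

Once the schema exists, queries can join `myaurora` tables with local Redshift tables and Spectrum tables directly.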

    Federated query support for Amazon Aurora MySQL and Amazon RDS MySQL databases is available to all Amazon Redshift customers. To get started and learn more, visit the documentation. Refer to the AWS Region Table for Amazon Redshift availability.

    » Amazon Redshift announces the next generation of Amazon Redshift Query Editor

    Posted On: Sep 29, 2021

    Amazon Redshift Query Editor V2 makes data in your Amazon Redshift data warehouse and data lake more accessible with a web-based tool for SQL users such as data analysts, data scientists, and database developers. With Query Editor V2, users can explore, analyze, and collaborate on data. It reduces the operational costs of managing query tools by providing a web-based application that allows you to focus on exploring your data without managing your infrastructure.

End-users can now easily gain access to a web-based SQL editor using their Single Sign-On (SSO) provider without requiring privileges to access the Amazon Redshift console. You can access Query Editor V2 by navigating from the Redshift console or using a direct URL.

    Amazon Redshift Query Editor V2 provides a powerful editor to author queries, user-defined functions, and stored procedures that you run on Amazon Redshift. It supports running multiple SQL statements at once and lets you view the results for each statement in separate tabs on the results pane. As a data analyst or data engineer, you can now use session variables and temporary tables in your queries. You can run long-running queries without having to leave your browser window open and retrieve the results later within 24 hours.

With Query Editor V2, you can gain insight faster by performing in-place visual analysis of your data, switching to the charts view with a single click. Query Editor V2 also lets you collaborate with your team by sharing your saved queries securely, and it manages versions of your saved queries automatically.

Query Editor V2 is available in all AWS commercial Regions except the AP-Northeast-3 (Osaka) Region. To get started and learn more, read the documentation, watch this demo, or read this blog post.

» AWS Snowcone is now available in the US East (Ohio), US West (N. California) and South America (Sao Paulo) regions

    Posted On: Sep 29, 2021

The AWS Snowcone service is now available for customer orders in the US East (Ohio), US West (N. California), and South America (Sao Paulo) Regions. With this launch, Snowcone is now available for order in the US East (Ohio), US West (N. California), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Canada (Central), Asia Pacific (Sydney), EU (Frankfurt), EU (Ireland), US East (N. Virginia), and US West (Oregon) Regions. AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices. Snowcone is portable, rugged, and secure – small and light enough to fit in a backpack, and able to withstand harsh environments. Customers use Snowcone to deploy applications at the edge, and to collect data, process it locally, and move it to AWS either offline (by shipping the device to AWS) or online (by using AWS DataSync on Snowcone to send the data to AWS over the network).

Edge locations often lack the space, power, and cooling needed for data center IT equipment to run applications. With 2 vCPUs, 4 GB of memory, 8 TB of usable storage, and wired networking, Snowcone runs edge computing workloads with select Amazon EC2 instances or AWS IoT Greengrass. Snowcone is small, approximately 9 inches x 6 inches x 3 inches, weighs 4.5 lbs., and supports battery operation for mobility.

    To setup and manage Snowcone, you can use AWS OpsHub, a graphical user interface that enables you to rapidly deploy edge computing workloads and simplify data migration to the cloud. DataSync comes pre-installed on the device to move data online to and from Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server, as well as between AWS Storage services.

    Access the AWS Region services list to see where AWS Snowcone is available. To learn more, visit the AWS Snowcone documentation and AWS Snowcone product page. To get started, order Snowcone in the AWS Snow Family console.

    » AWS IoT SiteWise is now available in the AWS GovCloud (US-West) Region

    Posted On: Sep 29, 2021

    AWS IoT SiteWise is now available in the AWS GovCloud (US-West) Region, extending the footprint to 8 AWS Regions.

    AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize and monitor data from industrial equipment at scale to help you make better, data-driven decisions. You can use AWS IoT SiteWise to monitor operations across facilities, quickly compute common industrial performance metrics, and create applications that analyze industrial equipment data to prevent costly equipment issues and reduce gaps in production. This allows you to collect data consistently across devices, identify issues with remote monitoring more quickly, and improve multi-site processes with centralized data. With AWS IoT SiteWise, you can focus on understanding and optimizing your operations, rather than building costly in-house data collection and management applications.

To get started, log in to the AWS Management Console, navigate to the AWS IoT SiteWise console, and check out a demo to see what you can achieve with AWS IoT SiteWise. For a full list of AWS Regions where AWS IoT SiteWise is available, visit the AWS Region table. To learn more, please visit the AWS IoT SiteWise website or the developer guide.

    Visit the AWS IoT website to learn more about other AWS IoT services.

    » Achieve up to 34% better price/performance with AWS Lambda Functions powered by AWS Graviton2 processor

    Posted On: Sep 29, 2021

AWS Lambda functions powered by next-generation AWS Graviton2 processors are now generally available. Graviton2 functions, which use an Arm-based processor architecture, are designed to deliver up to 19% better performance at 20% lower cost for a variety of serverless workloads, such as web and mobile backends and data and media processing. With lower latency and better performance, functions powered by AWS Graviton2 processors are ideal for powering mission-critical serverless applications.

    Customers can configure existing x86-based functions to target the AWS Graviton2 processor or create new functions powered by AWS Graviton2 using the Console, API, AWS CloudFormation, and AWS CDK. AWS Lambda Layers will also support targeting x86-based or Arm-based functions using either zip files or container images.
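Targeting Graviton2 comes down to the function's architecture setting. A minimal boto3 sketch creating a new arm64 function (names, role ARN, and zip file are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a new function targeting the arm64 (Graviton2) architecture.
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="my-arm64-function",
        Runtime="python3.9",
        Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder
        Handler="app.handler",
        Code={"ZipFile": f.read()},
        Architectures=["arm64"],  # omit or use ["x86_64"] for x86
    )
```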

    AWS Lambda functions powered by AWS Graviton2 processors are available in Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (N. Virginia), US East (Ohio), and US West (Oregon).

    Click here to launch your first AWS Lambda function powered by AWS Graviton2 processor. For complete information on pricing and regional availability, please refer to the AWS Lambda pricing page.

    » Amazon Managed Service for Prometheus is now Generally Available with support for alert manager and rules

    Posted On: Sep 29, 2021

    Amazon Managed Service for Prometheus is now generally available. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open-source project for monitoring and alerting that is optimized for container environments.

    As part of this launch, we are introducing additional capabilities such as the Prometheus-compatible alert manager, recording rules and alerting rules. We also added support for provisioning Amazon Managed Service for Prometheus workspaces, rules configurations, and alert manager configurations with AWS CloudFormation. You can track and audit changes made to your Amazon Managed Service for Prometheus workspaces with an expanded set of CloudTrail logs, such as when recording rules are deleted, alert configurations are changed, and more. Customers can also tag their Amazon Managed Service for Prometheus workspaces to help manage, identify, organize, filter, and control access to them.
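A hedged sketch of provisioning a workspace and a recording rule with boto3; the workspace alias and rule contents are placeholders, and the rules payload is standard Prometheus rules-file YAML.

```python
import boto3

amp = boto3.client("amp")

# Create a workspace to hold metrics, rules, and alert manager config.
workspace = amp.create_workspace(alias="production-metrics")
workspace_id = workspace["workspaceId"]

# Recording rules use the standard Prometheus rules-file format.
rules_yaml = b"""
groups:
  - name: example
    rules:
      - record: job:http_requests:rate5m
        expr: rate(http_requests_total[5m])
"""

amp.create_rule_groups_namespace(
    workspaceId=workspace_id,
    name="example-rules",
    data=rules_yaml,
)
```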

Amazon Managed Service for Prometheus is generally available in the following Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). To get started, check out the user guide and AWS News Blog. To learn more, visit the Amazon Managed Service for Prometheus product page and pricing page.

    » AWS IoT Events is available in the AWS GovCloud (US-West) Region

    Posted On: Sep 29, 2021

    AWS IoT Events is now available in the AWS GovCloud (US-West) Region.

    AWS IoT Events is a fully managed service that makes it easy to detect and respond to changes indicated by IoT sensors and applications. For example, you can use AWS IoT Events to detect malfunctioning machinery, a stuck conveyor belt, or a slowdown in production output. When an event is detected, AWS IoT Events automatically triggers actions or alerts so that you can resolve issues quickly, reduce maintenance costs, and increase operational efficiency.

    Detecting events based on data from thousands of devices requires companies to write code to evaluate the data, deploy infrastructure to host the code, and secure the architecture from end-to-end, which is undifferentiated heavy lifting that customers want to avoid. Using AWS IoT Events, customers can now easily detect events like this at scale by analyzing data from a single sensor or across thousands of IoT sensors and hundreds of equipment management applications in near real time. With AWS IoT Events, customers use a simple interface to create detectors that evaluate device data and trigger AWS Lambda functions or notifications via Amazon Simple Notification Service (SNS) in response to events. For example, when temperature changes indicate that a freezer door is not sealing properly, AWS IoT Events can automatically trigger a text message to a service technician to address the issue.

    To get started with AWS IoT Events, launch a sample detector model and test inputs to it from the AWS IoT Events console. For a full list of AWS Regions where AWS IoT Events is available, visit the AWS Region table. Visit our AWS IoT website to learn more about AWS IoT services.

    » AWS App Mesh is now available in the AWS China (Beijing) Region and AWS China (Ningxia) Region

    Posted On: Sep 29, 2021

    AWS App Mesh is now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and Amazon Web Services China (Ningxia) Region, operated by NWCD. AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.

    For more information, please visit the AWS App Mesh product page.

    » AWS Device Farm announces support for testing web apps on Microsoft Edge browser

    Posted On: Sep 28, 2021

    AWS Device Farm’s Desktop Browser Testing feature lets you test your web applications on different versions of Chrome, Firefox, and Internet Explorer browsers. With today’s launch, we are adding support for the Microsoft Edge browser.

    You can now start testing your web apps on different desktop versions of the Microsoft Edge browser by simply changing a few lines of code in your existing Selenium tests running on Device Farm. With Device Farm, you only pay for the time your tests are executing on a browser; and can test concurrently on multiple browser instances without having to incur any additional costs. For every browser instance your test is executed on, Device Farm generates videos, action logs, console logs, and WebDriver logs so you can quickly identify and fix issues with your web applications.
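For existing Selenium tests, the change really is a few lines. A hedged sketch using boto3 and Selenium (project ARN is a placeholder; Desktop Browser Testing is served from us-west-2):

```python
import boto3
from selenium import webdriver

df = boto3.client("devicefarm", region_name="us-west-2")

# Device Farm hands back a short-lived remote WebDriver hub URL.
grid = df.create_test_grid_url(
    projectArn="arn:aws:devicefarm:us-west-2:123456789012:testgrid-project:example",
    expiresInSeconds=300,
)

# Switching to Edge is just a capability change.
driver = webdriver.Remote(
    command_executor=grid["url"],
    desired_capabilities={"browserName": "MicrosoftEdge"},
)
driver.get("https://example.com")
driver.quit()
```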

    To learn more about the feature and to get started please visit our documentation.

    » Amazon EC2 now offers Global View on the console to view all resources across regions together

    Posted On: Sep 28, 2021

You can now view your AWS resources, such as Instances, VPCs, Subnets, Security Groups, and Volumes, across AWS Regions in a single view. Previously, finding specific resources, monitoring their status, or taking inventory in the console was manual and time consuming. You had to know which Region a particular instance resided in, or manually switch across multiple Regions to look for it. Global View provides visibility into all your resources across AWS Regions in a single pane of glass. It helps you monitor resource counts, notice abnormalities sooner, and find stray resources.

    Global View can be accessed from the EC2 and VPC Consoles. It provides the ability to search across resources by resource ID or tag values. You can filter resources by regions or resource type. Once a resource is located, you can access the existing management screens of the selected resource in the right region.

This capability is currently available for five resource types (Instances, VPCs, Subnets, Security Groups, and Volumes) in all AWS Regions except the Amazon Web Services China (Beijing) Region, the Amazon Web Services China (Ningxia) Region, and AWS GovCloud (US). To learn more about Global View, please refer to our documentation here.

    » Amazon Connect Wisdom is now generally available

    Posted On: Sep 27, 2021

    Amazon Connect Wisdom, now generally available, delivers agents the information they need to solve customer issues as they’re actively speaking with customers. Contact centers often use knowledge management systems and document repositories that are separate from their agents' desktop application, which forces agents to spend valuable time searching for answers while speaking with customers, leading to poor customer experiences.

    With Amazon Connect Wisdom, agents can search across connected repositories from within their agent desktop to find answers quickly. Wisdom connects to knowledge repositories with built-in connectors for third-party applications including Salesforce and ServiceNow. Customers can also ingest content from other knowledge stores using the Wisdom ingestion APIs. Contact centers can easily provide agents access to Wisdom using the Connect agent application or by embedding Wisdom into their own agent application using Amazon Connect Streams, a browser-based contact center integration API.

    When used with Contact Lens for Amazon Connect, Amazon Connect Wisdom leverages machine learning powered speech analytics to automatically detect customer issues during calls and then recommend content in real-time to help resolve the issue, so that agents don't have to manually search. For example, when a customer calls a business about a malfunctioning washing machine, Amazon Connect Wisdom detects the reference to a broken part and presents the agent a suggested article on warranty claim procedures.

    Amazon Connect Wisdom is available in the US East (N. Virginia), US West (Oregon), Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS regions. To enable real-time recommendations, you must use Contact Lens for Amazon Connect. To learn more about Amazon Connect Wisdom, please visit the Amazon Connect Wisdom website or see the help documentation.

    » Amazon Connect now offers, in Public Preview, high-volume outbound communications for calls, texts, and emails

    Posted On: Sep 27, 2021

Starting today, Amazon Connect offers high-volume outbound communications for calls, texts, and emails in public preview. Amazon Connect now offers organizations a simple, embedded, cost-effective way to contact millions of customers daily for communications like marketing promotions, appointment reminders, and upcoming delivery notifications, without having to integrate third-party tools. Contact center managers can easily schedule and launch high-volume outbound communications by simply specifying the communications channel, contact list, and content that will be sent to customers. Today, many businesses are constrained by legacy contact center technologies that only allow inbound communications and rely on separate applications and tools to reach customers with outbound communications. Integrating tools for outbound communications into contact centers is time consuming, expensive, and difficult to manage because each outbound communication channel (calls, texts, or emails) requires separate applications, resulting in a solution that lacks flexibility and is difficult to scale to high volumes.

The new communication capabilities include a predictive dialer that automatically calls customers in a list. The integrated list management capability is provided by Amazon Pinpoint. The dialer also uses machine learning-powered answering machine detection to distinguish between a live customer, a voicemail greeting, or a busy signal, increasing agent efficiency by connecting agents only to live customers. For example, a large hospital healthcare network can send texts and emails asking patients to confirm upcoming appointments, and then automatically call all patients who fail to respond. High-volume outbound communications give companies the ability to communicate with their customers across channels more efficiently and at scale, without having to perform difficult and expensive third-party integrations.

    Amazon Connect high-volume outbound communications is now available in US East (N. Virginia), US West (Oregon), and Europe (London) AWS regions. To learn more about Amazon Connect High-volume outbound communications, visit our product page or refer to our Admin Guide. Sign up for the preview here.

    » Now auto-terminate idle EMR clusters to lower cost

    Posted On: Sep 27, 2021

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Today, we are excited to announce that Amazon EMR now supports auto-terminating idle EMR clusters, a new feature that automatically terminates your EMR cluster when it has been idle, reducing cost without the need to manually monitor cluster activity. You can specify the idle timeout value when enabling auto-termination for both existing and new clusters, and EMR will automatically terminate the cluster once it has been idle for the specified time.

    With this feature, EMR continuously samples key metrics associated with the workloads running on the clusters, and auto-terminates when the cluster is idle. This feature is available when launching EMR clusters from EMR Console, AWS CLI, and AWS SDK.
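A hedged sketch of attaching an auto-termination policy to an existing cluster with boto3; the cluster ID is a placeholder and the timeout is in seconds.

```python
import boto3

emr = boto3.client("emr")

# Terminate the cluster after one hour (3600 seconds) of idleness.
emr.put_auto_termination_policy(
    ClusterId="j-EXAMPLE12345",
    AutoTerminationPolicy={"IdleTimeout": 3600},
)
```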

    Auto-termination of idle EMR clusters is available on Amazon EMR release version 5.30.0 and 6.1.0 and later. You can use this feature across 16 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).

    To get started, read our auto-terminating idle EMR clusters documentation here.

    » Amazon Connect Voice ID is now generally available

    Posted On: Sep 27, 2021

Amazon Connect Voice ID is a machine learning (ML) powered voice authentication feature for Amazon Connect that makes voice interactions in contact centers more secure and efficient. Historically, contact centers have used a time-consuming knowledge-based authentication process in which callers have to answer multiple questions based on personal details, such as their social security number or date of birth. Amazon Connect Voice ID analyzes a caller's unique voice characteristics using machine learning to verify identity in real time without changing the natural flow of the conversation. This helps improve agent productivity and reduce contact center operating costs. Amazon Connect Voice ID also detects fraudsters in real time from a custom watchlist for a contact center instance, improving the security of contact center operations.

Amazon Connect Voice ID's authentication and fraud risk detection can be enabled in an Amazon Connect instance using a simple console interface. Drag-and-drop integration with Amazon Connect contact flows makes it easy to set up and configure Voice ID, providing a high degree of control and flexibility in managing Interactive Voice Response sequences for both authentication and fraud risk detection. Amazon Connect Voice ID exposes these capabilities to agents in the Connect agent application, making it simple to optimize agents' time in verifying a caller's identity. With Amazon Connect Voice ID, you only pay for what you use, based on the number of enrollment, authentication, or fraud risk detection transactions. There are no required up-front payments, long-term commitments, or minimum monthly fees. Please visit the Amazon Connect pricing page for more details.

    Amazon Connect Voice ID is now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Europe (London) AWS regions. To learn more about Amazon Connect Voice ID visit our webpage or read the documentation.

    » AWS Backup simplifies recovery point deletions

    Posted On: Sep 27, 2021

    AWS Backup now makes it easier to delete recovery points that customers no longer need. Customers can use the new asynchronous delete operation from the console, CLI or APIs, to clean up existing recovery points in bulk and manage their backups more cost-effectively.
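A minimal boto3 sketch of cleaning up a vault's recovery points programmatically; the vault name is a placeholder, and deletion runs asynchronously on the service side.

```python
import boto3

backup = boto3.client("backup")

# List and delete every recovery point in a vault.
points = backup.list_recovery_points_by_backup_vault(BackupVaultName="my-vault")
for rp in points["RecoveryPoints"]:
    backup.delete_recovery_point(
        BackupVaultName="my-vault",
        RecoveryPointArn=rp["RecoveryPointArn"],
    )
```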

    This capability is available in all the AWS Regions where AWS Backup is supported. See the AWS Region Table for more information. AWS Backup centralizes and automates data protection across AWS services with compliance auditing and reporting capabilities for your business and regulatory needs. To learn more about AWS Backup, visit the product page and documentation. To get started, visit the AWS Backup console.

    » AWS Launch Wizard now supports Microsoft SQL Server Always On deployments on Red Hat Enterprise Linux

    Posted On: Sep 27, 2021

    Following the launch of Red Hat Enterprise Linux with Microsoft SQL Server for Amazon EC2, you can now easily deploy RHEL SQL Server Always On availability groups using AWS Launch Wizard.

AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third-party applications such as Microsoft SQL Server Always On, allowing you to get your deployments up and running within a few hours. With this launch, you can now leverage the same ease of use to perform SQL Server Always On deployments on Red Hat Enterprise Linux, without the need to manually provision and configure individual AWS resources.

    AWS Launch Wizard for SQL Server is available at no additional charge. You only pay for the AWS resources that are provisioned for running your SQL Server application. To learn more about using AWS Launch Wizard to accelerate your SQL Server Always On deployments, visit the AWS Launch Wizard page and overview documentation.

    » Introducing AWS WAF Security Automations v3.2

    Posted On: Sep 27, 2021

    The AWS Solutions team recently updated AWS WAF Security Automations, a solution that automatically deploys a set of AWS WAF (web application firewall) rules that filter common web-based attacks. Users can select from preconfigured protective features that define the rules included in an AWS WAF web access control list (web ACL). Once deployed, AWS WAF protects your Amazon CloudFront distributions or Application Load Balancers by inspecting web requests.

The updated solution now supports IP retention on the Allowed and Denied IP sets. Customers can enter a retention period for each of these IP sets to activate the feature. Once activated, IP addresses are removed from their IP set after the defined retention period is reached. This gives customers more flexibility in managing these IP sets.

    Additional AWS Solutions Implementations offerings are available on the AWS Solutions page, where customers can browse common questions by category to find answers in the form of succinct Solution Briefs or comprehensive Solution Implementations, which are AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » Application Load Balancer now enables AWS PrivateLink and static IP addresses by direct integration with Network Load Balancer

    Posted On: Sep 27, 2021

    Elastic Load Balancing now supports forwarding traffic directly from Network Load Balancer (NLB) to Application Load Balancer (ALB). With this feature, you can now use AWS PrivateLink and expose static IP addresses for applications built on ALB.

    ALB is a managed layer 7 proxy that provides advanced request-based routing. NLB operates at layer 4 and provides support for PrivateLink and zonal static IP addresses. PrivateLink enables private connectivity to your applications without exposing traffic to the public internet, and zonal static IP addresses can simplify your clients’ network and security configurations. Prior to this launch, customers who wanted to integrate NLB with ALB had to set up custom mechanisms, like AWS Lambda functions, to manage ALB IP address changes. With this launch, you can register ALB as a target of NLB to forward traffic from NLB to ALB without needing to actively manage ALB IP address changes, allowing you to combine the benefits of NLB, including PrivateLink and zonal static IP addresses, with the advanced request-based routing of ALB to load balance traffic to your applications. ALB-type target groups are available for NLBs in all commercial AWS Regions and AWS GovCloud (US) Regions.

    To get started, create a new ALB-type target group, register your ALB, and configure your NLB to forward traffic to the ALB-type target group. You can also enable PrivateLink on your NLB to provide your services privately to clients. Please visit the blog post and the NLB documentation to learn more.
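A minimal boto3 sketch of that flow: create an ALB-type target group, then register an existing ALB as its target. The VPC ID and load balancer ARN are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# ALB-type target groups use TCP; NLB forwards traffic to the registered ALB.
tg = elbv2.create_target_group(
    Name="my-alb-target-group",
    TargetType="alb",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# For the "alb" target type, the target ID is the ALB's ARN.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[
        {
            "Id": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
            "Port": 80,
        }
    ],
)
```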

    » Amazon RDS for Oracle now supports Oracle Application Express (APEX) Version 21.1

    Posted On: Sep 27, 2021

    Amazon Relational Database Service (RDS) for Oracle now supports version 21.1 of Oracle Application Express (APEX) for 12.1, 12.2 and 19c versions of Oracle Database. Using APEX, developers can build applications entirely within their web browser. To learn more about the latest features of APEX 21.1, please refer to Oracle’s blog post.

    For more details on supported APEX versions and how to add or modify APEX options for your RDS Oracle database, please refer to the Amazon RDS for Oracle APEX Documentation.
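APEX is enabled through an option group on the DB instance. A hedged boto3 sketch; the option group name is a placeholder, and the exact OptionVersion string ("21.1.v1" is assumed here, following RDS's usual naming) should be confirmed with describe_option_group_options.

```python
import boto3

rds = boto3.client("rds")

# Add APEX 21.1 (runtime plus development environment) to an option group.
rds.modify_option_group(
    OptionGroupName="my-oracle-19c-options",
    OptionsToInclude=[
        {"OptionName": "APEX", "OptionVersion": "21.1.v1"},  # version string assumed
        {"OptionName": "APEX-DEV"},
    ],
    ApplyImmediately=True,
)
```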

    Amazon RDS for Oracle makes it easy to set up, operate, and scale Oracle Database deployments in the cloud. See Amazon RDS for Oracle Database Pricing for regional availability.

» Amazon RDS for Oracle now supports sqlnet.ora client parameters for the Oracle Native Network Encryption (NNE) option

    Posted On: Sep 27, 2021

    Amazon Relational Database Service (Amazon RDS) for Oracle now supports four new customer modifiable sqlnet.ora client parameters for the Oracle Native Network Encryption (NNE) option. Amazon RDS for Oracle already supports server parameters which define encryption properties for incoming sessions. These client parameters apply to outgoing connections such as those used by database links.

    You can use the SQLNET.ENCRYPTION_CLIENT parameter to turn encryption on for the client, SQLNET.ENCRYPTION_TYPES_CLIENT to specify a list of encryption algorithms for the client to use, SQLNET.CRYPTO_CHECKSUM_CLIENT to specify the checksum behavior for the client, and SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT to specify a list of crypto-checksum algorithms for the client to use.

    Amazon RDS for Oracle has discontinued SHA1 and MD5 from the default list of ciphers. The recommended ciphers to use are SHA256, SHA384, SHA512 in the NNE option. If you need to use SHA1 and MD5, you have to explicitly set “SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER” and “SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT” values to use “SHA1” or “MD5” in the “options” parameter for the active DB connection to work.
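A hedged sketch of setting the four client parameters as option settings on the NNE option with boto3; the option group name is a placeholder, and the setting names mirror the parameters described above.

```python
import boto3

rds = boto3.client("rds")

# Configure client-side NNE behavior for outgoing connections (e.g., DB links).
rds.modify_option_group(
    OptionGroupName="my-oracle-nne-options",
    OptionsToInclude=[
        {
            "OptionName": "NATIVE_NETWORK_ENCRYPTION",
            "OptionSettings": [
                {"Name": "SQLNET.ENCRYPTION_CLIENT", "Value": "REQUIRED"},
                {"Name": "SQLNET.ENCRYPTION_TYPES_CLIENT", "Value": "AES256"},
                {"Name": "SQLNET.CRYPTO_CHECKSUM_CLIENT", "Value": "REQUIRED"},
                {"Name": "SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT", "Value": "SHA256"},
            ],
        }
    ],
    ApplyImmediately=True,
)
```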

    You can change the settings of the sqlnet.ora parameters for the Oracle Native Network Encryption (NNE) option for the client as described in the Amazon RDS for Oracle documentation.
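
    For illustration, a hedged boto3 sketch that sets the four client parameters on the NNE option of an existing option group (the option group name and the chosen values are placeholders):

    ```python
    import boto3

    rds = boto3.client("rds")

    rds.modify_option_group(
        OptionGroupName="my-oracle-nne-options",  # placeholder option group
        OptionsToInclude=[{
            "OptionName": "NATIVE_NETWORK_ENCRYPTION",
            "OptionSettings": [
                {"Name": "SQLNET.ENCRYPTION_CLIENT", "Value": "REQUESTED"},
                {"Name": "SQLNET.ENCRYPTION_TYPES_CLIENT", "Value": "AES256"},
                {"Name": "SQLNET.CRYPTO_CHECKSUM_CLIENT", "Value": "REQUESTED"},
                {"Name": "SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT", "Value": "SHA256"},
            ],
        }],
        ApplyImmediately=True,
    )
    ```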

    Amazon RDS for Oracle makes it easy to set up, operate, and scale Oracle database deployments in the cloud. See Amazon RDS for Oracle Pricing for up-to-date pricing and regional availability.

    » Amazon Genomics CLI is now Generally Available

    Posted On: Sep 27, 2021

    Today, we announced the general availability of Amazon Genomics CLI, an open-source tool for genomics and life science customers to process genomics data at petabyte scale on AWS.

    Amazon Genomics CLI simplifies and automates the deployment of cloud resources like workflow engines and compute clusters, providing genomics and life science customers with an easy-to-use command line to quickly set up and run genomics workflows on Amazon Web Services (AWS) using workflow engines such as Cromwell and Nextflow. By removing the heavy lifting from setting up and running genomics workflows in the cloud, software developers and researchers can automatically provision, configure, and scale cloud resources to enable faster and more cost-effective population-level genetics studies, drug discovery cycles, and more.

    Amazon Genomics CLI is available in most commercial Regions at launch, except China, AWS GovCloud (US), and air-gapped regions.

    To learn more about and get started with Amazon Genomics CLI, visit:

  • News blog 
  • Product detail page 
  • GitHub repository

    » Amazon QuickSight Q is now generally available

    Posted On: Sep 24, 2021

    Today, we are excited to announce the general availability of Amazon QuickSight Q. Q is a machine learning-powered natural language capability that gives anyone in an organization the ability to ask business questions in natural language and receive accurate answers with relevant visualizations. For example, users simply type “what is our year-over-year growth rate” and get an instant answer in QuickSight as a visualization.

    Previously, when business users couldn’t find an answer to their question from their data dashboards, they had to submit ad-hoc requests to their BI teams, which could take several weeks to complete. With Amazon QuickSight Q, business users can now get answers to their questions instantly and reduce the burden on their BI teams.

    Amazon QuickSight Q is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Europe (London).

    Learn more about Q in the AWS News Blog here.

    » AWS IoT Device Defender now supports Detect alarm verification states

    Posted On: Sep 24, 2021

    With AWS IoT Device Defender, customers can now verify an alarm based on their investigation of detected behavior anomalies. They can verify an alarm as True positive, Benign positive, False positive, or Unknown and provide a description of their verification. Users, such as a security or operational team, can use this to manage alarms and improve response time.

    Customers can view or filter AWS IoT Device Defender Detect alarms using one of the four verification states. They can mark alarm verification states so that other members of their team can take follow-up actions (for example, performing mitigation actions on ‘True positive’ alarms, skipping ‘Benign positive’ alarms, or continuing investigation on ‘Unknown’ alarms). Additionally, they can verify an alarm as ‘False positive’ to let AWS know that they believe AWS IoT Device Defender identified behavior anomalies incorrectly.
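
    In the API, verification maps to a single call. A minimal boto3 sketch, assuming a violation ID taken from a Detect alarm:

    ```python
    import boto3

    iot = boto3.client("iot")

    # Mark a Detect alarm as a false positive after investigation.
    iot.put_verification_state_on_violation(
        violationId="abcd1234-example",  # placeholder violation ID
        verificationState="FALSE_POSITIVE",
        verificationStateDescription="Traffic spike traced to scheduled OTA update.",
    )
    ```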

    Detect alarm verification states are available in all AWS Regions where AWS IoT Device Defender is available. To learn more, see the AWS IoT Device Defender developer guide.

    » AWS Storage Gateway simplifies tape management for Tape Gateway

    Posted On: Sep 24, 2021

    AWS Storage Gateway now makes it easier and faster for you to search, view, and manage your virtual tapes stored in AWS using Tape Gateway. From the Tapes page in the Storage Gateway management console, you can now quickly search for tapes using common filters such as tape barcode and status. You simply select the desired filter from the drop-down menu and quickly narrow down your search to the relevant set of tapes, saving you time and management overhead. For example, to delete archived tapes exceeding your defined retention period, you can use the Archived filter, select the desired date range, and delete all tapes that match your specified filter in just a few clicks.

    Tape Gateway supports all leading backup applications and enables you to replace physical tapes on premises with virtual tapes in AWS without changing backup workflows. Tape Gateway caches data on premises for low latency access, compresses and encrypts data in transit to AWS, and transitions virtual tapes to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive to help you minimize storage costs.

    This capability is available starting today in all AWS Regions. Visit the Storage Gateway User Guide to learn more, or log in to the Storage Gateway console to get started.

    » Amazon ElastiCache for Redis now supports auto scaling in 17 additional public regions

    Posted On: Sep 24, 2021

    Amazon ElastiCache for Redis auto scaling is now generally available in all public AWS regions excluding AWS GovCloud (US) and AWS China (Beijing and Ningxia) Regions.

    Amazon ElastiCache for Redis auto scaling enables you to automatically adjust capacity to maintain steady, predictable performance at lower costs. It uses AWS Application Auto Scaling to manage scaling and Amazon CloudWatch metrics to determine when it is time to scale up or down.

    ElastiCache for Redis supports target tracking and scheduled auto scaling policies. With target tracking, you define a target metric and ElastiCache for Redis adjusts resource capacity in response to live changes in resource utilization. The intention is to provide enough capacity to maintain utilization at the target value specified. For instance, when memory utilization rises, ElastiCache for Redis will add nodes to your cluster to increase memory capacity and reduce utilization back to the target level. This enables your cluster to adjust capacity automatically to maintain high performance. Conversely, when memory utilization drops below the target amount, ElastiCache for Redis will remove nodes from your cluster to reduce over-provisioning and lower costs. With scheduled scaling, you can set specific days and times for ElastiCache to scale your cluster to accommodate predictable workload capacity changes.
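
    Because this is built on AWS Application Auto Scaling, the setup follows that service's standard pattern. A sketch with boto3, assuming a replication group named my-redis and a target-tracking policy on the predefined memory metric:

    ```python
    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the replication group's shard count as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="elasticache",
        ResourceId="replication-group/my-redis",  # placeholder replication group
        ScalableDimension="elasticache:replication-group:NodeGroups",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Target-tracking policy: keep memory utilization near 60 percent.
    aas.put_scaling_policy(
        PolicyName="redis-memory-target",
        ServiceNamespace="elasticache",
        ResourceId="replication-group/my-redis",
        ScalableDimension="elasticache:replication-group:NodeGroups",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage"
            },
        },
    )
    ```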

    Learn more about auto scaling in ElastiCache for Redis on the feature page or the ElastiCache documentation.

    » Now Use Lifecycle Configurations to Customize Amazon SageMaker Studio

    Posted On: Sep 24, 2021

    Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps required to prepare data, as well as build, train, and deploy models. With a single click, data scientists and ML developers can quickly spin up SageMaker Studio Notebooks for exploring datasets and building models. Now you can use lifecycle configurations to automate customizations for your Studio development environment.

    Lifecycle configurations are shell scripts triggered by SageMaker Studio lifecycle events, for example, starting a new Studio notebook. You can use the scripts to customize Studio, for example, install custom packages, configure notebook extensions, preload datasets, and set up source code repositories. Lifecycle configurations, in conjunction with the capability to bring your own container image to SageMaker Studio, give you complete flexibility and control to configure Studio to meet your specific needs. For example, you can create a minimal set of base container images with the most commonly used packages and libraries, and then use lifecycle configurations to install additional packages for specific use cases across your data science and ML teams.

    The lifecycle configurations feature is now available in all AWS regions where SageMaker Studio is available. You can create lifecycle configurations and attach them to your Studio domain or to an individual user using the AWS CLI and AWS SDKs. You can quickly get started using our sample scripts and examples. To learn more about this new capability, visit the SageMaker Studio user guide.
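
    As a minimal boto3 sketch (the script body, names, and domain ID are placeholders), a lifecycle configuration is created from a base64-encoded shell script and attached as a domain-wide default:

    ```python
    import base64
    import boto3

    sm = boto3.client("sagemaker")

    script = "#!/bin/bash\npip install --upgrade scikit-learn\n"  # placeholder script

    lcc = sm.create_studio_lifecycle_config(
        StudioLifecycleConfigName="install-sklearn",
        StudioLifecycleConfigContent=base64.b64encode(script.encode()).decode(),
        StudioLifecycleConfigAppType="KernelGateway",
    )

    # Attach the configuration as a default for all users in the Studio domain.
    sm.update_domain(
        DomainId="d-exampledomain",  # placeholder domain ID
        DefaultUserSettings={
            "KernelGatewayAppSettings": {
                "LifecycleConfigArns": [lcc["StudioLifecycleConfigArn"]]
            }
        },
    )
    ```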

    » AWS announces availability of Microsoft Windows Server 2022 images on Amazon EC2

    Posted On: Sep 24, 2021

    AWS customers can now use managed Amazon Machine Images (AMIs) with Microsoft Windows Server 2022. With these AMIs, customers can launch Windows Server 2022 and take full advantage of the latest Windows features on AWS. Amazon EC2 makes launching and running Windows Server 2022 easy whether it is for testing and enabling new features in the software, or for scalable adoption of the latest capabilities on compute instances globally.

    Amazon creates and manages Microsoft Windows Server 2022 AMIs, providing a reliable and quick way to launch Windows Server 2022 on EC2 instances. By running Windows Server 2022 on Amazon EC2 instances, you can experience the improved security, performance, and reliability of Windows Server 2022 together with the enterprise-focused cloud services on AWS. You can either use the managed stock Windows Server 2022 AMIs or create your own custom AMIs tailored to your requirements from these managed AMIs.

    Customers can find and launch instances directly from the Amazon EC2 Console or through API or CLI commands. These AMIs can be used with all available pricing options for EC2 instances and are enabled across all public, AWS GovCloud (US), and China Regions. For more details on getting the best out of Amazon EC2 instances running Windows Server 2022, check out the Windows on AWS page and the guide on AWS Windows AMIs.
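
    One convenient way to locate the current managed AMI is the public SSM parameter namespace for Windows AMIs. The sketch below resolves the latest Windows Server 2022 base image and launches it (the instance type is a placeholder, and the parameter name follows the documented naming scheme):

    ```python
    import boto3

    ssm = boto3.client("ssm")
    ec2 = boto3.client("ec2")

    # Resolve the latest Amazon-managed Windows Server 2022 AMI ID.
    ami_id = ssm.get_parameter(
        Name="/aws/service/ami-windows-latest/Windows_Server-2022-English-Full-Base"
    )["Parameter"]["Value"]

    ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.large",  # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )
    ```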

    » Amazon Macie adds support for selecting managed data identifiers

    Posted On: Sep 23, 2021

    Amazon Macie now allows you to select which managed data identifiers to use when you create a sensitive data discovery job. This allows you to customize which data types you deem sensitive and would like Macie to alert on, per your organization’s specific data governance and privacy needs. When you create a job, choose from the growing list of managed data identifiers, such as personally identifiable information (PII), financial information, or credential materials, that you would like to target for each sensitive data discovery job you configure and run with Macie.

    Amazon Macie uses a combination of criteria and techniques, including machine learning and pattern matching, to detect sensitive data. These criteria and techniques, referred to as managed data identifiers, can detect a large and growing list of sensitive data types for many countries and regions, including multiple types of financial data, personal health information (PHI), and personally identifiable information (PII). Each managed data identifier is designed to detect a specific type of sensitive data—for example, credit card numbers, AWS secret keys, or passport numbers for a particular country or region. When you create a sensitive data discovery job, you can configure the job to use these identifiers to analyze objects in Amazon Simple Storage Service (Amazon S3) buckets that you specify.
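
    A hedged boto3 sketch: list the available managed data identifiers, then create a one-time job that scans a bucket using only a chosen subset (the account ID, bucket, and identifier ID are placeholders; verify real IDs via the list call):

    ```python
    import boto3

    macie = boto3.client("macie2")

    # Discover the IDs of the managed data identifiers Macie currently offers.
    ids = macie.list_managed_data_identifiers()["items"]
    print([i["id"] for i in ids])

    # Run a one-time discovery job restricted to a subset of identifiers.
    macie.create_classification_job(
        jobType="ONE_TIME",
        name="scan-for-credentials",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "111122223333", "buckets": ["my-bucket"]}  # placeholders
            ]
        },
        managedDataIdentifierSelector="INCLUDE",
        managedDataIdentifierIds=["CREDIT_CARD_NUMBER"],  # example ID; confirm via list call
    )
    ```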

    Getting started with Amazon Macie is fast and easy with one click in the AWS Management Console or with a single API call. In addition, Macie has multi-account support using AWS Organizations, which makes it easy for you to enable Macie across all of your AWS accounts. Once enabled, Macie automatically gathers a complete S3 inventory at the bucket level and automatically and continually evaluates every bucket to alert on any publicly accessible buckets, unencrypted buckets, or buckets shared or replicated with AWS accounts outside of a customer’s organization. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as names, addresses, credit card numbers, or credential materials. This can help you comply with regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and General Data Privacy Regulation (GDPR).

    Amazon Macie comes with a 30-day free trial for S3 bucket level inventory and evaluation of access control and encryption. Sensitive data discovery is free for the first 1 GB per account per region each month with additional scanning charged according to the Amazon Macie pricing plan. To learn more, see the Amazon Macie documentation page.

    » Amazon Connect Customer Profiles adds product purchase history to personalize customer interactions

    Posted On: Sep 23, 2021

    Amazon Connect Customer Profiles now supports out-of-the-box integration with product purchase history from Salesforce. When a customer calls or messages a contact center for service, Amazon Connect Customer Profiles equips contact center agents with the customer information they need to deliver personalized customer service and resolve issues quickly. Customer Profiles helps make it simple to bring together customer information (e.g., name, address, phone number, contact history, purchase history, open issues) from multiple applications into a unified customer profile, delivering the profile directly to the agent as soon as they begin interacting with the customer. If an agent wants to understand previous interactions to service a customer, they can visit the Contact Trace Record (CTR) details page by clicking “CTR Details” to review information such as call categorization, call sentiments, and transcripts. Customer Profiles can be used out of the box by agents or embedded in your existing agent application.

    Amazon Connect Customer Profiles is available in Europe (London), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), US West (Oregon), Canada (Central) and US East (N. Virginia). To learn more about Amazon Connect Customer Profiles please visit our website.

    » AWS WAF now offers in-line regular expressions

    Posted On: Sep 23, 2021

    AWS WAF extends its regular expression (regex) support, allowing regex patterns to be expressed in-line within a rule statement. Previously, you had to create a regex pattern set, which provides a collection of regex patterns in a rule statement, even if you wanted to use just a single regex pattern in your WAF rule logic. With in-line regex, you can now include a single regex pattern directly inside a WAF rule statement, simplifying how WAF rules are expressed within your web ACL.

    In addition, in-line regex patterns may consume fewer web ACL capacity units (WCUs): each in-line pattern consumes 3 WCUs, whereas a regex pattern set consumes 25 WCUs. For example, if you want to use a regular expression in a scope-down statement to apply AWS WAF Bot Control to dynamic content only, you can save on WCUs by using an in-line regex pattern instead of a regex pattern set.
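
    In the wafv2 API, this surfaces as a RegexMatchStatement inside a rule. A sketch of such a rule definition for use with create_web_acl or update_web_acl (the rule name, pattern, and metric name are illustrative):

    ```python
    # Rule fragment for the boto3 wafv2 client (create_web_acl / update_web_acl).
    rule = {
        "Name": "match-dynamic-api-paths",
        "Priority": 0,
        "Statement": {
            "RegexMatchStatement": {
                "RegexString": r"^/api/v\d+/",       # in-line pattern, no pattern set needed
                "FieldToMatch": {"UriPath": {}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            }
        },
        "Action": {"Count": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "MatchDynamicApiPaths",
        },
    }
    ```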

    There is no additional cost for using regex patterns in rule statements, but standard service charges for AWS WAF still apply. Support for in-line regex match is available in all AWS WAF regions and for each supported service, including Amazon CloudFront, Application Load Balancer, Amazon API Gateway, and AWS AppSync. For more information, see the AWS WAF developer guide.

    » Amazon Simple Email Service is now available in the Asia Pacific (Osaka) Region

    Posted On: Sep 23, 2021

    Amazon Simple Email Service (Amazon SES) is now available in the Asia Pacific (Osaka) AWS Region. Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page.

    With this launch, Amazon SES is available in 21 AWS regions globally: US East (N. Virginia, Ohio), US West (N. California, Oregon), AWS GovCloud (US-West), Asia Pacific (Mumbai, Sydney, Singapore, Seoul, Tokyo, Osaka), Canada (Central), EU (Ireland, Frankfurt, London, Paris, Stockholm, Milan), Middle East (Bahrain), South America (Sao Paulo), and Africa (Cape Town).

    For a complete list of all of the regional endpoints for Amazon SES, see AWS Service Endpoints in the AWS General Reference.

    » Amazon ElastiCache now supports M6g and R6g Graviton2-based instances in additional regions

    Posted On: Sep 23, 2021

    Amazon ElastiCache for Redis and Memcached now support the Graviton2 M6g and R6g instance families in additional regions: South America (Sao Paulo), Asia Pacific (Hong Kong, Seoul), Europe (London, Stockholm), Canada (Central), AWS GovCloud (US-East), AWS GovCloud (US-West), and mainland China (Ningxia, Beijing). Customers choose Amazon ElastiCache for workloads that require blazing-fast performance with sub-millisecond latency and high throughput. Now, with Graviton2 M6g and R6g instances, customers can enjoy up to a 45% price/performance improvement over previous generation instances. Graviton2 instances are now the default choice for Amazon ElastiCache customers.

    AWS Graviton2 processors are custom built by AWS using 64-bit Arm Neoverse cores to deliver the best price performance for your cloud workloads. Amazon ElastiCache Graviton2 instances are available in sizes large to 16xlarge to offer memory flexibility. The Amazon ElastiCache memory optimized R6g offers memory from 13.07 GiB to 419.10 GiB and the general purpose M6g cache instance offers memory that ranges from 6.38 GiB to 209.55 GiB. The Graviton2-based instances leverage the AWS Nitro System and ENA (Elastic Network Adapter) to deliver higher network bandwidths on smaller instances compared to R5 and M5 instance families. Amazon ElastiCache Graviton2 instances also come with the latest Amazon Linux 2, which replaces the soon-to-be deprecated Amazon Linux 1. Furthermore, Graviton2 instances support the latest versions of Redis and Memcached with an upgrade from previous generation instances.

    M6g and R6g instances are now available for Amazon ElastiCache in the US East (N. Virginia, Ohio, and GovCloud), US West (Oregon, N. California, and GovCloud), Canada (Central), Europe (Frankfurt, Ireland, Stockholm, and London), South America (Sao Paulo), Mainland China (Beijing and Ningxia), and Asia Pacific (Singapore, Sydney, Tokyo, Seoul, Hong Kong, and Mumbai) Regions. You can purchase these instances as On-Demand or as Reserved Nodes with 1- or 3-year commitment periods. For complete information on pricing and regional availability, please refer to the Amazon ElastiCache pricing page or review our technical documentation for additional details, including upgrading your existing instances.

    » AQUA is now available for Amazon Redshift RA3.xlplus nodes

    Posted On: Sep 23, 2021

    AQUA (Advanced Query Accelerator) for Amazon Redshift is now generally available for Amazon Redshift RA3.xlplus nodes.

    AQUA is a new distributed and hardware-accelerated cache that enables Amazon Redshift to run up to 10x faster than other enterprise cloud data warehouses by automatically boosting certain types of queries. AQUA uses AWS-designed processors with AWS Nitro chips adapted to speed up data encryption and compression, and custom analytics processors, implemented in FPGAs, to accelerate operations such as scans, filtering, and aggregation. Amazon Redshift RA3 is the latest generation node type that allows you to scale compute and storage for your data warehouses independently. The RA3 node family includes RA3.16xlarge, RA3.4xlarge, and RA3.xlplus nodes for large, medium, and small workloads.

    AQUA is now available with the RA3.16xlarge, RA3.4xlarge, or RA3.xlplus nodes at no additional charge and with no code changes. You can enable AQUA for your existing Redshift RA3 clusters or launch a new AQUA enabled RA3 cluster via the AWS Management Console, API, or CLI. To learn more about AQUA, visit the documentation.
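
    Enabling AQUA on an existing RA3 cluster is a one-call change; a boto3 sketch with a placeholder cluster identifier:

    ```python
    import boto3

    redshift = boto3.client("redshift")

    # Turn AQUA on for an existing RA3 cluster ("auto" and "disabled" are also accepted).
    redshift.modify_aqua_configuration(
        ClusterIdentifier="my-ra3-cluster",  # placeholder cluster
        AquaConfigurationStatus="enabled",
    )
    ```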

    AQUA is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions, and will expand to additional regions in the coming months.

    » Announcing General Availability of Tracing Support in AWS Distro for OpenTelemetry

    Posted On: Sep 23, 2021

    Today, we are announcing the general availability of AWS Distro for OpenTelemetry (ADOT) for tracing, a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. With this launch, customers can use OpenTelemetry APIs and SDKs in Java, .NET, Python, Go, and JavaScript to collect and send traces to AWS X-Ray and monitoring destinations supported by the OpenTelemetry Protocol (OTLP).

    Part of the Cloud Native Computing Foundation, OpenTelemetry provides open source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring. With AWS Distro for OpenTelemetry, you can instrument your applications just once to send correlated metrics and traces to multiple monitoring solutions and use auto-instrumentation agents to collect traces without changing your code. AWS Distro for OpenTelemetry also collects metadata from your AWS resources and managed services, so you can correlate application performance data with underlying infrastructure data, reducing the mean time to problem resolution. Use AWS Distro for OpenTelemetry to instrument your applications running on Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS) on EC2 and AWS Fargate, in AWS Lambda functions, as well as on premises.

    For your observability needs, you can choose AWS X-Ray as your tracing destination or one of the AWS partner destinations. You can configure and deploy the latest version of the AWS Distro for OpenTelemetry for container services and Amazon EC2 by using AWS CloudFormation templates, the AWS Command Line Interface, or Kubectl commands. Developers can use auto-instrumentation in Java and Python as well as OpenTelemetry SDKs to instrument their applications for collecting correlated metrics and traces. In addition, you can add the AWS managed Lambda layer for ADOT to collect traces from AWS Lambda functions.
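
    As an illustrative Python sketch (package and class names follow the OpenTelemetry Python distribution; the collector endpoint is a placeholder), an application can emit X-Ray-compatible trace IDs over OTLP to a locally running ADOT Collector:

    ```python
    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.extension.aws.trace import AwsXRayIdGenerator
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    # X-Ray requires its own trace ID format, hence the AWS ID generator.
    provider = TracerProvider(id_generator=AwsXRayIdGenerator())
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("example")
    with tracer.start_as_current_span("checkout"):
        pass  # application work happens here
    ```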

    Visit our developer portal to learn more about AWS Distro for OpenTelemetry and download the latest release. Read more about the launch and instrumentation in various languages.

    » Customers can now manage AWS Service Catalog AppRegistry applications in AWS Systems Manager

    Posted On: Sep 23, 2021

    AWS Service Catalog AppRegistry and AWS Systems Manager Application Manager now provide an end-to-end AWS application management experience. With this release, customers can use AppRegistry to create applications within their infrastructure as code, CI/CD pipelines, and post-provisioning processes, and use Application Manager to view application operational data and perform operational actions. 

    Enterprises are creating, migrating, and managing thousands of applications on AWS that use hundreds of thousands of resources. Customers can now navigate directly from AppRegistry to Application Manager to view their AppRegistry application resources, monitor the application operational and compliance status, view operational items, and execute runbooks against application stacks or individual resources. AppRegistry creates a resource group for every application and CloudFormation stack associated with the application. The resource group is kept up-to-date with the application definition as resources are added and removed from the application. Application resource groups can be used with any AWS services that support resource groups, including Amazon CloudWatch Application Insights and Amazon CloudWatch automatic dashboards.
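
    A brief boto3 sketch of the registry side (the application and stack names are placeholders): create an application, then associate an existing CloudFormation stack so its resources surface in Application Manager:

    ```python
    import boto3

    appregistry = boto3.client("servicecatalog-appregistry")

    app = appregistry.create_application(name="payments-service")  # placeholder name

    # Associate a deployed CloudFormation stack with the application.
    appregistry.associate_resource(
        application=app["application"]["id"],
        resourceType="CFN_STACK",
        resource="payments-prod-stack",  # placeholder stack name
    )
    ```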

    For more information, please refer to the AWS Service Catalog AppRegistry documentation and the AWS Systems Manager Application Manager documentation. See the AWS Region Table for Region availability.

    » AWS IoT Device Defender announces Audit One-Click

    Posted On: Sep 22, 2021

    Today we are launching Audit One-Click for AWS IoT Device Defender. Audit One-Click makes it easy for AWS IoT Core customers to improve their security baseline by making it possible to start auditing their account and IoT devices against security best practices with a single click.

    Audit One-Click allows customers to turn on an AWS IoT Device Defender audit with preset configurations including enabling all available audit checks and a daily audit schedule. It also provides contextual explanations for the benefits of regular security audits. Audit One-Click is only available from the AWS IoT console.

    Customers can use Audit One-Click in all AWS Regions where AWS IoT Device Defender is available. For more information about AWS IoT Device Defender audit, see the AWS IoT Device Defender developer guide.

    » Amazon Lex is now available in the Asia Pacific (Seoul) and Africa (Cape Town) regions

    Posted On: Sep 22, 2021

    Starting today, Amazon Lex is available in the Asia Pacific (Seoul) and Africa (Cape Town) regions. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex combines advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text. This enables you to build applications with engaging user experiences and lifelike interactions. With Amazon Lex, you can easily create sophisticated, natural language, conversational bots (“chatbots”), virtual agents, and IVR systems.

    With this launch, Amazon Lex is now available in 12 Regions globally: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Africa (Cape Town), and AWS GovCloud (US-West). To get started, go to the Amazon Lex Management Console and see the documentation for more details.

    » Amazon EC2 Fleet instant mode now supports targeted Amazon EC2 On-Demand Capacity Reservations

    Posted On: Sep 22, 2021

    Starting today, you can use EC2 Fleet with targeted On-Demand Capacity Reservations. On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. For targeted Capacity Reservations, instances must specifically target the Capacity Reservation to run in the reserved capacity. Until now, there was no option to use targeted Capacity Reservations when launching an EC2 Fleet.

    You can use targeted Capacity Reservations with EC2 Fleet in the instant mode only. Before requesting an EC2 Fleet, you need to create a Capacity Reservations resource group, add your targeted Capacity Reservations to that group, and reference that group in your EC2 Fleet launch template. When launching, EC2 Fleet will first launch instances into Capacity Reservations in the referenced Capacity Reservations resource group if they match the instance type, platform, and Availability Zone combination. If the number of unused Capacity Reservations is less than the target capacity, EC2 Fleet will launch the remaining capacity based on your EC2 Fleet configuration. This feature is helpful when you run multiple EC2 Fleets and want to use your Capacity Reservations only in selected fleets.
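
    A condensed boto3 sketch, assuming an existing launch template whose CapacityReservationSpecification already targets your Capacity Reservations resource group ARN:

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # Instant fleet: use matching targeted Capacity Reservations first,
    # then launch the remainder as regular On-Demand capacity.
    ec2.create_fleet(
        Type="instant",
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-cr-template",  # placeholder template
                "Version": "$Latest",
            }
        }],
        TargetCapacitySpecification={
            "TotalTargetCapacity": 4,
            "DefaultTargetCapacityType": "on-demand",
        },
        OnDemandOptions={
            "CapacityReservationOptions": {
                "UsageStrategy": "use-capacity-reservations-first"
            }
        },
    )
    ```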

    The feature is available in all commercial AWS Regions except for Asia Pacific (Osaka), EU (Milano), Africa (Cape Town), and China (Beijing, Ningxia). Amazon EC2 Fleet simplifies the provisioning of EC2 capacity across different EC2 instance types, Availability Zones, and purchase models (On-Demand, Reserved Instances, Savings Plans, and Spot) to optimize your application’s scalability, performance, and cost. To learn more about using EC2 Fleet, please visit this page. To learn more about On-Demand Capacity Reservations, please visit this page. To learn more about using On-Demand Capacity Reservations with EC2 Fleet, please visit this page.

    » Amazon ECR adds the ability to replicate individual repositories to other regions and accounts

    Posted On: Sep 22, 2021

    Today, Amazon Elastic Container Registry (ECR) launched the ability to replicate specific repositories to other accounts or Regions, and to see when images were replicated through the ECR API. This gives you granular control to replicate images within the repositories you want, instead of replicating all images in a registry, and the ability to automate actions through the new DescribeImageReplicationStatus API whenever images are replicated.

    To get started, you can specify which repositories to replicate with a prefix within the AWS Management Console or PutReplicationConfiguration API. For example, the prefix “prod” would replicate repositories named prod-1 or prod-app, but not test-app. See the full documentation here and walk-through in the blog post here.
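
    A boto3 sketch of both new pieces (the region, account, and repository values are placeholders): a prefix-filtered replication rule, followed by a replication status check for one image:

    ```python
    import boto3

    ecr = boto3.client("ecr")

    # Replicate only repositories whose names start with "prod".
    ecr.put_replication_configuration(
        replicationConfiguration={
            "rules": [{
                "destinations": [
                    {"region": "us-west-2", "registryId": "111122223333"}  # placeholders
                ],
                "repositoryFilters": [
                    {"filter": "prod", "filterType": "PREFIX_MATCH"}
                ],
            }]
        }
    )

    # Check whether a pushed image has replicated yet.
    status = ecr.describe_image_replication_status(
        repositoryName="prod-app",
        imageId={"imageTag": "latest"},
    )
    print(status["replicationStatuses"])
    ```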

    » Amazon EMR Studio now supports multi-language Jupyter-based notebooks for Spark workloads

    Posted On: Sep 22, 2021

    EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug big data and analytics applications written in R, Python, Scala, and PySpark. Today, we are excited to announce that from EMR 6.4.0 and later, you can use Python, Scala, SparkSQL, and R within the same Jupyter notebook in EMR Studio, providing flexibility to use different programming languages for Spark workloads.

    Previously, you could only write code in one language within the same notebook for Spark workloads. With this feature enhancement to Jupyter notebooks, you can now switch between Python, Scala, SparkSQL, and R within the same Jupyter notebook and share data between cells via temporary tables. You can also use this feature from EMR Notebooks or from Jupyter notebooks talking to Jupyter Enterprise Gateway (JEG) on EMR 6.4.0 and later.
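
    For example (the cell magics follow the EMR release documentation and the dataset path is a placeholder), a PySpark cell can publish a temporary table that a SparkSQL cell in the same notebook then queries:

    ```python
    # Cell 1 - PySpark (the notebook's primary language): stage data as a temp table.
    df = spark.read.json("s3://my-bucket/events/")  # placeholder path
    df.createOrReplaceTempView("events")

    # Cell 2 - switch languages with a cell magic, e.g.:
    # %%sql
    # SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type
    ```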

    EMR Studio is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions.

    To learn more about using multiple languages in the same Jupyter notebook in EMR Studio, see our documentation here.

    » Amazon Lex announces utterance statistics for bots built using the Lex V2 console and API

    Posted On: Sep 22, 2021

    Amazon Lex is a service for building conversational interfaces into any application using voice and text. Starting today, Amazon Lex makes utterance statistics available through the Amazon Lex V2 console and API. You can now use utterance statistics to tune bots built with the Lex V2 console and APIs to further improve the conversational experience for your users. With this launch, you can view and analyze utterance information processed by the bot. This information can be used to improve the performance of your bot by adding new utterances to existing intents and helping you discover new intents that the bot could service. Utterance statistics also enable you to compare performance across multiple versions of a bot.

    There is no additional charge for using utterance statistics. You can access the feature from the Amazon Lex V2 console, the AWS Command Line Interface (CLI), or via APIs. To learn more, visit the Amazon Lex documentation page. Utterance statistics are now available with the Lex V2 console and APIs in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London).
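
    Programmatically, the statistics come from the Lex V2 models API. A sketch (the bot and alias IDs are placeholders) that pulls the most-missed utterances from the last three days:

    ```python
    import boto3

    lex = boto3.client("lexv2-models")

    resp = lex.list_aggregated_utterances(
        botId="ABCDEFGHIJ",        # placeholder bot ID
        botAliasId="TSTALIASID",   # placeholder alias ID
        localeId="en_US",
        aggregationDuration={
            "relativeAggregationDuration": {"timeDimension": "Days", "timeValue": 3}
        },
        sortBy={"attribute": "MissedCount", "order": "Descending"},
    )
    for u in resp["aggregatedUtterancesSummaries"]:
        print(u["utterance"], u.get("missedCount"), u.get("hitCount"))
    ```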

    Amazon Lex V2 console and APIs make it easier to build, deploy and manage bots so that you can expedite building virtual agents, conversational IVR systems, self-service chatbots, or informational bots. For more information, visit Amazon Lex V2 documentation.

    » AWS Ground Station announces Licensing Accelerator

    Posted On: Sep 22, 2021

    AWS is announcing Licensing Accelerator, a new AWS Ground Station feature that provides commercial businesses, space start-ups, and universities access to resources to help them more efficiently secure the spectrum licenses required for their operations and missions. Licensing Accelerator is free of charge to AWS Ground Station customers. AWS Ground Station is a fully managed service that lets customers control satellite communications, process satellite data, and scale their satellite operations. With Licensing Accelerator, AWS Ground Station customers can launch and scale their spacecraft operations faster by leveraging the latest, centrally located information about satellite licensing regulations such as space station licensing, remote sensing licenses, and International Telecommunications Union (ITU) coordination.

    Today, satellite operators must navigate complex and rapidly changing national and international requirements in order to launch, operate, and communicate with their satellites. To start, they must gain approval to launch their satellites. Next, they have to apply for the spectrum needed to communicate with their spacecraft. In the case of Earth exploration satellites, operators also must secure remote sensing licenses. Finally, they have to coordinate their spectrum and operational needs with the ITU and its 193 members. All of this requires working with multiple agencies and a comprehensive understanding of how all the different licensing application processes work. As a result, customers can find it difficult to accurately forecast how much time is needed to apply for licenses and meet all the requirements, resulting in possible launch and service delivery delays. Additionally, they may have to allocate additional capital budgets to meet licensing requirements and complete applications while ensuring their operations and missions continue on schedule.

    Licensing Accelerator removes the guesswork and provides a guide for satellite operators’ licensing requirements. By onboarding to Licensing Accelerator and answering a few questions about their satellite mission, AWS Ground Station customers get a customized step-by-step guide with consolidated checklists and links to free resources for their licensing needs. With Licensing Accelerator, AWS Ground Station customers no longer need to invest the capital to figure out these steps. Licensing Accelerator saves customers time and helps mitigate the risks of launch delays by helping satellite operators more accurately submit their applications and predict the time to get an operation approved.

    Customers who want additional assistance will be routed to Licensing Accelerator Partners, who are expert radio frequency (RF) engineers, consultants, and lawyers with decades of licensing experience. Licensing Accelerator Partners offer a free initial consultation to AWS Ground Station customers with no commitment, so customers can identify the right partner to meet their needs.

    Licensing Accelerator is available in the US East (Ohio) region with additional regions coming soon. To learn more about AWS Ground Station, visit our website here. To get started with Licensing Accelerator, onboard to AWS Ground Station through the AWS Console here and request access to Licensing Accelerator.

    » AWS Single Sign-On is now available in the AWS GovCloud (US-West) Region

    Posted On: Sep 22, 2021

    AWS Single Sign-On is now available in the AWS GovCloud (US-West) Region. For a full list of the regions where AWS SSO is available, see the AWS Regional Services List.

    AWS Single Sign-On (AWS SSO) is where you create, or connect, your workforce identities in AWS once and manage access centrally across your AWS organization. You can choose to manage access just to your AWS accounts or cloud applications. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a user portal to access all of their assigned AWS accounts or cloud applications. AWS SSO can be flexibly configured to run alongside or replace AWS account access management via AWS IAM.

    It is easy to get started with AWS SSO. With just a few clicks in the AWS SSO management console you can connect AWS SSO to your existing identity source and configure permissions that grant your users access to their assigned AWS Organizations accounts and hundreds of pre-integrated cloud applications, all from a single user portal.

    To learn more, please visit the AWS Single Sign-On web page, the AWS Region Availability pages, and the AWS GovCloud (US) web page.

    » Amazon DynamoDB now provides you more granular control of audit logging by enabling you to filter Streams data-plane API activity in AWS CloudTrail

    Posted On: Sep 22, 2021

    You now can use AWS CloudTrail to filter and retrieve Amazon DynamoDB Streams data-plane API activity, giving you more granular control over which DynamoDB API calls you want to selectively log and pay for in CloudTrail and to help address compliance and auditing requirements.

    Data plane events provide visibility into the data plane resource operations performed on or within a resource. You now can specify AWS::DynamoDB::Stream as a resource type, so that you can exercise granular control over logging of streams events and non-streams events for DynamoDB. For example, you can log only DynamoDB Stream APIs to narrow the CloudTrail events you receive, enabling you to identify security issues while controlling costs. With CloudTrail data-plane logging, you can record all API activity on DynamoDB, and receive detailed information such as the AWS Identity and Access Management (IAM) user or role that made a request, the time of the request, and the accessed table. DynamoDB data events are delivered to an Amazon S3 bucket and Amazon CloudWatch Events, creating an audit log of data access so that you can respond to events recorded by CloudTrail.
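
    With advanced event selectors, the filter is expressed against the new resource type. A boto3 sketch for an existing trail (the trail name is a placeholder):

    ```python
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Log only DynamoDB Streams data-plane events on this trail.
    cloudtrail.put_event_selectors(
        TrailName="my-trail",  # placeholder trail
        AdvancedEventSelectors=[{
            "Name": "DynamoDB Streams data events only",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::DynamoDB::Stream"]},
            ],
        }],
    )
    ```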

    CloudTrail logging of DynamoDB data plane events is available in all commercial AWS regions where CloudTrail is available. For data plane events pricing, see AWS CloudTrail pricing. To learn more about filtering DynamoDB streams data plane events, see Logging DynamoDB Operations by Using AWS CloudTrail.

    » Amazon SageMaker Autopilot now generates additional metrics for classification problems

    Posted On: Sep 21, 2021

    Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Starting today, SageMaker Autopilot generates additional metrics, along with the objective metric, for all model candidates. For binary classification problems, Autopilot now generates F1 score (harmonic mean of the precision and recall), accuracy, and AUC (area under the curve) for all model candidates. For multi-class classification, Autopilot now generates both F1 macro and accuracy for all model candidates. As previously supported, you can select any of these metrics as the objective metric to be optimized by your Autopilot experiment. By viewing additional metrics along with the objective metric, you can now quickly assess and compare multiple candidates to build a model that best meets your needs.
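
    For instance, a hedged boto3 sketch of a binary-classification Autopilot job optimized for F1 (the role, S3 paths, and target column are placeholders):

    ```python
    import boto3

    sm = boto3.client("sagemaker")

    sm.create_auto_ml_job(
        AutoMLJobName="churn-autopilot",
        ProblemType="BinaryClassification",
        AutoMLJobObjective={"MetricName": "F1"},  # also e.g. "Accuracy", "AUC"
        InputDataConfig=[{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/churn/train/",  # placeholder training data
            }},
            "TargetAttributeName": "churn",  # placeholder target column
        }],
        OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/output/"},
        RoleArn="arn:aws:iam::111122223333:role/AutopilotRole",  # placeholder role
    )
    ```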

    The additional metrics are now generated in all AWS regions where SageMaker Autopilot is currently supported. For a complete list of metrics and the default objective metric per problem type, please review the documentation. To get started with SageMaker Autopilot, see the Getting Started guide or access Autopilot within SageMaker Studio.

    » SageMaker Studio enables interactive Spark based data processing from Studio Notebooks

    Posted On: Sep 21, 2021

    Amazon SageMaker announces a new set of capabilities that will enable interactive Spark based data processing from SageMaker Studio Notebooks. Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps required to prepare data, as well as build, train, and deploy models. With a single click, data scientists and developers can quickly spin up Studio Notebooks to interactively explore datasets and build ML models.

    Starting today, data scientists and data engineers can visually browse, discover, and connect to Spark data processing environments running on Amazon EMR, right from their Studio notebooks in a few simple clicks. Once connected, they can interactively query, explore and visualize data, and run Spark jobs using the built-in SparkMagic notebook environments for Python and Scala.
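
    As an illustrative notebook sketch (the extension and magic names follow the SageMaker Studio analytics documentation and should be treated as assumptions; the cluster ID is a placeholder), a Studio notebook attaches to a running EMR cluster and then runs Spark code through SparkMagic:

    ```python
    # In a Studio notebook cell: load the analytics extension and connect to EMR.
    %load_ext sagemaker_studio_analytics_extension.magics
    %sm_analytics emr connect --cluster-id j-EXAMPLE12345 --auth-type None

    # Subsequent cells run against the cluster's Spark session, e.g.:
    # df = spark.read.parquet("s3://my-bucket/data/")  # placeholder path
    ```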

    Analyzing, transforming and preparing large amounts of data is a foundational step of any data science and ML workflow and businesses are leveraging Apache Spark for fast data preparation. SageMaker Studio already offers purpose-built and best-in-class tooling such as Experiments, Clarify and Model Monitor for ML. With the newly launched capability, customers can easily access purpose-built Spark environments from Studio Notebooks. SageMaker Studio can therefore now serve as a unified environment for data science and data engineering workflows enabling customers to standardize data workflows onto Studio notebooks.

    These new data analytics capabilities in SageMaker Studio are generally available in all AWS Regions where SageMaker Studio is available, and there are no additional charges to use this capability. For complete information on pricing and regional availability, please refer to the SageMaker Studio pricing page. To learn more, see “Interactive Data Preparation with Studio Notebooks” in the SageMaker Studio Notebooks user guide.

    » Amazon Comprehend announces model management and evaluation enhancements

    Posted On: Sep 21, 2021

    Amazon Comprehend has launched a suite of features for Comprehend Custom to enable continuous model improvements by giving developers the ability to create new model versions, to continuously test on specific test sets, and to migrate new models to existing endpoints. Using AutoML, custom entity recognition allows you to customize Amazon Comprehend to identify entities that are specific to your domain; custom classification enables you to easily build custom text classification models using your business-specific labels. Custom models can subsequently be used to perform inference on text documents, both in real-time and batch processing modes. Creating a custom model is simple - no machine learning experience required. Below is a detailed description of these features:

    Improved Model Management - For most natural language processing (NLP) projects, models are continuously retrained over time as new data is collected or if there is deviation between the training dataset and documents processed at inference. With model versioning and live endpoint updates, you can continuously retrain new model versions, compare the accuracy metrics across versions, and update live endpoints with the best performing model with a single click.

  • Model Versioning allows you to re-train newer versions of an existing model making it easier to iterate and track the accuracy changes. Each new version can be identified with a unique version ID.
  • Active Endpoint Update enables update of an active synchronous endpoint with a new model. This ensures that you can deploy a new model version into production without any downtime.

    Improved Control for Model Training/Evaluation - Data preparation and model evaluation are often the most tedious parts of any NLP project. Model evaluation and troubleshooting can often be confusing without a clear indication of the training and test data split. You can now provide separate train and test datasets during model training. We also launched a new training mode that improves inference accuracy on long documents spanning multiple paragraphs.

  • Customer Provided Test Dataset allows you to provide an optional test dataset during model training. Previously, you had to manually run an inference job against a test set to evaluate a model. As additional data is collected and new model versions are trained, evaluating model performance using the same test dataset can provide for a fair comparison across model versions.
  • New Training Mode improves the accuracy of the entity recognizer model for long documents, containing multiple paragraphs. During model training using CSV annotations, choosing the ONE_DOC_PER_FILE input format for long documents allows the model to learn more contextual embeddings, significantly improving the model accuracy.

    To learn more and get started, visit the Amazon Comprehend product page or our documentation.
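
    A boto3 sketch of the features above: versioned retraining with a customer-provided test set, then a zero-downtime endpoint update (the names, ARNs, and buckets are placeholders):

    ```python
    import boto3

    comprehend = boto3.client("comprehend")

    # Train a new version of an existing custom classifier, with a held-out test set.
    resp = comprehend.create_document_classifier(
        DocumentClassifierName="support-ticket-router",
        VersionName="v2",  # new with model versioning
        LanguageCode="en",
        DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendRole",  # placeholder
        InputDataConfig={
            "S3Uri": "s3://my-bucket/train.csv",     # placeholder training data
            "TestS3Uri": "s3://my-bucket/test.csv",  # customer-provided test set
        },
    )

    # Point the live endpoint at the new version once training completes.
    comprehend.update_endpoint(
        EndpointArn="arn:aws:comprehend:...:document-classifier-endpoint/router",  # placeholder
        DesiredModelArn=resp["DocumentClassifierArn"],
    )
    ```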

    » AWS Site-to-Site VPN releases updated Download Configuration utility

    Posted On: Sep 21, 2021

    Today, AWS Site-to-Site VPN released an updated Download Configuration utility. Customers can now generate configuration templates for compatible Customer Gateway (CGW) devices, simplifying how customers set up VPN connections to AWS.

    This update also adds support for downloading configuration templates using a new API and Internet Key Exchange version 2 (IKEv2) parameters for many popular CGW devices; see the Your Customer Gateway page for compatible devices. As IKEv2 supports the latest security algorithms, reduced protocol complexity, and simpler security association (SA) negotiation, Site-to-Site VPN encourages customers to move to IKEv2. 

    Customers can create and download configuration templates using either the AWS Management Console, or by using two new APIs — GetVpnConnectionDeviceTypes and GetVpnConnectionDeviceSampleConfiguration. For more information, the Getting Started with VPN user guide and the AWS Site-to-Site VPN API reference may be helpful.
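
    A boto3 sketch of the two calls (the VPN connection ID is a placeholder; device type IDs come from the first call):

    ```python
    import boto3

    ec2 = boto3.client("ec2")

    # List supported customer gateway devices and pick one.
    devices = ec2.get_vpn_connection_device_types()["VpnConnectionDeviceTypes"]
    device_id = devices[0]["VpnConnectionDeviceTypeId"]

    # Download an IKEv2 configuration template for that device.
    config = ec2.get_vpn_connection_device_sample_configuration(
        VpnConnectionId="vpn-0123456789abcdef0",  # placeholder connection
        VpnConnectionDeviceTypeId=device_id,
        InternetKeyExchangeVersion="ikev2",
    )["VpnConnectionDeviceSampleConfiguration"]
    print(config)
    ```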

    These features are available to customers across all AWS commercial and AWS GovCloud Regions.

    » AWS Amplify CLI and Admin UI is now generally available in US West (N. California), Europe (Paris), Europe (Stockholm), South America (São Paulo), and Middle East (Bahrain)

    Posted On: Sep 21, 2021

    AWS Amplify offers a fully managed static web hosting service that accelerates your application release cycle by providing a simple CI/CD workflow for building and deploying full-stack static web applications. Simply connect your application's code repository in the console, and changes to your frontend and backend are deployed in a single workflow on every code commit.

    With today’s launch, AWS Amplify CLI and Admin UI is now available in 17 AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), EU (Stockholm), South America (São Paulo), and Middle East (Bahrain).

    Get started in the Amplify Admin UI or CLI.

    » Optimize your Amazon Forecast model with the accuracy metric of your choice

    Posted On: Sep 21, 2021

    We’re excited to announce that in Amazon Forecast, you can now select the accuracy metric of your choice to direct AutoML to optimize training a predictor for the selected accuracy metric. Additionally, we have added three more accuracy metrics to evaluate your predictor – average weighted quantile loss (Average wQL), mean absolute percentage error (MAPE), and mean absolute scaled error (MASE).

    Depending on their business operations and the accuracy metric traditionally used in evaluating forecasts, customers preferred different accuracy metrics for evaluating their predictors. Previously, customers understood the strength of their predictor by evaluating three accuracy metrics: the weighted quantile loss (wQL) metric for each selected distribution point, weighted absolute percentage error (WAPE), and root mean square error (RMSE), but had no control over the metric that AutoML used to optimize model accuracy.

    With today’s launch, you can direct AutoML to optimize the predictor for a specific accuracy metric of your choosing, and Forecast provides five model accuracy metrics for you to assess the strength of your forecasting models: average weighted quantile loss (Average wQL) of all selected distribution points, weighted absolute percentage error (WAPE), mean absolute percentage error (MAPE), mean absolute scaled error (MASE), and root mean square error (RMSE), calculated at the mean forecast. All of these metrics are non-negative, and a lower value indicates a smaller error and therefore a more accurate model.
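
    A hedged boto3 sketch of directing AutoML at a specific metric (the dataset group ARN, frequency, and horizon are placeholders):

    ```python
    import boto3

    forecast = boto3.client("forecast")

    forecast.create_predictor(
        PredictorName="demand_predictor",
        ForecastHorizon=14,
        PerformAutoML=True,
        OptimizationMetric="MAPE",  # or AverageWeightedQuantileLoss, WAPE, MASE, RMSE
        InputDataConfig={
            "DatasetGroupArn": "arn:aws:forecast:...:dataset-group/demand"  # placeholder
        },
        FeaturizationConfig={"ForecastFrequency": "D"},
    )
    ```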

    To get started with this capability, read our blog to learn more about each accuracy metric and see Evaluating Predictor Accuracy. You can use this capability in all Regions where Forecast is publicly available. For more information about Region availability, see Region Table.

    » Amazon Detective supports S3 and DNS finding types, adds finding details

    Posted On: Sep 20, 2021

    Amazon Detective expands security investigation support for Amazon Simple Storage Service (S3) and DNS-related findings on Amazon GuardDuty, providing full coverage of all detections from GuardDuty. Along with this, Detective now makes it even easier for a security analyst to investigate entities and behaviors using a revamped user experience. 

    Now, security analysts can easily investigate unusual activities on their S3 buckets, and answer questions such as “Who created the S3 bucket?”, “When was the S3 bucket created?”, “Who made the S3 bucket public?”, and “Did the user execute sensitive APIs such as disable logging on other S3 buckets?”. They can also deep dive on findings related to low-reputation domain names (such as those associated with cryptocurrency-related activities) and algorithmically-generated domains. With this, security analysts can now easily analyze, investigate, and quickly identify the root cause of all GuardDuty finding types using Detective.

    Amazon Detective also improved the existing resource profile pages to enable customers to more quickly focus on the activity associated with the involved entities for a finding. The new finding overview provides a more complete set of details for each finding, and provides links to the profiles for each involved entity. Analysts can use this to further understand how various entities such as EC2 instances, IAM principals, and IP addresses are associated with findings. For example, Detective aggregates S3 bucket-level activity and relevant investigation context from existing data sources in an S3 bucket profile to aid investigations and provide analysts with the ability to pivot to other resources, such as the IAM user/roles sessions resources that accessed the bucket, or the remote IP address that invoked S3 bucket level APIs within the scope time.

    Security analysts who already use Detective for their security investigations will have the new capabilities enabled without performing any additional steps. They can also use the "Investigate in Detective" option in GuardDuty and Security Hub to pivot to Detective for further investigation of the newly supported findings. To read more about how to pivot from GuardDuty and Security Hub to Detective, see the Detective User Guide.

    Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues. To get started, enable a 30-day free trial of Amazon Detective with just a few clicks in the AWS Management console. See the AWS Regions page for all the regions where Detective is available. To learn more, visit the Amazon Detective product page.

    » Amazon Redshift Cross-Account Data Sharing is now generally available in AWS GovCloud (US) Regions

    Posted On: Sep 20, 2021

    Amazon Redshift data sharing allows you to share live and transactionally consistent data across different Redshift clusters without the complexity and delays associated with data copies and data movement. The ability to share data across clusters that are in the same AWS account is already available in AWS GovCloud (US) Regions. Now, sharing data across Redshift clusters in different AWS accounts is also generally available in AWS GovCloud (US) Regions. Cross-account data sharing is supported on all Amazon Redshift RA3 node types. There is no additional cost to use cross-account sharing on your Amazon Redshift clusters.

    With data sharing, you can securely share data at many levels including schemas, tables, views, and user-defined functions, and use fine-grained controls to specify access for each data consumer. With cross-account data sharing, you can provide data access to other business groups within your organization, partners, and customers, enabling you to securely offer data and analytics as a service. Users with access to shared data can discover and query the data with high performance using standard SQL and analytics tools. Queries accessing shared data use the compute resources of the consumer Redshift cluster and do not impact the performance of the producer cluster. In addition to the database-level privileges available to control sharing within the same account, Amazon Redshift integrates with AWS Identity and Access Management (IAM) and offers additional granular security controls with a new authorization and acceptance workflow in cross-account sharing. With this, you can ensure that only authorized users are able to share data to other AWS accounts, including outside organizations, and consume data coming from other AWS accounts. Amazon Redshift also provides the ability to monitor data sharing permissions and usage across all the consumer clusters and accounts, and to revoke access when needed.
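
    On the producer side, the cross-account authorization step looks like the boto3 sketch below (the datashare ARN and consumer account ID are placeholders); the datashare itself is created beforehand in SQL, for example with CREATE DATASHARE:

    ```python
    import boto3

    redshift = boto3.client("redshift")

    # Authorize a datashare for a consumer AWS account; the consumer then
    # associates it with one of their clusters to start querying.
    redshift.authorize_data_share(
        DataShareArn="arn:aws:redshift:us-east-1:111122223333:datashare:abc123/sales_share",
        ConsumerIdentifier="999988887777",  # placeholder consumer account
    )
    ```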

    Learn more about the data sharing capability on the feature page, and refer to the blog and documentation to learn how to get started with cross-account data sharing.

    » AWS announces General Availability of the Amazon GameLift Plug-in and AWS CloudFormation Templates for Unity

    Posted On: Sep 20, 2021

    Today, we are excited to announce the general availability (GA) of the Amazon GameLift Plug-in for Unity, making it easier to access GameLift resources and integrate GameLift into your Unity game. Trusted by some of the most successful game companies in the world like Ubisoft and Gungho, GameLift deploys, operates, and scales dedicated servers for multiplayer games. With this update, game developers can use the GameLift Plug-in for Unity to access GameLift APIs and deploy AWS CloudFormation templates for common gaming scenarios.

    To ease game development, a majority of game developers utilize a game engine, with Unity as a popular choice. The GameLift Plug-in for Unity provides everything a Unity developer needs to access GameLift to deploy and scale game servers. After downloading and installing the plug-in package from the GitHub repo/download page into the Unity development environment, developers can use the plug-in UI to configure settings, perform local testing of builds of their game server, and import and run a sample Unity game. The GameLift Plug-in for Unity also includes five pre-built CloudFormation template sample scenarios that developers can customize for their game, making it easier to integrate GameLift without having to be an AWS expert.

    Benefits of using the GameLift Plug-in for Unity:

  • Make it easier to use GameLift. The GameLift Plug-in for Unity provides a simple download and installation experience and includes native Unity UIs and workflows that make it easy for Unity developers to get started using GameLift.
  • Reduce the amount of development effort and time. The GameLift Plug-in for Unity comes with the tools for preparing Unity games to run on the GameLift service, including libraries needed to access GameLift APIs. It circumvents the need for developers to download the GameLift Server SDK in source code form and manually build the C# libraries.
  • Help developers learn how to use GameLift and access its features. The included CloudFormation templates allow developers to build a game server backend using other AWS services with UI-level integrations in the Unity Editor. The GameLift Plug-in for Unity also includes a sample game developers can use to explore the basics of integrating their game with Amazon GameLift.

    As a reminder, GameLift is available in 22 public regions: US East (Ohio and N. Virginia), US West (N. California and Oregon), Africa (Cape Town), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, and Stockholm), Middle East (Bahrain), South America (São Paulo), China (Beijing) operated by Sinnet, and China (Ningxia) operated by NWCD.

    To learn more, read our blog, check out the GameLift release notes or dive into the GameLift product page.

    » Now authenticate Amazon EMR Studio users using IAM-based authentication or IAM Federation, in addition to AWS Single Sign-On

    Posted On: Sep 20, 2021

    Amazon EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark. Today, we are introducing additional authentication options for EMR Studio. Before this release, to log in to EMR Studio, you needed to integrate your identity provider (IdP) with AWS Single Sign-On (AWS SSO). With this release, you can now choose to use AWS Identity and Access Management (IAM) authentication or IAM federation with your corporate credentials to log in to EMR Studio, in addition to using AWS SSO.

    Each EMR Studio provides a unique access URL allowing users to log in directly to their Studio environments with their corporate credentials. When you choose IAM authentication, you can log in to EMR Studio via the AWS Console or the EMR Studio access URL, which redirects you to the IAM login page for authentication. When you choose IAM federation or AWS SSO-based authentication, the Studio access URL redirects to your identity provider's sign-in portal for authentication. You can also access EMR Studio from your identity provider's portal; if you have more than one Studio in your environment, you can access specific Studios directly from your IdP portal. AWS SSO is a great choice if you want to define federated access permissions for your users based on their group memberships in a single centralized directory such as Microsoft Active Directory. If you use multiple directories, or want to manage permissions based on user attributes, consider IAM as your design alternative.

    With each of these options, you can define per-user fine-grained access control on resources. When using AWS SSO, you can use IAM session policies to manage permissions. For example, you can create a session policy to restrict users from creating a new EMR cluster. When using IAM, you can grant users access to an EMR Studio with IAM permissions policies and attribute-based access control (ABAC). For example, you can attach a permissions policy to an IAM identity for creating new EMR clusters.
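
    For instance, a deny-only policy that blocks cluster creation might look like the following boto3 sketch; the policy name and statement are illustrative assumptions, not an official template.

        import json

        import boto3

        # Illustrative policy: denies creating new EMR clusters (the RunJobFlow API).
        deny_cluster_creation = {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Deny",
                    "Action": "elasticmapreduce:RunJobFlow",
                    "Resource": "*",
                }
            ],
        }

        iam = boto3.client("iam")
        iam.create_policy(
            PolicyName="emr-studio-no-cluster-creation",  # hypothetical name
            PolicyDocument=json.dumps(deny_cluster_creation),
        )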

    To learn more about federation options in AWS, see our documentation here. To learn more about using IAM-based authentication or IAM federation on EMR Studio, see our Amazon EMR Studio documentation here. EMR Studio is available in US East (Ohio, N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland, Frankfurt, London, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo) regions.

    » Amazon Connect Chat now supports passing a customer display name and contact attributes through the chat user interface

    Posted On: Sep 20, 2021

    Amazon Connect Chat now supports passing a customer display name and contact attributes through the chat user interface so you can personalize the chat customer experience. Contact attributes include relevant metadata associated with the contact such as customer ID, loyalty status, or even context about the webpage the customer was on when they started the chat. Contact attributes are available in Amazon Connect flows, making it easy to create unique and compelling customer experiences, such as prioritizing a platinum level customer or performing an agent screen pop with the relevant customer information displayed. In addition, you can also share the customer name using the chat user interface, ensuring that the name is visible to both the agent and customer throughout the interaction, enabling your agents to personalize the conversation.
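
    If you start chats programmatically rather than through the hosted widget, the same personalization is available on the StartChatContact API; a minimal boto3 sketch with placeholder IDs follows.

        import boto3

        connect = boto3.client("connect")
        response = connect.start_chat_contact(
            InstanceId="11111111-2222-3333-4444-555555555555",     # placeholder
            ContactFlowId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder
            ParticipantDetails={"DisplayName": "Jane Doe"},  # name shown throughout the chat
            Attributes={  # contact attributes available in Amazon Connect flows
                "customerId": "C-1024",
                "loyaltyStatus": "platinum",
                "sourcePage": "/checkout",
            },
        )
        print(response["ContactId"])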

    This feature is available in all AWS regions where Amazon Connect chat widget is offered. To learn more and get started, see the following resources or visit the Amazon Connect website.

  • Pass the customer display name
  • Pass contact attributes

    » AWS Elastic Beanstalk supports Dynamic Instance Type Selection

    Posted On: Sep 20, 2021

    AWS Elastic Beanstalk now supports dynamic instance type selection for Elastic Beanstalk environments. Elastic Beanstalk automatically fetches all EC2 instance types available in your Region and Availability Zones, so you can run a wider variety of applications. With dynamic instance type selection, you can choose the instance type best suited to optimize your application’s performance. For example, if you have machine learning applications, you can optimize performance by selecting an accelerated computing instance type such as p3 or p4d. On the Elastic Beanstalk console, you can navigate to the ‘Capacity’ tab in ‘Configure more options’ to select the instance type.
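
    Outside the console, the same setting can be applied through the aws:ec2:instances namespace; the sketch below, in which the environment name is a placeholder, updates an environment to an accelerated instance type.

        import boto3

        eb = boto3.client("elasticbeanstalk")
        eb.update_environment(
            EnvironmentName="my-ml-env",  # placeholder
            OptionSettings=[
                {
                    "Namespace": "aws:ec2:instances",
                    "OptionName": "InstanceTypes",
                    "Value": "p3.2xlarge",  # a comma-separated list is also accepted
                }
            ],
        )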

    To determine which instance types meet your requirements, such as supported Regions, see Available instance types in the Amazon EC2 User Guide for Linux Instances or Available instance types in the Amazon EC2 User Guide for Windows Instances. For more information on Elastic Beanstalk environment’s Amazon EC2 instances, visit Elastic Beanstalk Developer Guide.

    » Amazon CloudWatch request metrics for Amazon S3 Access Points now available

    Posted On: Sep 17, 2021

    Amazon S3 Access Points customers can now configure Amazon CloudWatch request metrics. With this launch, S3 request metrics can be generated for all objects in a bucket, or for specific combinations of prefix, object tags, or access points. With S3 Access Points, you can easily build the right access controls to a shared dataset, and with support for filtering by access point, you can now monitor request patterns by access control. You can use the S3 Management Console, SDK, API, or AWS CloudFormation to enable S3 request metrics. The metrics are available at 1-minute intervals and can be monitored in both the Amazon S3 console and the Amazon CloudWatch console.
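
    Programmatically, a request-metrics configuration filtered by access point might be set up as in this boto3 sketch; the bucket name, metrics ID, and access point ARN are placeholders.

        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_metrics_configuration(
            Bucket="my-shared-dataset-bucket",  # placeholder
            Id="by-access-point",               # placeholder metrics configuration ID
            MetricsConfiguration={
                "Id": "by-access-point",
                "Filter": {
                    # Emit request metrics only for requests made through this access point.
                    "AccessPointArn": "arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-ap"
                },
            },
        )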

    Amazon CloudWatch Metrics for S3 Access Points is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about Amazon CloudWatch Metrics, visit Monitoring Metrics with Amazon CloudWatch in the Amazon S3 User Guide. For more information about Amazon CloudWatch pricing, see Amazon CloudWatch Pricing.

    » AWS RoboMaker now supports container images in simulation

    Posted On: Sep 17, 2021

    AWS RoboMaker, a service that allows customers to simulate robotics applications at cloud scale, now supports container images. This feature enables customers to use the container tools that they are already familiar with to build and package their code for running simulations in RoboMaker.  

    With container support, you can now take advantage of container features such as cross-environment execution and dependency package locking while using RoboMaker. To use this feature, you create a RoboMaker Robot Application and Simulation Application with OCI-compliant images stored in Amazon ECR (Amazon Elastic Container Registry). You can then use the created applications to run simulation jobs in RoboMaker.
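
    A rough sketch of creating a container-based simulation application with boto3 follows; the name and image URI are placeholders, and the software-suite values shown are assumptions based on the container workflow.

        import boto3

        robomaker = boto3.client("robomaker")
        robomaker.create_simulation_application(
            name="my-sim-app",  # placeholder
            simulationSoftwareSuite={"name": "SimulationRuntime"},
            robotSoftwareSuite={"name": "General"},
            environment={
                # OCI-compliant image stored in Amazon ECR (placeholder URI).
                "uri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-sim:latest"
            },
        )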

    AWS RoboMaker is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore). To get started, please visit the RoboMaker webpage or run a sample simulation job in the RoboMaker console.

    » Amazon MSK now supports running multiple authentication modes and updates to TLS encryption settings

    Posted On: Sep 17, 2021

    Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports the simultaneous use of multiple authentication modes and updates to encryption-in-transit settings for Amazon MSK clusters. These features allow you to migrate your clients seamlessly from one authentication mode to another and update encryption settings to match those changes. 

    With this launch, you can now activate any combination of authentication modes (mutual TLS, SASL/SCRAM, or IAM Access Control) on new or existing clusters, which is useful if you are migrating to a new authentication mode or need to run multiple authentication modes simultaneously. You also have the flexibility to update TLS encryption settings for data moving between clients and brokers, so that your encryption settings can evolve with your requirements. Additionally, you can update the private certificate authority recognized by the cluster, which is used to sign certificates for mutual TLS authentication.
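
    For example, enabling IAM Access Control alongside existing mutual TLS during a migration could look like the following boto3 sketch; the cluster ARN and certificate authority ARN are placeholders.

        import boto3

        kafka = boto3.client("kafka")
        cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/demo/xxxx"  # placeholder

        current_version = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

        kafka.update_security(
            ClusterArn=cluster_arn,
            CurrentVersion=current_version,
            ClientAuthentication={
                "Sasl": {"Iam": {"Enabled": True}},  # add IAM Access Control
                "Tls": {  # keep mutual TLS while clients migrate
                    "Enabled": True,
                    "CertificateAuthorityArnList": [
                        "arn:aws:acm-pca:us-east-1:123456789012:certificate-authority/yyyy"  # placeholder
                    ],
                },
            },
        )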

    The ability to update authentication and TLS encryption is available in all regions where Amazon MSK is available. To learn more about these features or how to migrate clients to new authentication modes, visit Amazon MSK’s user documentation.

    » Amazon QuickSight launches Dataset-as-a-Source

    Posted On: Sep 17, 2021

    Amazon QuickSight announced Dataset-as-a-Source, a new feature that saves customers time and improves data governance. Dataset-as-a-Source allows users to create a new dataset using one or more existing datasets as input, and combine it with brand new data sources, such as other databases, CSV files, and apps like Twitter. Curators can create central datasets that Authors can reuse to create their own. Curators can control the definitions of business metrics in the central datasets, and Authors save time by getting a starting point to create new datasets themselves. If the dataset’s definition needs to be updated, Curators can make changes to the central datasets and dependent datasets get automatically updated. Dataset-as-a-Source can be used to combine datasets with Direct Query, SPICE, or a combination of the two. To learn more, visit here.

    Previously, Authors were dependent on Curators to create complex datasets, and had to wait while Curators spent time building them. Additionally, each dataset had to be created from scratch (from the original data sources). As a result, dataset owners had to replicate relevant business metrics in each individual dataset. When a metric definition changed, it was inefficient to update datasets one at a time, and you ran the risk of missing the update in one particular dataset or making a mistake in one.

    Dataset-as-a-Source allows Curators to create central datasets and share these with Authors on their team. Authors can use these datasets as a starting point to create their own datasets. Curators can define business metrics in central datasets that Authors can use, without having to redo the work of recreating the field themselves. Furthermore, Authors benefit from all the join and filter work Curators did, and don't have to do it again. If Curators wish to make any changes to these centrally-defined fields, they can make modifications in one central dataset and the associated datasets get the updates automatically, saving time and preventing errors.

    Dataset-as-a-Source is available in Amazon QuickSight Standard and Enterprise Editions in all QuickSight regions - US East (N. Virginia and Ohio), US West (Oregon), Canada, São Paulo, EU (Frankfurt, Ireland and London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), and AWS GovCloud (US-West). For further details, visit here. Currently, datasets using Row Level Security (RLS) or Column Level Security (CLS) cannot be used as a source for a new dataset; this capability will be added in the near future. RLS and CLS can still be applied to the dependent datasets created from the source dataset.

    » AWS IQ now supports AWS Certified experts and consulting firms located in the UK & France

    Posted On: Sep 16, 2021

    AWS IQ now supports AWS Certified experts and consulting firms located in the UK and France. Quickly find, engage, and get help from experts and consulting firms in the UK and France for on-demand work.

    To help you find the best expert, select your preferred location on the IQ request form. Experts will be able to see the preferred location when reviewing your request. When experts respond, you can now see the expert or firm’s location in the expert’s profile. Post a request or learn more about AWS IQ.

    Experts and consulting firms based in France, the UK, or the US can register today to grow your business, connect with new AWS customers, and tackle exciting, hands-on projects. Expert profiles must have achieved an Associate, Professional, or Specialty certification.

    » AWS CodeCommit is Now Available in the Africa (Cape Town) Region

    Posted On: Sep 16, 2021

    AWS CodeCommit is now available in the Africa (Cape Town) region. AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure. You can use CodeCommit to store anything from code to binaries. It supports the standard functionality of Git, so it works seamlessly with your existing Git-based tools.

    To learn more about using AWS CodeCommit, see the CodeCommit documentation or visit the CodeCommit console.

    For a full list of AWS Regions where AWS CodeCommit is available, see the AWS Regional Services page.

    » Amazon Corretto 17 is now generally available

    Posted On: Sep 16, 2021

    Amazon Corretto 17 is now generally available. This version supports the latest Java feature release, JDK 17, and is available on Linux, Windows, and macOS. You can download Corretto 17 from the downloads page.

    Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Corretto is distributed by Amazon under an open source license.

    » Announcing Amazon MSK Connect: Run serverless, scalable Kafka Connect clusters in Amazon MSK

    Posted On: Sep 16, 2021

    Amazon MSK Connect is now available, enabling you to run fully managed Kafka Connect clusters with Amazon Managed Streaming for Apache Kafka (Amazon MSK). With a few clicks, MSK Connect allows you to easily deploy, monitor, and scale connectors that move data between Apache Kafka and Amazon MSK clusters and external systems such as databases, file systems, and search indices. MSK Connect eliminates the need to provision and maintain cluster infrastructure. Connectors scale automatically in response to increases in usage, and you pay only for the resources you use. Because MSK Connect is fully compatible with Kafka Connect, it is easy to migrate workloads without code changes. MSK Connect supports both Amazon MSK-managed and self-managed Apache Kafka clusters.
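
    A compressed boto3 sketch of creating a connector follows; every name, ARN, endpoint, and configuration value is a placeholder, and the connector configuration shown is an illustrative S3 sink rather than a prescribed setup.

        import boto3

        kc = boto3.client("kafkaconnect")
        kc.create_connector(
            connectorName="demo-s3-sink",  # placeholder
            kafkaConnectVersion="2.7.1",
            capacity={"provisionedCapacity": {"mcuCount": 1, "workerCount": 1}},
            connectorConfiguration={  # illustrative sink settings
                "connector.class": "io.confluent.connect.s3.S3SinkConnector",
                "tasks.max": "2",
                "topics": "clickstream",
            },
            kafkaCluster={"apacheKafkaCluster": {
                "bootstrapServers": "b-1.demo.kafka.us-east-1.amazonaws.com:9092",  # placeholder
                "vpc": {"subnets": ["subnet-0abc"], "securityGroups": ["sg-0abc"]},
            }},
            kafkaClusterClientAuthentication={"authenticationType": "NONE"},
            kafkaClusterEncryptionInTransit={"encryptionType": "PLAINTEXT"},
            plugins=[{"customPlugin": {
                "customPluginArn": "arn:aws:kafkaconnect:us-east-1:123456789012:custom-plugin/s3-sink/xxxx",
                "revision": 1,
            }}],
            serviceExecutionRoleArn="arn:aws:iam::123456789012:role/msk-connect-role",  # placeholder
        )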

    You can get started with MSK Connect from the Amazon MSK console or the AWS CLI. MSK Connect is available in all commercial AWS regions where Amazon MSK is available. To learn more, visit the product page and the Amazon MSK Developer Guide.

    » Announcing Build on AWS for Startups

    Posted On: Sep 16, 2021

    Amazon Web Services (AWS) announces the general availability of Build on AWS, a new offering from AWS Activate designed to help startups build their infrastructure on AWS in minutes. Build on AWS is a collection of infrastructure templates and reference architectures covering a wide variety of solutions curated specifically for startups. These solutions are built by experts at AWS and based on AWS best practices. This enables startups to focus on building their core product knowing they’re using AWS best practices for their underlying cloud infrastructure. With the launch of Build on AWS, we’ve simplified the first steps of launching scalable, reliable, secure, and optimized infrastructure tailored to startups’ industry or use case.

    Build on AWS features CloudFormation templates and reference architectures. CloudFormation templates are deployable with a single click, and reference architectures contain an architecture diagram that can be replicated. Within Build on AWS, startups will find basic solutions and personalized solutions. Basic solutions contain core solutions to get started on AWS, including Hosting a WordPress Website on Amazon Lightsail and Building a Data Processing API using Serverless. A guided path is also provided to help startups choose and deploy the right solution. Personalized solutions are solution templates relevant to the startup’s industry, interests, and AWS usage. Startups also have the ability to search through all of the templates by use case or by underlying AWS services. With hundreds of deployments, every template is ready for production traffic.

    Watch this short video to learn more. Log into the Activate Console to get started. Not an Activate member? Apply for Activate today.

    » Route 53 Resolver DNS Firewall Now Available in Asia Pacific (Osaka) Region

    Posted On: Sep 16, 2021

    Today, we are pleased to announce that the Route 53 Resolver DNS Firewall is now generally available in the Asia Pacific (Osaka) Region. The Route 53 Resolver DNS Firewall is a managed firewall that allows customers to block DNS queries made for known malicious domains and to allow queries for trusted domains.

    With Route 53 Resolver DNS Firewall, customers can centrally deploy DNS firewall rules across accounts, organizational units (OUs), and VPCs in their organization using AWS Firewall Manager. Alternatively, customers can choose to share their firewall rules directly across their accounts by using AWS Resource Access Manager (RAM). They can utilize Amazon CloudWatch Metrics and Contributor Insights to understand the number of DNS queries being blocked or allowed by their firewall, down to the rule level. They can also enable logging by using Route 53 Resolver Query Logs to get instance-level information on blocked and allowed queries, such as the instance ID or source IP address of the instance making the query. AWS Managed Domain Lists allow customers to quickly get started with baseline protections against common network threats.
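
    A minimal boto3 sketch of the rule-group workflow follows; the domain, names, and VPC ID are placeholders.

        import uuid

        import boto3

        r53r = boto3.client("route53resolver")

        # Create a domain list and add a (placeholder) domain to block.
        domain_list = r53r.create_firewall_domain_list(
            CreatorRequestId=str(uuid.uuid4()), Name="blocked-domains"
        )["FirewallDomainList"]
        r53r.update_firewall_domains(
            FirewallDomainListId=domain_list["Id"],
            Operation="ADD",
            Domains=["malicious.example.com."],
        )

        # Create a rule group with a BLOCK rule, then associate it with a VPC.
        group = r53r.create_firewall_rule_group(
            CreatorRequestId=str(uuid.uuid4()), Name="demo-rule-group"
        )["FirewallRuleGroup"]
        r53r.create_firewall_rule(
            CreatorRequestId=str(uuid.uuid4()),
            FirewallRuleGroupId=group["Id"],
            FirewallDomainListId=domain_list["Id"],
            Name="block-known-bad",
            Priority=100,
            Action="BLOCK",
            BlockResponse="NODATA",
        )
        r53r.associate_firewall_rule_group(
            CreatorRequestId=str(uuid.uuid4()),
            FirewallRuleGroupId=group["Id"],
            VpcId="vpc-0abc123",  # placeholder
            Priority=101,
            Name="demo-association",
        )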

    To get started with this feature, visit the Route 53 documentation. To learn more about pricing, you can visit the Route 53 pricing page.

    » Amazon RDS now supports X2g instances for MySQL, MariaDB, and PostgreSQL databases.

    Posted On: Sep 16, 2021

    Amazon Relational Database Service (Amazon RDS) now supports AWS Graviton2-based X2g database (DB) instances for MySQL, MariaDB, and PostgreSQL databases. X2g DB instances offer double the memory per vCPU compared to R6g/R5 instances and the lowest cost per GiB of memory in Amazon RDS for MySQL, MariaDB, and PostgreSQL databases. The X2g.16xl DB instance has 33% more memory than previously available in Amazon RDS DB instances for MySQL, MariaDB, and PostgreSQL databases and is a great choice for memory-intensive DB workloads.

    AWS Graviton2 processors are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. AWS Graviton2 Processors deliver 7x more performance, 4x more compute cores, 5x faster memory, and 2x larger caches versus first-generation AWS Graviton Processors. Additionally, the AWS Graviton2 processors feature always-on fully encrypted DDR4 memory and 50% faster per core encryption performance.

    You can launch new X2g DB instances with a single click in the Amazon RDS Management Console or via a single command in the AWS Command Line Interface (AWS CLI). If you want to upgrade your existing Amazon RDS DB instance to X2g, you can do so on the Modify DB Instance page in the Amazon RDS Management Console or via the AWS CLI. Amazon RDS X2g DB instances are supported on MySQL version 8.0.25 and higher, MariaDB version 10.4.18 and higher, and PostgreSQL version 12.5 and higher. For more details, refer to the Amazon RDS User Guide.
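
    As a sketch, the modification is a single boto3 call; the instance identifier is a placeholder, and ApplyImmediately controls whether the change waits for the next maintenance window.

        import boto3

        rds = boto3.client("rds")
        rds.modify_db_instance(
            DBInstanceIdentifier="my-postgres-db",  # placeholder
            DBInstanceClass="db.x2g.16xlarge",      # Graviton2 memory-optimized class
            ApplyImmediately=True,
        )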

    X2g DB instances are available for Amazon RDS in the AWS US East (N. Virginia and Ohio), US West (Oregon), and Europe (Ireland) regions. They are offered in 7 sizes, providing up to 64 vCPUs and 1,024 GiB of memory.

    For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. Launch new Graviton2-based DB instances for Amazon RDS for MySQL, MariaDB, and PostgreSQL in the Amazon RDS Management Console or using the AWS CLI today.

    » Amazon RDS now supports R5b instances for MySQL and PostgreSQL databases

    Posted On: Sep 16, 2021

    Amazon Relational Database Service (Amazon RDS) now supports R5b database (DB) instances for MySQL and PostgreSQL databases. R5b DB instances support up to 3x the I/O operations per second (IOPS) and 3x the bandwidth on Amazon Elastic Block Store (Amazon EBS) compared to the latest x86-based memory-optimized DB instances (R5) available in Amazon RDS for MySQL and PostgreSQL databases. R5b DB instances are a great choice for IO-intensive DB workloads.

    You can launch new R5b DB instances with a single click in the Amazon RDS Management Console or via a single command in the AWS Command Line Interface (AWS CLI). If you want to upgrade your existing Amazon RDS DB instance to R5b, you can do so on the Modify DB Instance page in the Amazon RDS Management Console or via the AWS CLI. Amazon RDS R5b DB instances are supported on MySQL version 8.0.25 and higher, and PostgreSQL version 12.5 and higher. For more details, refer to the Amazon RDS User Guide.

    R5b DB instances are available for Amazon RDS for MySQL and PostgreSQL databases in the AWS US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Singapore, Tokyo) regions. They are offered in 8 sizes, providing up to 96 vCPUs, 768 GiB of memory, 25 Gbps of networking bandwidth, and 60 Gbps of Amazon EBS bandwidth.

    For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. Launch an R5b DB instance for Amazon RDS for MySQL and PostgreSQL in the Amazon RDS Management Console or using the AWS CLI today.

    » Amazon CloudWatch Application Insights adds account application auto-discovery and new health dashboard

    Posted On: Sep 16, 2021

    Setting up monitoring and managing the health of your business applications is now even easier. CloudWatch Application Insights can now discover the applications and resources in your account even without a Resource Group, automatically set up monitoring for them, and show their health at a glance in a summary dashboard presented when you complete setup or open CloudWatch Application Insights. CloudWatch Application Insights is a service that helps customers easily set up monitoring and troubleshooting for their enterprise applications running on AWS resources. The new feature makes setting up monitoring for all the resources in your account a truly one-step process.

    With the addition of account auto-discovery, customers now have the choice of either setting up monitoring via the application resources in a Resource Group or having CloudWatch Application Insights set up monitoring for all the application resources in an AWS account. In either case, CloudWatch Application Insights will automatically discover the applications and resources in the resource group or the account and set up the recommended metrics, telemetry, logs, and alarms. Once the setup is complete, a new summary dashboard is presented that provides an overview of the application monitoring setup and the health status of the monitored applications and resources. In this dashboard, you’ll see summaries of your monitored assets, telemetry, components, and detected problems all in one convenient view.

    Amazon CloudWatch Application Insights can be accessed via the Insights tab in the CloudWatch left navigation panel. It creates Amazon CloudWatch Automatic Dashboards to visualize problem details, accelerate troubleshooting, and help reduce mean time to resolution and is available in all AWS commercial regions at no additional charge. Depending on setup, you may incur charges for Amazon CloudWatch monitoring resources. To learn more about Amazon CloudWatch Application Insights, please visit the service page here.

    » Amazon RDS now supports T4g instances for MySQL, MariaDB, and PostgreSQL databases.

    Posted On: Sep 16, 2021

    Amazon Relational Database Service (Amazon RDS) now supports AWS Graviton2-based T4g database (DB) instances for MySQL, MariaDB, and PostgreSQL  databases. T4g DB instances offer up to 36% better price performance over comparable current generation x86-based T3 DB instances depending on the workload characteristics.

    T4g is the next generation burstable general-purpose DB instance type that provides a baseline level of CPU performance, with the ability to burst CPU usage at any time for as long as required. Based on AWS Graviton2 processors, T4g DB instances offer a balance of compute, memory, and network resources and are ideal for DB workloads with moderate CPU usage that experience temporary spikes in use. 

    AWS Graviton2 processors are based on 64-bit Arm Neoverse cores and custom silicon designed by AWS for optimized performance and cost. AWS Graviton2 Processors deliver 7x more performance, 4x more compute cores, 5x faster memory, and 2x larger caches versus first-generation AWS Graviton Processors. Additionally, the AWS Graviton2 processors feature always-on fully encrypted DDR4 memory and 50% faster per core encryption performance.

    You can launch new T4g DB instances with a single click in the Amazon RDS Management Console or via a single command in the AWS Command Line Interface (AWS CLI). If you want to upgrade your existing Amazon RDS DB instance to T4g, you can do so on the Modify DB Instance page in the Amazon RDS Management Console or via the AWS CLI. Amazon RDS T4g DB instances are supported on MySQL version 8.0.25 and higher, MariaDB version 10.4.18 and higher, and PostgreSQL version 12.5 and higher. For more details, refer to the Amazon RDS User Guide.

    Amazon RDS T4g DB instances are available in the AWS US East (N. Virginia and Ohio), US West (Oregon, N. California), Canada (Central), South America (Sao Paulo), Asia Pacific (Mumbai, Hong Kong, Tokyo, Singapore, Sydney, Seoul), and Europe (Frankfurt, Ireland, London, Stockholm) regions. They are offered in 6 sizes, providing up to 8 vCPUs and 32 GiB of memory.

    For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. Launch new Graviton2-based DB instances for Amazon RDS for MySQL, MariaDB, and PostgreSQL in the Amazon RDS Management Console or using the AWS CLI today.

    » AWS Service Management Connector for ServiceNow supports AWS Service Catalog AppRegistry

    Posted On: Sep 16, 2021

    Starting today, customers can view their applications registered in AWS Service Catalog AppRegistry in their ServiceNow CMDB by leveraging the AWS Service Management Connector for ServiceNow. Organizations are creating, migrating, and managing applications on AWS that are associated with multiple AWS resources. Customers define applications within AppRegistry by providing a name, description, and associations to the AWS CloudFormation stacks and application metadata that constitute their application. With this integration, customers can view AWS applications in their ServiceNow system of record and operational tooling. Customers can then relate ITSM processes such as change requests, incidents, and problems at the application level in ServiceNow. This allows for streamlined impact analysis and operational investigation of AWS applications.

    This new version also enables customers to configure, request, and provision AWS Service Catalog products via ServiceNow order guides, allowing them to bundle multiple AWS services/stacks into a single service request. The connector also includes enhancements to AWS resource relationships from AWS Config into the ServiceNow CMDB. The connector retains its existing integration features for AWS Config, AWS Systems Manager OpsCenter, AWS Systems Manager Automation, and AWS Security Hub, which simplify cloud provisioning, operations, and resource management for ServiceNow administrators.

    The AWS Service Management Connector for ServiceNow is available at no charge in the ServiceNow Store. These new features are generally available in all AWS Regions where AWS Service Catalog, AWS Config, AWS Systems Manager and AWS Security Hub services are available. For more information, please visit the documentation on the AWS Service Management Connector. You can also learn more about AWS Service Catalog, AWS Config, AWS Systems Manager and AWS Security Hub.

    » Amazon SageMaker now supports inference endpoint testing from SageMaker Studio

    Posted On: Sep 16, 2021

    You can now get real-time inference results from your models hosted by Amazon SageMaker directly from Amazon SageMaker Studio.

    Amazon SageMaker is a fully managed service that provides developers and data scientists the ability to quickly build, train, and deploy machine learning (ML) models. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps. Once a model is deployed to SageMaker, customers can get predictions from their models deployed on SageMaker real-time endpoints. As part of model development and verification, customers want to ensure inferences from the model are returning as expected from the endpoint. Previously, customers used third-party tooling such as curl or wrote code in Jupyter Notebooks to invoke the endpoints for inference. Now, customers can provide a JSON payload, send the inference request to the endpoint, and receive results directly from SageMaker Studio. The results are displayed directly in SageMaker Studio and can be downloaded for further analysis.
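
    The Studio workflow mirrors what the runtime API does; for reference, the equivalent programmatic call is sketched below with a placeholder endpoint name and payload.

        import boto3

        runtime = boto3.client("sagemaker-runtime")
        response = runtime.invoke_endpoint(
            EndpointName="my-endpoint",                # placeholder
            ContentType="application/json",
            Body=b'{"instances": [[1.5, 2.0, 3.1]]}',  # placeholder JSON payload
        )
        print(response["Body"].read().decode("utf-8"))  # inference result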

    This feature is generally available in all regions where SageMaker and SageMaker Studio are available. To see where SageMaker is available, review the AWS region table. To learn more about this feature, please see our documentation. To learn more about SageMaker, visit our product page.

    » Announcing Amazon Redshift RSQL, a command line client for interacting with Amazon Redshift clusters and databases

    Posted On: Sep 15, 2021

    Amazon Redshift, a fully managed cloud data warehouse, announces the availability of Amazon Redshift RSQL, a command line client for interacting with Amazon Redshift clusters and databases. With Amazon Redshift RSQL, you can connect to an Amazon Redshift cluster, describe database objects, query data, and view query results in various output formats.

    Amazon Redshift RSQL supports the capabilities of the PostgreSQL psql command line tool, with an additional set of Amazon Redshift specific capabilities:

  • Single Sign-On (SSO) authentication using ADFS, PingIdentity, Okta, Azure AD, or other SAML/JWT-based identity providers, as well as browser-based SAML identity providers with Multi-Factor Authentication (MFA).
  • Describing properties or attributes of Amazon Redshift objects such as distribution keys, sort keys, late binding views (LBVs), materialized views, external tables in the AWS Glue catalog or Hive Metastore, external tables in Amazon RDS for PostgreSQL, Amazon Aurora PostgreSQL-Compatible Edition, Amazon RDS for MySQL (preview), and Amazon Aurora MySQL-Compatible Edition (preview), and tables shared via Amazon Redshift data sharing.
  • Enhanced control flow commands such as \IF (with \ELSEIF, \ELSE, and \ENDIF), \GOTO, and \LABEL.

    With Amazon Redshift RSQL batch mode, which executes a script passed as an input parameter, you can now run scripts that include both SQL and complex business logic. Customers with existing self-managed, on-premises data warehouses can use Amazon Redshift RSQL to easily replace existing ETL and automation scripts, such as Teradata BTEQ scripts, instead of manually re-implementing them in a procedural language.

    Amazon Redshift RSQL is available for the Linux, Windows, and macOS operating systems.

    To get started and learn more about Amazon Redshift RSQL, visit our documentation.

    » AWS Lake Formation is now available in Asia Pacific (Osaka)

    Posted On: Sep 15, 2021

    You can now use AWS Lake Formation in the Asia Pacific (Osaka) AWS region.

    AWS Lake Formation is a service that makes it easy to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

    Creating a data lake with Lake Formation is as simple as defining where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Your users can then access a centralized catalog of data which describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift, AWS Glue, Amazon QuickSight, and Amazon Athena.

    For a list of regions where AWS Lake Formation is available, see the AWS Region Table.

    » Extract custom entities from documents in their native format with Amazon Comprehend

    Posted On: Sep 15, 2021

    Amazon Comprehend, a natural-language processing (NLP) service that uses machine learning to uncover information in text, now allows you to extract custom entities from documents in a variety of formats (PDF, Word, plain text) and layouts (e.g., bullets, lists). This enables you to more easily extract insights and further automate your document processing workflows.

    Prior to this announcement, you could only use Amazon Comprehend on plain text documents, which required you to flatten documents into machine-readable text, often reducing the quality of the context within the document. This new feature combines the power of Natural Language Processing (NLP) and Optical Character Recognition (OCR) to extract custom entities from your PDF, Word, and plain text documents using the same API with no preprocessing required.

    The new custom entity recognition feature utilizes the structural context of text (text placement within a page) combined with natural language context to extract custom entities from dense text, numbered lists, and bullets. This combination also allows customers to extract discontiguous or disconnected entities that aren’t immediately part of the same span of text (for example, entities nested within a table). This new feature also removes the need for customers to build custom logic to convert PDF and Word files to flattened, plain text before using Comprehend. By natively supporting new document formats, Comprehend offers key benefits to customers in industries such as mortgage, finance, and insurance, which process diverse document formats and layouts. For example, mortgage companies can now process applications faster by extracting an applicant’s bank information, address, and co-signer name from documents such as scanned PDFs of bank statements, pay stubs, and employment verification letters.

    To train a custom entity recognition model that can be used on your PDF, Word, and plain text documents, customers need to first annotate PDF documents using a custom Amazon SageMaker Ground Truth annotation template that is provided by Amazon Comprehend. The custom entity recognition model leverages both the natural language and positional information (e.g. coordinates) of the text to accurately extract custom entities that previously may be impacted when flattening a document. For step-by-step details on how to annotate your documents, see Custom document annotation for extracting named entities in documents using Amazon Comprehend. Once you’ve finished annotating, you can train a custom entity recognition model and use it to extract custom entities from PDF and Word for batch (asynchronous) processing. To extract text and spatial locations of text from scanned PDF documents, Amazon Comprehend calls Amazon Textract on your behalf as a step before custom entity recognition. For details on how to train and use your model, see Extract custom entities from documents in their native format with Amazon Comprehend.
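
    Once the recognizer is trained, a batch job over native-format documents might be started as in this boto3 sketch; the ARNs and S3 paths are placeholders, and the DocumentReaderConfig shown requests Textract-based text extraction.

        import boto3

        comprehend = boto3.client("comprehend")
        comprehend.start_entities_detection_job(
            JobName="native-docs-entities",  # placeholder
            EntityRecognizerArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer/my-recognizer",
            LanguageCode="en",
            DataAccessRoleArn="arn:aws:iam::123456789012:role/comprehend-data-access",  # placeholder
            InputDataConfig={
                "S3Uri": "s3://my-bucket/input/",  # PDF, Word, or plain text documents
                "InputFormat": "ONE_DOC_PER_FILE",
                "DocumentReaderConfig": {
                    "DocumentReadAction": "TEXTRACT_DETECT_DOCUMENT_TEXT",
                    "DocumentReadMode": "SERVICE_DEFAULT",
                },
            },
            OutputDataConfig={"S3Uri": "s3://my-bucket/output/"},
        )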

    Custom entity recognition support for plain text, PDF, and Word documents is available directly via the AWS console and AWS CLI. To view a list of the supported AWS regions for both Comprehend and Textract, please visit the AWS Region Table for all AWS global infrastructure.

    To learn more and get started, visit the Amazon Comprehend product page, the intelligent document processing page, or our documentation.

    » Amazon Timestream is now in scope for AWS SOC Reports

    Posted On: Sep 15, 2021

    You can now use Amazon Timestream in applications that are subject to System and Organization Control (SOC) compliance. Amazon Timestream is a fast, scalable, secure, and purpose-built time series database for application monitoring, IoT, and real-time analytics workloads that can scale to process trillions of time series events per day.

    Amazon Timestream is now in scope for AWS's SOC 1, 2, and 3 reports, allowing you to get deep insight into the security processes and controls that protect customer data. AWS SOC Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance.

    In addition to meeting standards for SOC, Amazon Timestream is HIPAA eligible, ISO (9001, 27001, 27017, and 27018) and PCI DSS compliant. You can go to the Services in Scope by Compliance Program page to see a full list.

    » Amazon Transcribe now supports redaction of personally identifiable information (PII) for streaming transcriptions

    Posted On: Sep 14, 2021

    Amazon Transcribe is an automatic speech recognition service that you can use to add speech-to-text capability to your applications. Starting today, you can use Amazon Transcribe to automatically remove personally identifiable information (PII) from your streaming transcription results. Amazon Transcribe uses state-of-the-art machine learning technology to help identify sensitive information such as Social Security numbers, credit card/bank account information, and contact information (e.g., name, email address, phone number, and mailing address). With this feature, companies can provide their contact center agents with valuable transcripts of ongoing conversations while maintaining privacy standards. These transcripts can then be used to help supervisors extract real-time insights and identify calls that require attention.

    You can configure Amazon Transcribe’s PII redaction feature to identify PII or identify and redact PII for each streaming session. This provides you with the option to highlight the identified sensitive information, or highlight and mask the PII data.

    In addition, Amazon Transcribe now allows for granular PII categories. This gives you the flexibility to select the PII types you want to redact or identify from transcriptions. For example, you can protect your customers’ Social Security number or credit card details, but use other PII fields such as name, email and phone to create/update customer profiles in your CRM systems for marketing purposes. The following PII types are supported by Transcribe:  BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_NUMBER, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, PIN, EMAIL, ADDRESS, NAME, PHONE, SSN, and ALL.
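
    For the asynchronous case, a redaction-enabled job can be configured roughly as follows; the job name and S3 locations are placeholders, and the commented-out granular category field is an assumption about the request shape rather than a confirmed parameter.

        import boto3

        transcribe = boto3.client("transcribe")
        transcribe.start_transcription_job(
            TranscriptionJobName="redacted-call-001",  # placeholder
            LanguageCode="en-US",
            Media={"MediaFileUri": "s3://my-bucket/calls/call-001.wav"},  # placeholder
            OutputBucketName="my-transcripts-bucket",                     # placeholder
            ContentRedaction={
                "RedactionType": "PII",
                "RedactionOutput": "redacted",  # or "redacted_and_unredacted"
                # Granular category selection (assumed field name):
                # "PiiEntityTypes": ["SSN", "CREDIT_DEBIT_NUMBER"],
            },
        )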

    PII redaction is available now for US English with both asynchronous and streaming transcription jobs. This feature is supported in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), and EU (London). You will incur additional charges as described in Automatic content redaction pricing. To learn more, see the “Introducing PII identification and redaction in streaming transcriptions using Amazon Transcribe” post and the Amazon Transcribe documentation.

    » Amazon EC2 T3 instances are now supported on EC2 Dedicated Hosts in multiple AWS Regions

    Posted On: Sep 14, 2021

    Amazon Web Services (AWS) announces the general availability of the Amazon EC2 T3 instances on EC2 Dedicated Hosts, designed to provide the most cost-efficient way for customers to run their eligible Bring Your Own Licenses (BYOL) software on AWS. With T3 Dedicated Hosts, customers can run up to 4 times more instances per host than comparable EC2 general purpose Dedicated Hosts, and reduce their infrastructure footprint and license costs by up to 70%. T3 Dedicated Hosts are best suited for running BYOL software with low-to-moderate CPU utilization and eligible per-socket, per-core or per-VM software licenses including Microsoft Windows Desktop, Windows Server and SQL Server, and Oracle Database.

    T3 Dedicated Hosts support general-purpose burstable T3 instances in Standard mode only. The T3 instances running on EC2 Dedicated Hosts share CPU resources to provide a baseline CPU performance with the ability to burst to a higher level when needed. This enables T3 Dedicated Hosts, which have 48 cores, to support up to 192 instances per host and offer 7 instance sizes ranging from t3.nano to t3.2xlarge. Compared to other EC2 Dedicated Hosts, T3 Dedicated Hosts also support smaller instance sizes: t3.nano, t3.micro, t3.small and t3.medium, enabling customers to run small-sized databases and VMs.
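
    Allocating a T3 Dedicated Host via boto3 could look like this sketch; note that burstable hosts are allocated by instance family rather than by a single instance type, and the Availability Zone is a placeholder.

        import boto3

        ec2 = boto3.client("ec2")
        ec2.allocate_hosts(
            AvailabilityZone="us-east-1a",  # placeholder
            InstanceFamily="t3",            # allows mixing t3.nano through t3.2xlarge on one host
            Quantity=1,
        )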

    T3 Dedicated Hosts are generally available today in the AWS US East (Northern Virginia, Ohio), US West (Northern California, Oregon), Europe (Ireland, Frankfurt, London, Milan, Paris, Stockholm), South America (Sao Paulo), Canada (Central), Africa (Cape Town), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain), China (Ningxia and Beijing), AWS GovCloud (US-West), and AWS GovCloud (US-East) Regions. Customers can purchase the new instances via Savings Plans, Dedicated Host Reservations, or On-Demand. To get started with T3 Dedicated Hosts, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the EC2 Dedicated Hosts page, visit the AWS forum for EC2, or connect with your usual AWS Support contacts.

    » Amazon CodeGuru Reviewer enhances security findings generated by GitHub Action by adding severity fields and CWE tags

    Posted On: Sep 13, 2021

    Today, we are announcing the enhancement of security findings generated by CodeGuru Reviewer’s GitHub action by adding severity fields and CWE (Common Weakness Enumerations) tags. Customers can use these new features to sort, filter, and prioritize their backlog of security vulnerabilities within GitHub’s user interface.

    Amazon CodeGuru Reviewer is a developer tool that analyzes your code and provides intelligent recommendations for improving your code’s quality and security. CodeGuru Reviewer recently launched a CI/CD experience for GitHub Actions which allows developers to receive security findings as a step within their GitHub CI workflows. The recommendations generated by CodeGuru Reviewer’s GitHub Action now have either a low, medium, high, or critical severity, in addition to a CWE tag, which allows customers to dive deeper into the ramifications of their findings and fix security vulnerabilities.

    You can get started using the CodeGuru Reviewer’s GitHub Action by visiting the GitHub Marketplace page.

    To learn more about CodeGuru Reviewer, take a look at the Amazon CodeGuru page. To contact the team visit the Amazon CodeGuru developer forum.

    » AWS Firewall Manager now supports AWS WAF rate-based rules

    Posted On: Sep 13, 2021

    AWS Firewall Manager now enables customers to centrally deploy AWS WAF rate-based rules across accounts in their organization. An AWS WAF rate-based rule allows customers to track the rate of requests from each originating IP address and trigger a rule action on an IP once it goes over the limit. With this launch, security administrators on AWS Firewall Manager can now deploy rate-based rules across accounts, mandating request limits per account, using a Firewall Manager security policy for AWS WAF.

    To get started, you can configure an AWS WAF rule group containing the rate-based rule(s) using your Firewall Manager security administrator account, and reference it in the Firewall Manager security policy for AWS WAF, along with the accounts and resources where you want the rules to be applied. The Firewall Manager policy ensures the rate-based rules are consistently enforced, even as new accounts and resources are created across an organization. Each rate-based rule is applied to the AWS WAF web access control list (web ACL) in each account, counting incoming web requests per account in a trailing, continuously updated 5-minute time span. If an IP address breaches the limit specified in the rule, AWS WAF applies the rule action to additional requests from the IP address until the request rate falls below the limit.
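
    As an illustration, the rate-based rule such a policy references could be defined as in the boto3 sketch below; the names, capacity, and 2,000-request limit are placeholders.

        import boto3

        wafv2 = boto3.client("wafv2")
        visibility = {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rate-limit-demo",
        }
        wafv2.create_rule_group(
            Name="rate-limit-group",  # placeholder
            Scope="REGIONAL",
            Capacity=10,  # rough capacity estimate for a single rate-based rule
            Rules=[{
                "Name": "limit-per-ip",
                "Priority": 0,
                "Statement": {
                    # Block an IP once it exceeds 2,000 requests in the trailing 5 minutes.
                    "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
                },
                "Action": {"Block": {}},
                "VisibilityConfig": visibility,
            }],
            VisibilityConfig=visibility,
        )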

    Firewall Manager is a security management service that allows customers a central place to configure and deploy firewall rules from, across accounts and resources in their organization. With Firewall Manager, customers can deploy and monitor rules for AWS WAF, AWS Shield Advanced, VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall across their entire organization. Firewall Manager ensures that all firewall rules are consistently enforced, even as new accounts and resources are created.

    To get started, see AWS Firewall Manager documentation for more details and AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features and pricing, please visit the website.

    » Contact Lens for Amazon Connect adds support for 8 languages

    Posted On: Sep 13, 2021

    Contact Lens for Amazon Connect has now launched both post-call and real-time analytics support for 3 new languages, Korean (South Korea), Japanese (Japan), and Mandarin (Mainland China). In addition, 5 languages, French (Canada), French (France), Portuguese (Brazil), German (Germany), and Italian (Italy), that were already supported for post-call analysis, are now also supported for real-time analytics. With this launch, Contact Lens now supports 21 languages for post-call analytics and 12 languages for both post-call and real-time analytics.

    Contact Lens, a feature of Amazon Connect, enables you to better understand the sentiment and trends of customer conversations to identify crucial company and product feedback. In addition, with real-time capabilities, you can get alerted to issues during live customer calls and can deliver proactive assistance to agents while calls are in progress, improving customer satisfaction.

    With Contact Lens for Amazon Connect, you only pay for what you use based on the number of minutes used. There are no required up-front payments, long-term commitments, or minimum monthly fees. Please visit our website to learn more about Contact Lens for Amazon Connect.

    » AWS CodeBuild now supports a small ARM machine type

    Posted On: Sep 13, 2021

    Arm-based workloads on AWS CodeBuild can now run on an additional AWS Graviton2 machine type suited to less resource-intensive workloads.

    In November 2019, CodeBuild launched support for native Arm builds on the first generation of AWS Graviton processors. Support for this platform allows customers to build and test on Arm without the need to emulate or cross-compile. In February 2021, CodeBuild introduced support for Graviton2 processors, which deliver a leap in performance and capabilities over first-generation AWS Graviton processors.

    Previously, CodeBuild customers targeting Arm could use an 8 vCPU machine to run their workloads. Now, customers can use a 2 vCPU Graviton2 machine, which is suited to running less resource-intensive workloads or balancing speed with cost.
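
    Selecting the smaller machine is an environment setting on the build project; in the boto3 sketch below, the project name, source location, and role ARN are placeholders.

        import boto3

        codebuild = boto3.client("codebuild")
        codebuild.create_project(
            name="arm-small-build",  # placeholder
            source={"type": "GITHUB", "location": "https://github.com/example/repo.git"},
            artifacts={"type": "NO_ARTIFACTS"},
            environment={
                "type": "ARM_CONTAINER",
                "computeType": "BUILD_GENERAL1_SMALL",  # the new 2 vCPU Graviton2 option
                "image": "aws/codebuild/amazonlinux2-aarch64-standard:2.0",
            },
            serviceRole="arn:aws:iam::123456789012:role/codebuild-role",  # placeholder
        )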

    CodeBuild’s support for Arm using Graviton2 is available in: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Europe (Frankfurt). To learn more about CodeBuild’s support for Arm, please visit our documentation. To learn more about how to get started, visit the AWS CodeBuild product page.

    » Amazon SES now supports emails with a message size of up to 40MB

    Posted On: Sep 13, 2021

    Amazon Simple Email Service (Amazon SES) customers can now request a limit increase to send and receive emails with a message size of up to 40MB.

    With this launch, the default message size limit in Amazon SES remains at 10MB for email sending and 30MB for email receiving; however, customers can now request a limit increase to send or receive email messages of up to 40MB (including the email text, images, attachments, and the MIME encoding). This limit increase can be requested via the AWS Support Center. To learn more about this process, see this page in the Amazon SES Developer Guide.

    Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page.

    » Amazon Aurora Serverless v1 supports configurable autoscaling timeout

    Posted On: Sep 13, 2021

    Amazon Aurora Serverless v1 now supports setting a timeout for autoscaling. Based on your application’s needs, you can specify a timeout between 1 and 10 minutes, with a default value of 5 minutes. Aurora Serverless v1 looks for a period of no activity at which to initiate a scaling operation. If the timeout is reached without finding such a point, you can choose to stay at the current capacity or force the capacity change. Learn more about autoscaling in the Aurora Serverless v1 documentation. To set the timeout, visit the AWS Management Console or use the latest AWS SDK or CLI.
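
    Via boto3, the setting lives in the cluster’s ScalingConfiguration; in the sketch below the cluster identifier is a placeholder.

        import boto3

        rds = boto3.client("rds")
        rds.modify_db_cluster(
            DBClusterIdentifier="my-serverless-cluster",  # placeholder
            ScalingConfiguration={
                "SecondsBeforeTimeout": 600,                  # 10-minute autoscaling timeout
                "TimeoutAction": "ForceApplyCapacityChange",  # or "RollbackCapacityChange"
            },
        )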

    Aurora Serverless v1 is an on-demand, autoscaling configuration for Amazon Aurora, where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs. It is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. Read about Aurora Serverless v1 on the product page and in the Aurora documentation.

    » Announcing general availability of Amazon EC2 VT1 instances - the first EC2 instance optimized for video transcoding

    Posted On: Sep 13, 2021

    Amazon Web Services (AWS) announces the general availability of Amazon EC2 VT1 instances powered by Xilinx® Alveo™ U30 media accelerators for video transcoding. VT1 instances are AWS’s first EC2 instances that feature hardware acceleration for video transcoding and are optimized for workloads such as live broadcast, video conferencing, and just-in-time transcoding. These instances deliver up to 30% lower cost per stream than Amazon EC2 G4dn GPU-based instances and up to 60% lower cost per stream than Amazon EC2 C5 CPU-based instances.

    EC2 VT1 instances feature up to 8 Xilinx® Alveo™ U30 media accelerators, 192 GB of system memory, 25 Gbps of networking throughput, and 19 Gbps of EBS bandwidth as well as 2nd generation custom Intel Xeon Scalable processors. They support H.264 and H.265 video formats with resolutions up to 4K UHD. They are ideal for customers who are looking to lower their transcoding cost-per-stream or optimize their network bit-rate.

    Customers can get started quickly with VT1 instances using the Xilinx Video SDK, which is integrated with FFmpeg, making it easy to migrate existing applications to VT1 instances. Customers can launch VT1 instances using Xilinx AMIs available on AWS Marketplace that contain support for U30 accelerators, or use Amazon Elastic Kubernetes Service (Amazon EKS) or Amazon Elastic Container Service (Amazon ECS) for containerized applications.

    VT1 instances are now available in the AWS US East (N. Virginia), US West (Oregon), Europe (Ireland) and Asia Pacific (Tokyo) regions. Customers can purchase VT1 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plan. VT1 instances will also be available on AWS Outposts soon for deployments in customers’ on-premises data centers, colocation facilities, or edge locations.

    To get started with Amazon EC2 VT1 instances, visit the AWS Management Console. To learn more, visit our product details page.

    » Amazon Connect adds near real-time insights into voice call, chat, and task activity in the Canada (Central) region

    Posted On: Sep 13, 2021

    Amazon Connect now allows you to subscribe to a near real-time stream of contact (voice call, chat, and task) events (e.g., a call is queued) in your Amazon Connect contact center in the Canada (Central) region. These events include when a voice call, chat, or task is initiated, queued to be assigned to an agent, connected to an agent, transferred to another agent or queue, and disconnected. Contact events can be used to create analytics dashboards to monitor and track contact activity, to integrate into workforce management (WFM) solutions to better understand contact center performance, or to build applications that react to events (e.g., a call is disconnected) in real time. Amazon Connect contact events are published via Amazon EventBridge and can be set up in a couple of clicks by going to the Amazon EventBridge AWS console and creating a new rule.
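
    Subscribing amounts to a single EventBridge rule; in this sketch the rule name and target are placeholders, and the detail-type string is an assumption drawn from the contact events documentation.

        import json

        import boto3

        events = boto3.client("events")
        events.put_rule(
            Name="connect-contact-events",  # placeholder
            EventPattern=json.dumps({
                "source": ["aws.connect"],
                "detail-type": ["Amazon Connect Contact Event"],  # assumed detail-type
            }),
        )
        events.put_targets(
            Rule="connect-contact-events",
            Targets=[{
                "Id": "contact-events-queue",
                "Arn": "arn:aws:sqs:ca-central-1:123456789012:contact-events",  # placeholder
            }],
        )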

    Amazon Connect contact events do not incur additional Amazon Connect charges. You may incur charges for Amazon EventBridge usage; please see Amazon EventBridge Pricing for more information. To learn more, see our help documentation, or visit the Amazon Connect website.

    » AWS Health Aware (AHA) is now available for Organizational and Personal AWS Accounts to customize Health Alerts

    Posted On: Sep 13, 2021

    AWS Health Aware (AHA) is an incident management and communication framework that ingests proactive and real-time alerts from AWS Health and delivers them to a customer’s preferred communication channels. Customers using AWS Organizations can get aggregated, active, account-level alerts from impacted accounts across their organization. Alerts can be configured for endpoints such as Slack, Microsoft Teams, Amazon Chime, and email, and AHA can also be integrated with a broad range of other endpoints during configuration. These alerts are designed to give customers event visibility and guidance to help quickly diagnose and resolve issues impacting their applications or workloads.

    You can find more information here.

    » Amazon Aurora now supports AWS Graviton2-based X2g instances

    Posted On: Sep 10, 2021

    Amazon Aurora now supports AWS Graviton2-based X2g database instances. Customers can now get double the memory per vCPU compared to R6g instances. X2g instances provide the highest memory per vCPU at the lowest cost per GiB of memory for Amazon Aurora. X2g instances are available when using both Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition.

    X2g instances are the next generation of memory-optimized instances for memory-intensive applications, offering double the memory per vCPU compared to R6g instances. The largest size, db.x2g.16xlarge, provides 1 TiB of memory, allowing customers to scale up and consolidate their workloads onto fewer instances.

    You can launch new instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton2 requires a simple instance type modification, using the same steps as any other instance modification. The X2g database instances are supported on Aurora MySQL 2.09.2 and higher and 2.10.0 and higher, and on Aurora PostgreSQL 11.9 and higher, 12.4 and higher, and 13.3. For more details, refer to the documentation.
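
    For illustration, a minimal boto3 sketch of the instance class modification described above; the instance identifier is a placeholder:

        import boto3

        rds = boto3.client("rds")

        # Modify an existing Aurora instance to the Graviton2-based X2g class.
        rds.modify_db_instance(
            DBInstanceIdentifier="my-aurora-instance",  # placeholder identifier
            DBInstanceClass="db.x2g.16xlarge",
            ApplyImmediately=True,  # or omit to apply during the next maintenance window
        )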

    For complete information on pricing and regional availability, please refer to the Amazon Aurora pricing page. Review our technical documentation for more details.

    » Amazon EC2 D3 instances with dense local HDD storage now available in India (Mumbai) Region

    Posted On: Sep 10, 2021

    Starting today, Amazon EC2 D3 instances, the latest generation of the dense HDD-storage instances, are available in the Asia Pacific (Mumbai) Region.

    Amazon EC2 D3 instances provide up to 48 TB of local HDD storage and are powered by 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors with a sustained all-core turbo frequency of up to 3.1 GHz. D3 instances provide up to 2.5x higher networking speed and 45% higher disk throughput compared to D2 instances. These instances are an ideal fit for workloads such as distributed/clustered file systems, big data and analytics, and high-capacity data lakes. With D3 instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD-storage workloads.

    D3 instances are available in 4 sizes ranging from 4 to 32 vCPUs, 32 to 256 GiB of memory, and 6 to 48 TB of local HDD storage. D3 instances include up to 25 Gbps of network bandwidth and up to 4,600 MiB/s of disk throughput, and are optimized for Amazon Elastic Block Store (EBS) access.

    With this regional expansion, Amazon EC2 D3 instances are now available in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland, Frankfurt, London), Asia Pacific (Tokyo, Singapore, Sydney, Mumbai), and AWS GovCloud (US-West) Regions. D3 instances are available for purchase with Savings Plans, Reserved Instances, Convertible Reserved, On-Demand, and Spot instances, or as Dedicated instances.

    To get started with D3 instances, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3 instances page, visit the AWS forum for EC2, or connect with your usual AWS Support contacts.

    » AWS ParallelCluster now supports cluster management through Amazon API Gateway

    Posted On: Sep 10, 2021

    AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easier for scientists, researchers, and IT administrators to deploy and manage high performance computing (HPC) clusters on Amazon Web Services. HPC clusters are traditionally collections of tightly coupled compute, storage, and networking resources that enable customers to run large-scale scientific and engineering workloads.

    With its latest release, we have introduced a major update to AWS ParallelCluster. Significant feature enhancements in this version include:

  • Support for cluster management via Amazon API Gateway: Customers can now manage and deploy clusters through HTTP endpoints with Amazon API Gateway. This opens up new possibilities for scripted or event-driven workflows, such as creating new clusters when datasets are ingested or when existing infrastructure is insufficient to meet dynamic compute needs. These APIs also make it easier for builders to use AWS ParallelCluster as a building block in their HPC workflows and create their own extensible solutions, services, and customized front-end interfaces. AWS ParallelCluster’s command-line interface (CLI) has also been redesigned for compatibility with this API and includes a new JSON output option. This new functionality makes it possible for customers to implement similar building block capabilities using the CLI as well.
  • Improved custom AMI creation: Customers now have access to a more robust process for creating and managing custom AMIs using EC2 Image Builder. Customers can specify build components to install software, apply security hardening steps, or modify operating system configuration in a replicable way. Custom AMIs can now be managed through a separate AWS ParallelCluster configuration file, and can be created using the pcluster build-image command in the AWS ParallelCluster command-line interface.
  • Robust cluster configuration: Cluster configuration files have been retooled to make them more robust and easier to maintain. You can learn more about the changes to our configuration files here.
  • For more detail, you can find the complete release notes for the latest version of AWS ParallelCluster here.

    » Amazon Aurora now supports AWS Graviton2-based T4g instances

    Posted On: Sep 10, 2021

    Amazon Aurora now supports AWS Graviton2-based T4g database instances. Graviton2 T4g database instances deliver a performance improvement of up to 49% over comparable current generation x86-based database instances. You can launch these database instances when using Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition.

    T4g instances provide a baseline level of CPU performance, with the ability to burst CPU usage at any time, for as long as required. They offer a balance of compute, memory, and network resources, and are ideal for database workloads with moderate CPU usage that experience temporary spikes in use. Amazon Aurora T4g instances are configured for Unlimited mode, which means they can burst beyond the baseline, with an additional charge if average CPU utilization exceeds the baseline over a rolling 24-hour window.

    You can launch new instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton2 requires a simple instance type modification, using the same steps as any other instance modification. The T4g database instances are supported on Aurora MySQL 2.09.2 and higher and 2.10.0 and higher, and on Aurora PostgreSQL 11.9 and higher, 12.4 and higher, and 13.3. For more details, refer to the documentation.

    For complete information on pricing and regional availability, please refer to the Amazon Aurora pricing page. Review our technical documentation for more details.

    » Amazon EC2 Hibernation adds support for Red Hat Enterprise Linux 8, CentOS 8, and Fedora 34

    Posted On: Sep 9, 2021

    Amazon EC2 now supports Hibernation for On-Demand Nitro-based instances running Red Hat Enterprise Linux (RHEL) version 8, CentOS version 8, and Fedora version 34 onwards. Hibernation allows you to pause your EC2 Instances and resume them at a later time, rather than fully terminating and restarting them. Resuming your instance lets your applications continue from where they left off so that you don’t have to restart your OS and application from scratch. Hibernation is useful for cases where rebuilding application state is time-consuming (e.g., developer desktops) or an application’s start-up steps can be prepared in advance of a scale-out.

    For RHEL version 8, CentOS version 8, and Fedora version 34 onwards, Hibernation is supported for On-Demand Instances running on Nitro-based instances (C5, C5d, M5, M5a, M5ad, M5d, R5, R5a, R5ad, R5d, T3, and T3a) with up to 150 GB of RAM. Nitro-based instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that enables faster innovation and enhanced security.
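
    As a brief sketch, enabling hibernation at launch with boto3; the AMI ID is a placeholder, and the root EBS volume must be encrypted and sized to hold RAM contents:

        import boto3

        ec2 = boto3.client("ec2")

        # Launch a hibernation-enabled instance (root volume must be encrypted).
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder RHEL 8 / CentOS 8 / Fedora 34 AMI
            InstanceType="m5.large",
            MinCount=1,
            MaxCount=1,
            HibernationOptions={"Configured": True},
            BlockDeviceMappings=[{
                "DeviceName": "/dev/sda1",
                "Ebs": {"VolumeSize": 100, "Encrypted": True},  # sized to hold RAM contents
            }],
        )

        # Later, hibernate instead of stopping:
        # ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)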

    Hibernation is available in all commercial AWS Regions except Asia Pacific (Osaka).

    Hibernation is available through AWS CloudFormation, the AWS Management Console, the AWS SDKs, AWS Tools for PowerShell, or the AWS Command Line Interface (CLI). To learn more about Hibernation, see our FAQs, technical documentation, and blog.

    » Amazon Braket introduces verbatim compilation for quantum circuits

    Posted On: Sep 9, 2021

    Amazon Braket, the AWS quantum computing service, now offers greater control over how quantum circuits are executed on quantum computers. With the new verbatim compilation feature, customers can now specify their circuits to run exactly as defined without any modification during the compilation process.

    When developing quantum algorithms, users program primarily in abstract quantum circuits that specify a collection of gates to be executed. Quantum circuit compilation transforms an abstract quantum circuit into a compiled circuit that is optimized for a specific type of quantum hardware. During this optimization, the original circuit undergoes a compilation process that transforms the circuit through qubit allocation, reordering, and mapping to the native gates supported by the hardware. Researchers and quantum algorithm specialists who are focused on hardware benchmarking or on developing error-mitigation protocols need the ability to specify exactly the gates and circuit layouts that will be executed on their chosen quantum hardware. The new verbatim compilation capability gives users direct control over the compilation process by disabling certain optimization steps, thereby ensuring that their circuits are executed exactly as designed.

    Amazon Braket users can mark entire circuits, or parts of them, to be excluded from compilation by using a ‘verbatim box’ within the Amazon Braket SDK. This verbatim compilation feature is available for Rigetti quantum computers, which are currently offered on Amazon Braket in the US West (N. California) Region; a short SDK sketch follows the resource list below. To learn more and get started, see the following resources:

  • Amazon Braket web page 
  • Amazon Braket Console
  • The example notebook “Verbatim Compilation” in Amazon Braket tutorials on Github
  • Amazon Braket documentation
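
    As referenced above, a minimal sketch using the Braket SDK’s verbatim box; the gates inside the box are illustrative and assume they are native to the target device:

        from braket.circuits import Circuit

        # Part of the circuit that must run exactly as written (illustrative native gates).
        verbatim = Circuit().rx(0, 0.15).cz(0, 1)

        circuit = Circuit()
        circuit.h(0)                        # this part may still be compiled
        circuit.add_verbatim_box(verbatim)  # this part bypasses compiler optimizations

        print(circuit)
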
    » Amazon EC2 announces increases for instance network bandwidth

    Posted On: Sep 9, 2021

    Amazon EC2 now offers increased instance network bandwidth for traffic from an AWS Region destined to an Internet Gateway, Direct Connect, or other AWS Regions for the current generation of instances.

    Prior to this announcement, EC2 instance network bandwidth for traffic from an AWS Region destined to an Internet Gateway, Direct Connect, or other AWS Regions was limited to 5 Gbps for all instances. With this launch, current generation EC2 instances with 32 vCPUs or more can now drive up to 50% of their instance network bandwidth to these destinations. For example, the c5n.9xlarge has an instance network bandwidth of 50 Gbps; with this launch, it can drive 25 Gbps to an Internet Gateway, Direct Connect, and other AWS Regions. AWS customers can leverage this increased instance network bandwidth to speed up data migration via Direct Connect, access services across Regions, or serve content to internet destinations.

    EC2 instance network bandwidth increase from AWS region to Internet Gateway, Direct Connect and other AWS regions is now available in all AWS Commercial and GovCloud (US) regions.

    To learn more about EC2 instance network bandwidth, see AWS EC2 instance network bandwidth.

    » AWS Amplify announces command hooks to execute custom scripts when running Amplify CLI commands

    Posted On: Sep 9, 2021

    With today’s launch, customers can execute custom scripts before, during, and after Amplify CLI commands (“amplify push”, “amplify api gql-compile”, and more). This allows you to extend Amplify’s best-practice defaults to meet your organization’s specific security guidelines and operational requirements. AWS Amplify CLI is a command line toolchain that helps frontend web and mobile developers create cloud backends and connect them to their apps for common use cases. To create a command hook, customers place bash shell scripts into the “amplify/hooks” folder, named after the associated Amplify CLI command, such as “post-push.sh” or “pre-add-function.sh”. Command hooks support bash scripts by default, but customers can extend them with their preferred scripting runtime.

    Customers can trigger validation checks before, during, and after an Amplify CLI command is executed. For example, organizations with multiple team members can enforce that all team members use the same Amplify CLI version. Customers can also conditionally interrupt an Amplify backend deployment; for example, an IT administrator can enforce that deployment only begins if a custom credential scanner script passes successfully. Finally, customers can use command hooks to execute scripts after a command completes successfully; for example, after an “amplify push”, customers can trigger a script that automatically cleans up build artifacts.

    To get started with Amplify CLI’s new command hooks capability, check out our documentation.

    » Amazon CloudWatch Application Insights and AWS Systems Manager Application Manager combine to offer an integrated application management experience

    Posted On: Sep 9, 2021

    Manage and monitor your applications on AWS seamlessly and easily with new service integrations for AWS Systems Manager Application Manager and CloudWatch Application Insights. AWS Systems Manager Application Manager is a capability of AWS Systems Manager that brings together operations information from multiple AWS services so customers can investigate and remediate issues. CloudWatch Application Insights is a service that helps customers easily set up monitoring for, and troubleshoot, their enterprise applications running on AWS resources. Together, the two services provide a combined view of your application health and the ability to dive deep into problems to quickly resolve issues.

    With this service integration, customers can now set up monitoring of their applications with one click directly from the Application Manager console, enabling them to see the monitoring summary and problem details for their monitored applications. The service discovers the resources in your application resource group and configures the recommended metrics, telemetry, logs, and alarms. If you want to dive deep into an application issue, you can click through on the problem, which takes you directly to CloudWatch Application Insights for further analysis.

    Application Manager can be found in Systems Manager in the left navigation panel under Application Management. Amazon CloudWatch Application Insights can be accessed directly via the Insights tab in the CloudWatch left panel. The integrated application management services are available in all AWS commercial regions. To learn more about the services visit the respective product pages for Systems Manager and Amazon CloudWatch Application Insights.

    » Amazon Lex launches support for Korean

    Posted On: Sep 9, 2021

    We are delighted to announce that Amazon Lex now supports Korean. Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides deep learning-powered automatic speech recognition (ASR) for converting speech to text and natural language understanding (NLU) capabilities to recognize the intent of the text, enabling you to build applications with lifelike conversational interactions. Now you can deliver a robust and localized conversational experience that understands Korean. You can also respond to users with natural-sounding Amazon Polly Korean voices and provide a fully localized voice experience.

    The conversational interfaces built on Amazon Lex can be applied to a variety of use cases such as interactive voice response systems, self-service chatbots, and application bots. Amazon Lex also provides pre-defined slots that are localized to capture information such as common Korean names and cities.

    Korean is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation page.

    » AWS Cloud Map now available in the AWS GovCloud (US) Regions

    Posted On: Sep 9, 2021

    AWS Cloud Map is now available in both AWS GovCloud (US) Regions.

    AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, you can define custom names for your application resources, such as Amazon Elastic Container Services (Amazon ECS) tasks, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Simple Storage Service (Amazon S3) buckets, Amazon DynamoDB tables, or any other cloud resource. You can then use these custom names to discover the location and metadata of cloud resources from your applications using AWS SDK and authenticated API queries. AWS Cloud Map is a highly available, managed service with rapid change propagation.
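
    For illustration, a hedged boto3 sketch of discovering registered instances by custom name; the namespace and service names are placeholders:

        import boto3

        sd = boto3.client("servicediscovery")

        # Look up healthy instances registered under a custom name.
        response = sd.discover_instances(
            NamespaceName="example.local",  # placeholder namespace
            ServiceName="payments",         # placeholder service
            HealthStatus="HEALTHY",
        )
        for inst in response["Instances"]:
            print(inst["InstanceId"], inst.get("Attributes", {}))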

    Visit the AWS Region Table to see all AWS Regions where AWS Cloud Map is available. Please visit our product page to learn more about AWS Cloud Map.

    » Ability to customize reverse DNS for Elastic IP addresses now available in additional regions for Virtual Private Cloud customers

    Posted On: Sep 9, 2021

    Starting today, the ability to customize reverse DNS for Elastic IP addresses for Virtual Private Clouds (VPC) is available in 16 additional regions. These AWS Regions are US East (N. Virginia), US West (N. California, Oregon), Asia Pacific (Hong Kong, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Paris, Stockholm), Middle East (Bahrain), and South America (São Paulo). With today’s launch, this feature is available in all commercial regions.

    This feature makes it easier to set up reverse Domain Name System (DNS) lookup for your Elastic IP addresses and improves your email deliverability. A reverse DNS lookup for an IP address returns its domain name and is commonly used by email services to filter out spam. This release helps improve the deliverability of email sent from EC2 in all commercial regions by enabling you to set reverse DNS lookup with just a few clicks and meet a key spam-filter requirement.

    Email services often associate a generic or mismatched domain name on an email’s source IP address with spammers and drop such emails. Using this feature, you can assign your domain name to an Elastic IP address, enabling email services to validate your email’s domain name and, as a result, help improve your email deliverability.

    To get started, use the AWS CLI, SDK or Console to set reverse DNS for an EIP. There is no additional charge to set reverse DNS for an EIP. For more information about this feature, visit the Elastic IP documentation.
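
    As a sketch of the CLI/SDK path, the boto3 calls below set and verify the reverse DNS record for an EIP; the allocation ID and domain are placeholders, and the domain must already have a forward DNS record pointing at the address:

        import boto3

        ec2 = boto3.client("ec2")

        # Associate a reverse DNS domain name with an Elastic IP allocation.
        ec2.modify_address_attribute(
            AllocationId="eipalloc-0123456789abcdef0",  # placeholder allocation ID
            DomainName="mail.example.com",              # placeholder; needs a matching A record
        )

        # Check the current attribute value.
        attrs = ec2.describe_addresses_attribute(
            AllocationIds=["eipalloc-0123456789abcdef0"],
            Attribute="domain-name",
        )
        print(attrs["Addresses"])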

    » AWS announces enhancements to the AWS Marketplace Consulting Partner Private Offer self-service experience

    Posted On: Sep 9, 2021

    Today, AWS Marketplace announced a new feature that enables Consulting Partners to easily view and create offers from Independent Software Vendors' (ISV) resell authorization opportunities in the AWS Marketplace Management Portal (AMMP). With this launch, Consulting Partners can now review all resell opportunities ISVs have granted them and quickly create a Consulting Partner Private Offer (CPPO) from the opportunity. A CPPO allows customers to purchase software solutions in AWS Marketplace directly from Consulting Partners with custom terms and pricing not publicly available. With the improved transparency of resell opportunities and a streamlined private offer creation process, Consulting Partners can reduce operational load while accelerating deal delivery.

    From the Partner tab in AMMP, Consulting Partners can now see a list of resell opportunities from authorizing ISVs, select their preferred opportunity and create a private offer in three simple steps.

    At launch, all registered AWS Marketplace Consulting Partners can view, search, and initiate a CPPO directly from the Partner tab in AMMP. To learn more about this feature or how to get authorized as a Consulting Partner, please review the AWS Marketplace Seller Guide documentation.

    » Amazon EC2 I3en Instances are Now Available in AWS Regions in the Middle East (Bahrain), South Africa (Cape Town), and Europe (Milan)

    Posted On: Sep 9, 2021

    Starting today, Amazon EC2 I3en instances are available in the AWS Middle East (Bahrain), Africa (Cape Town), and Europe (Milan) Regions. I3en instances offer up to 60 TB of low-latency NVMe SSD instance storage and up to 50% lower cost per GB over I3 instances.

    I3en instances are designed for data-intensive workloads such as relational and NoSQL databases, distributed file systems, search engines, and data warehouses that require high random I/O access to large amounts of data residing on instance storage. I3en instances are powered by AWS-custom Intel® Xeon® Scalable processors with 3.1 GHz sustained all-core turbo performance, provide up to 100 Gbps of networking bandwidth, and come in seven instance sizes with storage options from 1.25 to 60 TB.

    With this regional expansion, I3en instances are now available in the US East (N. Virginia, Ohio), US West (N. California, Oregon), AWS GovCloud (US-East, US-West), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Singapore, Hong Kong, Tokyo, Seoul, Sydney, Mumbai), Canada (Central), South America (Sao Paulo), Africa (Cape Town), and Middle East (Bahrain) Regions, as well as the China (Beijing) Region, operated by Sinnet, and the China (Ningxia) Region, operated by NWCD. Customers can purchase I3en instances as On-Demand, Reserved, or Spot Instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit our product page.

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports Index Transforms

    Posted On: Sep 8, 2021

    Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports index transforms, which enable customers to extract significant information from large data sets and store summarized views in new indices. Customers can derive new insights, perform further analysis, and visualize trends from the new summary index.

    Index transforms are similar to “materialized views” in databases and provide an interactive way to aggregate and store summarized views from large data sets so that you can visualize and analyze the data more easily. For example, you can summarize the annual sales index with multiple fields using transforms to organize the data by region, quarter, and then revenue. Using OpenSearch Dashboards or the transforms API, customers can schedule and run index transforms jobs to create the summarized indices for analyzing trends and patterns.
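
    As a hedged sketch of the transforms API, the request below creates a transform job over a hypothetical sales index; the endpoint path and body follow our understanding of the OpenSearch transform plugin, but treat the field names and schema details as illustrative and verify them against the documentation:

        import json

        import requests

        DOMAIN = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint
        AUTH = ("master-user", "master-password")                # placeholder credentials

        transform = {
            "transform": {
                "enabled": True,
                "schedule": {"interval": {"period": 1, "unit": "Minutes",
                                          "start_time": 1631232000000}},
                "source_index": "sales",          # placeholder source index
                "target_index": "sales_summary",  # summarized view
                "page_size": 1000,
                "groups": [{"terms": {"source_field": "region", "target_field": "region"}}],
                "aggregations": {"revenue": {"sum": {"field": "revenue"}}},
            }
        }

        resp = requests.put(
            f"{DOMAIN}/_plugins/_transform/sales-summary",
            auth=AUTH,
            headers={"Content-Type": "application/json"},
            data=json.dumps(transform),
        )
        print(resp.json())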

    The index transforms feature is powered by OpenSearch, an Apache 2.0-licensed distribution of Elasticsearch. To learn more about OpenSearch and index transforms, visit the project website. Index transforms are available on all domains running OpenSearch 1.0 or greater. To learn more, see the documentation.

    Index transforms is now available for all Amazon OpenSearch Service domains across 25 Regions globally: US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-Gov-East, US-Gov-West), Canada (Central), South America (Sao Paulo), Africa (Cape Town), Middle East (Bahrain), EU (Ireland, London, Frankfurt, Paris, Stockholm, Milan), Asia Pacific (Singapore, Sydney, Tokyo, Osaka, Seoul, Mumbai, Hong Kong), and China (Beijing – operated by Sinnet, Ningxia – operated by NWCD). Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    » Amazon EKS Connector is now in public preview

    Posted On: Sep 8, 2021

    Amazon Elastic Kubernetes Service (Amazon EKS) now allows you to connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS console. You can connect any conformant Kubernetes cluster, including Amazon EKS Anywhere clusters running on-premises, self-managed clusters on Amazon Elastic Compute Cloud (Amazon EC2), and other Kubernetes clusters running outside of AWS. Regardless of where your cluster is running, you can use the Amazon EKS console to view all connected clusters and the Kubernetes resources running on them.

    To connect your cluster, the first step is to register it with Amazon EKS using the Amazon EKS console, Amazon EKS API or eksctl. After providing required inputs such as a cluster name and an IAM role that includes the required permissions, you will receive a configuration file for Amazon EKS Connector, a software agent that runs on a Kubernetes cluster and enables the cluster to register with Amazon EKS. After you apply the configuration file to the cluster, the registration is complete. Once your cluster is connected, you will be able to see the cluster, its configuration and workloads, and their status in the Amazon EKS console. Visit the EKS documentation for more details.
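
    For illustration, a hedged boto3 sketch of the registration step; the cluster name and IAM role are placeholders, and the provider value should match your cluster type:

        import boto3

        eks = boto3.client("eks")

        # Register an external Kubernetes cluster with Amazon EKS.
        response = eks.register_cluster(
            name="my-onprem-cluster",  # placeholder name
            connectorConfig={
                "roleArn": "arn:aws:iam::123456789012:role/eks-connector-agent",  # placeholder
                "provider": "OTHER",   # e.g. EKS_ANYWHERE, OPENSHIFT, OTHER
            },
        )
        # The response includes connector details used to produce the EKS Connector
        # manifest that you then apply to the cluster to complete registration.
        print(response["cluster"]["connectorConfig"])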

    » Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) now supports Data Streams with OpenSearch 1.0 to simplify management of time-series data

    Posted On: Sep 8, 2021

    Amazon OpenSearch Service now supports data streams to help simplify management of time-series data such as logs, metrics, and traces. Data streams abstract the underlying indexes required for your time-series data, the rollover process, and the optimizations required to efficiently manage and query time-based data, reducing operational overhead. You can move your older rolled-over indexes that are part of a data stream to UltraWarm and beyond that to cold storage, helping you retain data for longer, cost-effectively.

    Previously, a typical workflow to manage time-series data involved multiple steps, such as creating a rollover index alias, defining a write index, and defining common mappings and settings for the backing indices. Data streams help simplify this initial setup process and are designed to work out of the box for time-based data, such as application logs, that is typically append-only in nature. With data streams, once you define an index template and configure the rollover criteria in an Index State Management (ISM) policy, you can start ingesting data straight away. Data streams automatically create the necessary write index and backing indices and, based on your ISM policy, will roll over indexes in a data stream from hot to warm or warm to cold.
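
    As a short, hedged sketch of that workflow: a data stream is typically bootstrapped by an index template that declares a data_stream object, after which documents can be ingested directly; the endpoint, credentials, and index pattern below are placeholders:

        import json

        import requests

        DOMAIN = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint
        AUTH = ("master-user", "master-password")                # placeholder credentials
        HEADERS = {"Content-Type": "application/json"}

        # 1. Create an index template that enables a data stream for logs-* indices.
        template = {"index_patterns": ["logs-*"], "data_stream": {}}
        requests.put(f"{DOMAIN}/_index_template/logs-template",
                     auth=AUTH, headers=HEADERS, data=json.dumps(template))

        # 2. Ingest straight away; the data stream and backing index are created on demand.
        doc = {"@timestamp": "2021-09-08T12:00:00Z", "message": "hello"}
        requests.post(f"{DOMAIN}/logs-app/_doc",
                      auth=AUTH, headers=HEADERS, data=json.dumps(doc))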

    Data Streams are supported for Amazon OpenSearch Service domains running OpenSearch 1.0, a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2. To learn more about OpenSearch, please visit https://opensearch.org/.

    To learn more about Data Streams, please see the documentation.

    Data Streams is now available for Amazon OpenSearch Service domains across 25 regions globally. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    To learn more about Amazon OpenSearch Service, please visit the product page.

    » AWS Managed Services (AMS) now offers a catalog of operational offerings with Operations on Demand

    Posted On: Sep 8, 2021

    AWS Managed Services (AMS) is excited to announce Operations on Demand, a flexible and scalable option to gain access to additional skilled AMS operations capacity, skills, and experience. Operations on Demand gives customers access to a full range of operational capabilities above and beyond the extensive scope provided by AMS Operations Plans. Customers choose from a curated and continually expanding catalog of operational offerings which are delivered by a combination of automation and highly skilled AMS resources. The catalog includes a mix of short-term and ongoing operational use cases and can be used to supplement your existing operations or fill a knowledge or capacity gap. Examples of catalog offerings include assisting with the maintenance of Amazon Elastic Kubernetes Service (EKS), operations of AWS Control Tower, management of SAP clusters, and performing in-place upgrades of instances running out-of-support operating systems. Customers pay for what they use in blocks of hours, and can unsubscribe from a catalog offering at any time. Please see our public documentation for a listing of current catalog offerings. The Operations on Demand feature is available for both the AMS Advanced and Accelerate Operations Plans in all regions where AMS is available.

    To get started, see the AMS documentation. Learn more about AMS here and download the Forrester Total Economic Impact™ study.

    » AWS CDK releases v1.117.0 - v1.120.0 with improved support for Amazon Kinesis Firehose, Amazon CloudFront, Amazon Cognito, and more

    Posted On: Sep 8, 2021

    During August 2021, four new versions of the AWS Cloud Development Kit (CDK) for JavaScript, TypeScript, Java, Python, .NET, and Go were released (v1.117.0 through v1.120.0). These releases include multiple additions to the Kinesis Firehose Construct Library, including compression and prefixes for S3 delivery stream destinations, delivery stream metrics, S3 source backups, AWS Lambda-based data processors, and more. Additionally, the CloudFront Construct Library now supports Origin Shield, CloudWatch supports defining alarms across AWS accounts, and Cognito user pools support device tracking. These releases resolve 28 issues and introduce 37 new features spanning 30 different modules across the library. Many of these changes were contributed by the developer community.

    The AWS CDK is a software development framework for defining cloud applications using familiar programming languages. The AWS CDK simplifies cloud development on AWS by hiding infrastructure and application complexity behind intent-based, object-oriented APIs for each AWS service.
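
    As a hedged illustration of one of these additions, a minimal CDK (Python) sketch that enables Origin Shield on a CloudFront S3 origin; module paths and the origin_shield_region property follow the v1 construct libraries, so verify them against your CDK version:

        from aws_cdk import core
        from aws_cdk import aws_cloudfront as cloudfront
        from aws_cdk import aws_cloudfront_origins as origins
        from aws_cdk import aws_s3 as s3


        class CdnStack(core.Stack):
            def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
                super().__init__(scope, construct_id, **kwargs)

                bucket = s3.Bucket(self, "ContentBucket")

                # Origin Shield adds a regional caching layer in front of the origin.
                cloudfront.Distribution(
                    self, "Distribution",
                    default_behavior=cloudfront.BehaviorOptions(
                        origin=origins.S3Origin(bucket, origin_shield_region="us-east-1")
                    ),
                )


        app = core.App()
        CdnStack(app, "CdnStack")
        app.synth()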

    To get started, see the following resources:

  • Read the full release notes for 1.117.0, 1.118.0, 1.119.0 and 1.120.0.
  • Get started with the AWS CDK in all supported languages by taking the CDK Workshop.
  • Read our Developer Guide and API Reference.
  • Find useful constructs published by AWS, partners and the community in Construct Hub.
  • Connect with the community in the cdk.dev Slack workspace.
  • Follow our Contribution Guide to learn how to contribute fixes and features to the CDK.

    » Announcing the General Availability of AWS Local Zones in Chicago, Kansas City, and Minneapolis

    Posted On: Sep 8, 2021

    Today we are announcing the general availability of AWS Local Zones in Chicago, Kansas City, and Minneapolis. Customers can now use these new Local Zones to deliver applications that require single-digit millisecond latency to end-users or for on-premises installations in these three metro areas.

    AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists today. You can use AWS Local Zones to run applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, AR/VR, and machine learning inference at the edge.

    With this launch, AWS Local Zones are now generally available in 10 metro areas: Boston, Chicago, Dallas, Denver, Houston, Kansas City, Los Angeles, Miami, Minneapolis, and Philadelphia. With an additional six Local Zones launching later in 2021 in Atlanta, Las Vegas, New York, Phoenix, Portland, and Seattle, customers will be able to deliver ultra-low-latency applications to end users in cities across the US.

    You can enable AWS Local Zones from the “Settings” section of the EC2 console or with the ModifyAvailabilityZoneGroup API. To learn more, please visit the AWS Local Zones website.
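
    For illustration, a hedged boto3 sketch of opting in to a Local Zone group via the API; the Chicago group name shown is an assumption, so list your zone groups first:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Discover the Local Zone group names available to the Region.
        zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
        for z in zones["AvailabilityZones"]:
            print(z["ZoneName"], z["GroupName"], z["OptInStatus"])

        # Opt in to a Local Zone group (group name below is assumed for Chicago).
        ec2.modify_availability_zone_group(GroupName="us-east-1-chi-1",
                                           OptInStatus="opted-in")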

    » AWS Firewall Manager Automations for AWS Organizations v1.1 is now available

    Posted On: Sep 8, 2021

    The AWS Firewall Manager Automations for AWS Organizations solution allows you to centrally configure, manage, and audit firewall rules across all your accounts and resources in AWS Organizations. This solution is a reference implementation that automates the process of setting up AWS Firewall Manager security policies. This solution supersedes the AWS Centralized WAF and VPC Security Group Management solution.

    In addition to the previous feature set, this update adds the following capabilities to the solution:

  • Support for DNS Firewall security policies with AWS Managed Domain Lists
  • Support for compliance reports in CSV format for the FMS policies
  • Support for FMS policy customizations using a policy manifest file in your S3 bucket. The policy manifest is version controlled, allowing you to revert to previous policy configurations at any point in time
  • Support for applying different policy configurations to different OUs and Regions

    Additional AWS Solutions Implementations are available on the AWS Solutions page, where customers can browse common questions by category to find answers in the form of succinct Solution Briefs or comprehensive Solution Implementations, which are AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » Amazon MQ now supports RabbitMQ version 3.8.22

    Posted On: Sep 8, 2021

    You can now launch RabbitMQ 3.8.22 brokers on Amazon MQ. This release includes a fix for an issue with queues using per-message TTL (time to live), identified in the previously supported version, RabbitMQ 3.8.17, and we recommend upgrading to RabbitMQ 3.8.22.

    Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.

    You can upgrade RabbitMQ with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled, AWS automatically upgrades the broker to version 3.8.22 during the prescribed maintenance window. To learn more about upgrading, please see: Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide.

    RabbitMQ 3.8.22 includes the fixes and features of all previous releases of RabbitMQ. To learn more, read the RabbitMQ Changelog.

    » AWS Marketplace launches aliases for all single AMI products

    Posted On: Sep 8, 2021

    Today, AWS announced that customers can use aliases to refer to Amazon Machine Images (AMIs) purchased from AWS Marketplace. AMI aliases are unique identifiers that can be used instead of an AMI ID in deployment scripts. Starting today, aliases are available for all single-AMI products on AWS Marketplace. This simplifies launching new AMIs because customers no longer have to change AMI IDs for each Region every time there is a version update; a single alias auto-resolves to the current AWS Region. Additionally, customers can always refer to the latest version by using the ‘latest’ alias for a given AMI product. This helps automate deployment pipelines and reduces the manual steps required to upgrade to a new version of an AMI purchased from AWS Marketplace.

    Customers can find the alias of a subscribed AMI on the product’s configuration page. To learn more, please refer to the public documentation here.
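
    As a hedged sketch, and assuming the alias resolves through AWS Systems Manager public parameters as described in the Marketplace documentation, a deployment script could look up the Region-local AMI ID like this; the alias path is a hypothetical placeholder copied from a product’s configuration page:

        import boto3

        ssm = boto3.client("ssm")
        ec2 = boto3.client("ec2")

        # Hypothetical alias from the product's configuration page.
        ALIAS = "/aws/service/marketplace/prod-abcd1234example/latest"

        # Resolve the alias to the AMI ID for the current Region.
        ami_id = ssm.get_parameter(Name=ALIAS)["Parameter"]["Value"]

        ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro",
                          MinCount=1, MaxCount=1)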

    » OpenSearch Dashboards Notebooks, a new visual reporting feature, now available on Amazon OpenSearch Service (successor to Amazon Elasticsearch Service)

    Posted On: Sep 8, 2021

    Amazon OpenSearch Service now supports OpenSearch Dashboards Notebooks, a new feature that enables OpenSearch users to interactively and collaboratively develop rich reports backed by live data and queries. A notebook is a document made up of cells or paragraphs that can combine markdown, SQL and Piped Processing Language (PPL) queries, and visualizations with support for multiple timelines so that users can easily tell a story. Notebooks can be developed, shared as an OpenSearch Dashboards link, PDF, or PNG, and refreshed directly from OpenSearch Dashboards to foster data-driven exploration and collaboration among OpenSearch users and their stakeholders. Common use cases for notebooks include creating postmortem reports, designing runbooks, building live infrastructure reports, and writing documentation.

    The OpenSearch Dashboards Notebooks feature is powered by OpenSearch, an Apache 2.0-licensed distribution of Elasticsearch. OpenSearch Dashboards Notebooks is supported on all Amazon OpenSearch Service domains running OpenSearch 1.0 or greater. To learn more about OpenSearch and OpenSearch Dashboards Notebooks, visit the project website and Feature Deepdive: OpenSearch Dashboards Notebooks.

    OpenSearch Notebooks is now available for all Amazon OpenSearch Service domains across 25 regions globally: US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-Gov-East, US-Gov-West), Canada (Central), South America (Sao Paulo), Africa (Cape Town), Middle East (Bahrain), EU (Ireland, London, Frankfurt, Paris, Stockholm, Milan), Asia Pacific (Singapore, Sydney, Tokyo, Osaka, Seoul, Mumbai, Hong Kong), and China (Beijing – operated by Sinnet, Ningxia – operated by NWCD). Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    » AWS Launch Wizard now supports SAP deployment from accounts using AWS Managed Services

    Posted On: Sep 8, 2021

    AWS Managed Services (AMS) now allows customers to deploy SAP HANA-based workloads using AWS Launch Wizard.

    AWS Launch Wizard provides a guided experience for deploying production-ready enterprise workloads, such as SAP, on AWS, allowing you to get your systems up and running in hours instead of weeks or months. AWS Managed Services augments your capabilities to help you operate your AWS infrastructure more efficiently and securely.

    With this launch, you can use AWS Launch Wizard to deploy SAP applications in AMS-managed AWS accounts. SAP applications deployed with Launch Wizard can be onboarded to existing AMS-based operational/governance practices, allowing your team to focus on innovation and value-adding activities, leaving infrastructure management and governance to AMS.

    To learn more about AWS Launch Wizard, visit the Launch Wizard page. To learn more about AWS Managed Services, visit the AWS Managed Services page. Or, learn how to deploy SAP HANA on AWS with AWS Launch Wizard yourself in under 2 hours with this short demo.

    » Amazon Elastic Kubernetes Service Anywhere is now generally available

    Posted On: Sep 8, 2021

    Today, we are excited to announce the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere, a new deployment option for Amazon EKS that allows customers to create and operate Kubernetes clusters on customer-managed infrastructure, supported by AWS. Starting today, customers can run Amazon EKS Anywhere on their own on-premises infrastructure using VMware vSphere, with support for other deployment targets, including bare metal, coming in 2022.

    Amazon EKS Anywhere helps simplify the creation and operation of on-premises Kubernetes clusters while providing tools for automating cluster management. AWS supports all Amazon EKS Anywhere components including integrated 3rd-party software, so that customers can reduce their support costs and avoid maintenance of redundant open-source and third-party tools. In addition, Amazon EKS Anywhere gives customers Kubernetes operational tooling consistent with Amazon EKS, which is optimized to simplify cluster installation with default configurations for the operating system and networking.

    Amazon EKS Anywhere is open source, and there is no upfront commitment or fee to use it. Customers that have AWS Enterprise Support have the option to purchase additional Amazon EKS Anywhere support. Starting today, customers can download and install Amazon EKS Anywhere on their on-premises infrastructure by following the instructions in the documentation. To learn more, visit the blog or product pages.

    » Amazon Elasticsearch Service is now Amazon OpenSearch Service; adds support for OpenSearch 1.0

    Posted On: Sep 8, 2021

    Amazon Elasticsearch Service has a new name: Amazon OpenSearch Service. This change, which was previously announced here, coincides with the addition of support for OpenSearch 1.0. You can now run and scale both OpenSearch and Elasticsearch (up to version 7.10) clusters on Amazon OpenSearch Service and get all of the same benefits you have enjoyed so far from Amazon Elasticsearch Service.

    OpenSearch 1.0 is a community-driven, open source search and analytics suite derived from Apache 2.0-licensed Elasticsearch 7.10.2 & Kibana 7.10.2. It consists of a search engine, OpenSearch, and a visualization and user interface, OpenSearch Dashboards. With OpenSearch 1.0, we are adding support in Amazon OpenSearch Service for several new features such as transforms, data streams, and notebooks on OpenSearch Dashboards. To learn more about OpenSearch and the latest version, please refer to https://opensearch.org/.

    Along with OpenSearch 1.0, we continue to support legacy Elasticsearch versions up to 7.10 on the service. You can upgrade your existing Elasticsearch clusters on Amazon OpenSearch Service to OpenSearch 1.0, similar to how you upgrade to newer Elasticsearch versions. We have changed our configuration APIs to make them generic (e.g., CreateElasticsearchDomain is now CreateDomain) and launched a new SDK version. We have maintained backward compatibility to help you continue to operate your cluster and to make this renaming as seamless as possible for you. As we roll out this name change, you will start noticing the new service name, Amazon OpenSearch Service, as well as new resource types for OpenSearch, in the consoles of various AWS services, in documentation, and in APIs. Please refer to the FAQs for more information and the documentation for a summary of the changes related to this rename.
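
    For illustration, a hedged sketch using the renamed SDK client; the domain name and instance settings are placeholders:

        import boto3

        # The renamed configuration API ships as the "opensearch" client in newer SDKs.
        client = boto3.client("opensearch")

        client.create_domain(
            DomainName="my-search-domain",  # placeholder name
            EngineVersion="OpenSearch_1.0",
            ClusterConfig={
                "InstanceType": "r6g.large.search",  # note the new ".search" suffix
                "InstanceCount": 2,
            },
            EBSOptions={"EBSEnabled": True, "VolumeSize": 10},
        )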

    OpenSearch 1.0 on Amazon OpenSearch Service is now available in 25 regions globally: US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-Gov-East, US-Gov-West), Canada (Central), South America (Sao Paulo), EU (Ireland, London, Frankfurt, Paris, Stockholm, Milan), Asia Pacific (Singapore, Sydney, Tokyo, Seoul, Mumbai, Hong Kong, Osaka), Middle East (Bahrain), China (Beijing – operated by Sinnet, Ningxia – operated by NWCD), and Africa (Cape Town). Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

    To learn more, please see the documentation.

    » AWS Systems Manager Change Calendar now supports third-party calendar imports, giving you a more holistic view of events

    Posted On: Sep 7, 2021

    Change Calendar, a capability of AWS Systems Manager, now supports importing third-party calendars, such as Microsoft Outlook calendars, enabling you to view all your events centrally and control what changes can be made to your AWS resources during those events.

    Using Change Calendar, you can schedule calendar events to control the changes made to your AWS resources during events, such as public marketing promotions, when you expect high demand on your resources. This new feature allows you to easily see your external events in a change calendar and prevent disruptive changes during blocked periods on your calendar. Additionally, any calendar imported through Change Calendar retains its recurring events, ensuring users do not lose important recurring events during the import. This release supports importing Google Calendar, Microsoft Outlook Calendar, and Apple iCloud Calendar files in .ics format.

    To get started, choose Change Calendar in the Systems Manager console on the left navigation menu to import an external calendar while setting up a new change calendar or editing an existing one. This feature is available in all AWS Regions where AWS Systems Manager is available, excluding AWS China (Beijing and Ningxia) Regions. For more details about this feature, see our documentation. To learn more about AWS Systems Manager Change Calendar, see our Product Page.

    » Amazon Pinpoint now supports encrypted SNS topics for inbound SMS

    Posted On: Sep 7, 2021

    Amazon Pinpoint now supports encrypted SNS topics as destinations for incoming SMS text messages. This enables you to add another layer of protection when using Amazon Pinpoint for two-way SMS text messaging. When you enable two-way SMS messaging, you can publish inbound messages to encrypted SNS topics for retrieval and processing. Amazon SNS uses an AWS Key Management Service (AWS KMS) key to encrypt the messages that it sends to these topics.
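
    As a hedged sketch, creating a KMS-encrypted SNS topic that could serve as a two-way SMS destination; the key alias is a placeholder, and the key policy must allow the publishing service to use the key:

        import boto3

        sns = boto3.client("sns")

        # Create an SNS topic encrypted with a customer managed KMS key (placeholder alias).
        topic = sns.create_topic(
            Name="pinpoint-inbound-sms",
            Attributes={"KmsMasterKeyId": "alias/my-sms-key"},
        )
        print(topic["TopicArn"])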

    Data in Amazon Pinpoint is encrypted in transit and at rest. When you submit data to Amazon Pinpoint, it encrypts the data as it receives and stores it. When you retrieve data from Amazon Pinpoint, it uses modern security protocols to transmit the data to you. By using encrypted SNS topics, you can add an additional layer of security by encrypting the data with a key that you control. This feature is especially useful for applications that handle private and sensitive data.

    For more information about using encrypted SNS topics for SMS, see Configuring two-way SMS messaging in the Amazon Pinpoint User Guide.

    » Support for multi-key encryption now available with AWS Elemental MediaPackage and SPEKE v2.0

    Posted On: Sep 7, 2021

    AWS Elemental MediaPackage now supports version 2.0 of the Secure Packager and Encoder Key Exchange (SPEKE) API. SPEKE v2 makes it possible to use native Content Protection Information Exchange Format (CPIX) 2.3 documents, which allow the use of multiple encryption keys for different media tracks. With MediaPackage and SPEKE v2 you can now use two keys, one for audio tracks and one for video tracks, with live DASH and CMAF streams; support for more complex encryption models for content protection requirements will follow.

    AWS has also launched a new SPEKE v2.0 Qualification Program for partner DRM platforms. The program includes automated and manual tests to verify the implementations’ compliance and the end-to-end interoperability with reference video players. Qualified SPEKE v2.0 DRM platform partners are: Axinom, BuyDRM, castLabs, Inka Entworks, and Intertrust.

    For more information see the SPEKE v2.0 Specification and read the AWS Media Blog on improving streaming content security with SPEKE v2.0 and AWS Elemental MediaPackage. And to get started visit the MediaPackage content encryption, Live CMAF, Live DASH, and Live API documentation pages.

    With MediaPackage, you can reduce workflow complexity, increase origin resiliency, and better protect multiscreen content without the risk of under- or over-provisioning infrastructure. MediaPackage functions independently or as part of AWS Elemental Media Services, a family of services that form the foundation of cloud-based video workflows and offer the capabilities you need to transport, create, package, monetize, and deliver video.

    Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaPackage is available.

    » AWS Network Firewall is Now HIPAA Eligible

    Posted On: Sep 7, 2021

    Starting today, AWS Network Firewall is a HIPAA eligible service. This means you can use AWS Network Firewall to secure and inspect protected health information (PHI) stored in your accounts.

    If you have a HIPAA Business Associate Addendum (BAA) in place with AWS, you can now start using AWS Network Firewall for your HIPAA-regulated workloads. If you don't have a BAA in place with AWS, or if you have any other questions about running HIPAA-regulated workloads on AWS, please contact us.

    AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon Virtual Private Clouds (VPCs). The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure. AWS Network Firewall is integrated with AWS Firewall Manager to provide you with central visibility and control of your firewall policies across multiple AWS accounts. To learn more about AWS Network Firewall, please see the AWS Network Firewall product page and service documentation.

    » Amazon CodeGuru Reviewer adds new inconsistency detectors

    Posted On: Sep 7, 2021

    Amazon CodeGuru Reviewer is a developer tool that leverages automated reasoning and machine learning to detect potential code defects that are difficult to find and offers suggestions for improvements. Today, we are announcing the addition of a new set of detectors that can identify inconsistencies within a code repository. These inconsistency detectors are a new type of machine learning-based detector that analyzes coding patterns within a developer’s repository and helps detect anomalies that deviate from the repository’s standard patterns.

    An example of an inconsistency that CodeGuru Reviewer can now find is a missing null check. Previously, if a developer always included a null check for input into a certain function, but accidentally missed it one time, CodeGuru would not have detected this anomaly. Now, CodeGuru Reviewer can identify patterns within a repository, such as always including a null check on input into a certain function, and detect when the developer deviates from their standard pattern. Other examples of inconsistencies that CodeGuru Reviewer can detect include typos, inconsistent logging patterns, and missing API calls. After detecting an inconsistency, CodeGuru Reviewer provides recommendations for how the developer can remediate it.

    Visit the documentation to get started with Amazon CodeGuru Reviewer and analyze your first 100K lines of code for free, for 90 days. To learn more, take a look at the Amazon CodeGuru page. To contact the team, visit the Amazon CodeGuru developer forum.

    » Amazon Detective offers Splunk integration

    Posted On: Sep 7, 2021

    Amazon Detective, in coordination with the Splunk Trumpet project, has released the ability to pivot from an Amazon GuardDuty finding in Splunk directly to an Amazon Detective entity profile so that customers can quickly identify the root cause of potential security issues or suspicious activities.

    This new capability will help simplify security analysis for your security and operations teams by enabling a quick pivot from Splunk into Amazon Detective. You no longer need to copy and paste URLs or search in Detective for the resource you want. Instead, Amazon Detective can do the heavy lifting while you focus on quickly answering investigative questions. For example, Amazon Detective can help you answer questions such as: “How long has this IP address that I am investigating in Splunk been interacting with the resources in my AWS accounts?”, “Which of my EC2 instances did this IP address communicate with?”, “What data volumes were exchanged with this IP address?”, “Which ports did the communication occur on?”, or “Which users and roles invoked API operations from this IP address?”

    The new Amazon Detective integration is available now as part of the Splunk Trumpet Project in all of the Regions where Amazon Detective is supported. This integration is an addition to the Lambda pre-processor that sends GuardDuty findings to Splunk. The updated code receives the input records for Amazon GuardDuty findings and parses the content to generate the appropriate Amazon Detective URLs as additional fields in Splunk. The URLs that Splunk generates use the format for profile URLs that is described in Navigating directly to a profile using a URL in the Amazon Detective User Guide. Here is an example URL for an EC2 instance: (https://console.aws.amazon.com/detective/home?region=us-east-1#entities/Ec2Instance/i-0149bf6226265a199?scopeStart=1624674429&scopeEnd=1626473483).

    Use the following instructions to complete the initial Splunk integration with AWS: Automating AWS Data Ingestion into Splunk. On the Splunk Trumpet project installation page, select Detective GuardDuty URLs from the AWS CloudWatch Events dropdown.

    Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues. To get started, enable a 30-day free trial of Amazon Detective with just a few clicks in the AWS Management console. See the AWS Regions page for all the Regions where Detective is available. To learn more, visit the Amazon Detective product page.

    » Cloud9 is now available in 2 more regions

    Posted On: Sep 7, 2021

    AWS Cloud9 is now available in Asia Pacific (Osaka) and Africa (Cape Town). AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.

    Learn more about AWS Cloud9 by visiting our product page or the AWS console. For a full list of AWS Regions where AWS Cloud9 is available, please visit our region table.

    » Amazon RDS for SQL Server now supports MSDTC JDBC XA for SQL Server 2017 CU16+ and SQL Server 2019

    Posted On: Sep 7, 2021

    Amazon RDS for SQL Server now supports MSDTC JDBC XA transactions. With MSDTC, you can either execute the transaction using SQL Server as the transaction manager via linked servers, or promote the MSDTC instance running on the same host as the client application to the role of transaction manager.

    With this announcement, customers that are using SQL Server 2017 CU16+ or SQL Server 2019 can now enable the XA option setting to start distributed transactions from JDBC.

    Please visit Amazon RDS for SQL Server Pricing for complete regional availability information.

    » Cross-account event discovery for Amazon EventBridge schema registry

    Posted On: Sep 3, 2021

    Amazon EventBridge schema registry now supports discovery of cross-account events published to an event bus. The EventBridge schema registry stores event structure - or schema - in a shared central location and maps those schemas to code for Java, Python, and TypeScript, so you can use events as objects in your code. Schemas from your event bus are automatically added to the registry when you turn on the schema discovery feature. You can connect to and interact with the schema registry from the AWS console, APIs, or through the SDK toolkits for JetBrains (IntelliJ, PyCharm, WebStorm, Rider) and VS Code.

    You can turn on the schema discovery feature with a few clicks in the AWS Management Console to automatically add all schemas sent to an event bus to the registry. Schema discovery now also generates schemas for events sent to an event bus from another account, making it easier to build multi-account, event-driven architectures and discover the schema of events sent from another account. Any developer in your organization can search for and access events in the registry. By generating code bindings, the registry enables you to interact with the event as an object in your code. You can interact with the schema registry in your preferred IDE to take advantage of features like code validation and auto-completion.
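
    As a hedged sketch, enabling a discoverer on an event bus with cross-account events included; the bus ARN is a placeholder, and the CrossAccount flag reflects our understanding of this launch, so verify it in the API reference:

        import boto3

        schemas = boto3.client("schemas")

        # Start schema discovery on an event bus, including cross-account events.
        discoverer = schemas.create_discoverer(
            SourceArn="arn:aws:events:us-east-1:123456789012:event-bus/my-bus",  # placeholder
            Description="Discover schemas for events from this and other accounts",
            CrossAccount=True,
        )
        print(discoverer["DiscovererArn"])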

    The EventBridge schema registry is available at no additional cost; customers only pay for schema discovery. The schema discovery feature has a free tier of 5 million ingested events per month and a fee of $0.10 per million ingested events outside of the free tier. All ingested events are measured in 8 KB chunks. For more information on pricing, please see the EventBridge pricing page.

    The EventBridge schema registry is available in all commercial regions except Middle East (Bahrain), Europe (Milan), Africa (Cape Town), and Asia Pacific (Osaka). For details on EventBridge availability, please see the AWS region table.

    To learn more:

  • Visit the EventBridge product page.
  • Read EventBridge Schema Registry in the Amazon EventBridge Developer Guide.

    » ACM Private CA now supports the Online Certificate Status Protocol (OCSP)

    Posted On: Sep 3, 2021

    AWS Certificate Manager (ACM) Private Certificate Authority (CA) announces the availability of Online Certificate Status Protocol (OCSP) support for distributing certificate revocation information. When establishing an encrypted TLS connection, endpoints can use OCSP to query in near real time whether a certificate has been revoked, alerting the endpoint that the certificate should not be trusted. This feature provides a fully managed OCSP solution for notifying endpoints that certificates have been revoked, without the need for customers to manage or operate infrastructure themselves.

    Previously, ACM Private CA customers could use certificate revocation lists (CRLs) to check the revocation status of certificates issued by ACM Private CA, or build and manage their own OCSP responder. CRLs are not suitable for endpoints with limited storage, introduce additional compute processing to access and parse, and can become stale because clients often download CRLs on a daily or less frequent basis. Building and operating an OCSP responder requires customers to perform custom development, handle standard maintenance, and respond to emergency events if the responder fails.

    Private CA now offers fully managed OCSP. Customers can enable OCSP with a single operation via the console, CloudFormation, the API, or the command line, with no development or deployment required for new or existing CAs. With Private CA's OCSP, any TLS endpoint can query revocation status directly, moving the storage and processing requirements to the OCSP responder and solving the stale-status issue. Customers who issue certificates can now choose OCSP, CRLs, or both to distribute revocation information for their private certificates.
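
    As a minimal sketch of that single-operation enablement through the API, the acm-pca client's update_certificate_authority call accepts an OCSP configuration on an existing CA (the ARN below is a placeholder):

      import boto3

      acmpca = boto3.client("acm-pca")

      # Enable the managed OCSP responder on an existing private CA;
      # CRLs can remain enabled alongside OCSP if both are wanted.
      acmpca.update_certificate_authority(
          CertificateAuthorityArn=(
              "arn:aws:acm-pca:us-east-1:123456789012:"
              "certificate-authority/11111111-2222-3333-4444-555555555555"
          ),
          RevocationConfiguration={
              "OcspConfiguration": {"Enabled": True},
          },
      )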

    Private CA provides you a highly-available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA. CA administrators can use Private CA to create a complete CA hierarchy, including online root and subordinate CAs, with no need for external CAs. With Private CA, you can create private certificates for your resources in one place with a secure, pay as you go, managed private CA service. The OCSP feature is an add-on option for Private CA. Pricing for the OCSP feature can be found on the public ACM Private CA pricing page.

    The OCSP feature is available in all Private CA supported regions except the AWS GovCloud (US) Regions. For a list of regions where Private CA is available, see AWS Regions and Endpoints.

    To get started with Private CA visit the Getting Started page.

    » Amazon Aurora PostgreSQL now supports oracle_fdw extension in AWS GovCloud (US) Regions

    Posted On: Sep 3, 2021

    Amazon Aurora PostgreSQL-Compatible Edition now supports the oracle_fdw extension in AWS GovCloud (US) Regions, which allows your PostgreSQL database to connect to and retrieve data stored in Oracle databases.

    Foreign Data Wrappers are libraries for PostgreSQL databases that communicate with an external data source, abstracting the details of connecting to the data source and obtaining data from it. oracle_fdw is a PostgreSQL extension that provides a Foreign Data Wrapper for easy and efficient access to Oracle databases. The oracle_fdw extension is supported on PostgreSQL 12.7 and higher. For more information about oracle_fdw, supported versions, and extension support in Aurora PostgreSQL, please review the extensions page.
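
    The setup follows the standard oracle_fdw pattern: create the extension, a foreign server, a user mapping, and a foreign table. A sketch run from Python with psycopg2, using placeholder endpoints and credentials:

      import psycopg2

      # Connect to the Aurora PostgreSQL cluster endpoint (placeholder values).
      conn = psycopg2.connect(
          host="my-cluster.cluster-example.us-gov-west-1.rds.amazonaws.com",
          dbname="postgres", user="postgres", password="example-password",
      )
      conn.autocommit = True
      cur = conn.cursor()

      cur.execute("CREATE EXTENSION IF NOT EXISTS oracle_fdw;")
      cur.execute("""
          CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
              OPTIONS (dbserver '//oracle-host:1521/ORCLPDB');
      """)
      cur.execute("""
          CREATE USER MAPPING FOR postgres SERVER oradb
              OPTIONS (user 'oracle_user', password 'example-password');
      """)
      cur.execute("""
          CREATE FOREIGN TABLE orders (
              order_id integer,
              status   text
          ) SERVER oradb OPTIONS (schema 'APP', table 'ORDERS');
      """)

      # Queries against the foreign table are pushed down to Oracle.
      cur.execute("SELECT order_id, status FROM orders LIMIT 5;")
      print(cur.fetchall())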

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Amazon Aurora product page. To get started with Amazon Aurora, take a look at our getting started page.

    Don't see an extension you'd like to use? Let us know through the AWS Forum, through AWS Enterprise support, or by emailing us at rds-postgres-extensions-request@amazon.com.

    » Amazon EMR Studio is now HIPAA eligible and HITRUST certified

    Posted On: Sep 3, 2021

    Amazon EMR Studio is an integrated development environment (IDE) that makes it easy for data scientists and data engineers to develop, visualize, and debug data engineering and data science applications written in R, Python, Scala, and PySpark. 

    Today, we are excited to announce that EMR Studio is now Health Insurance Portability and Accountability Act (HIPAA) eligible, and is Health Information Trust Alliance (HITRUST) certified. You can now use EMR Studio to run sensitive healthcare workloads regulated by HIPAA and HITRUST.

    HIPAA eligibility and HITRUST certification applies to all AWS Regions where Amazon EMR Studio is available. EMR Studio is available in US East (Ohio, N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland, Frankfurt, London, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo) regions.

    See the Architecting for HIPAA Security and Compliance on Amazon Web Services Whitepaper for information and best practices about how to configure AWS HIPAA Eligible Services to store, process, and transmit protected health information (PHI). To get started with EMR Studio, see our Amazon EMR Studio documentation.

    » Amazon Aurora Supports PostgreSQL 9.6.22, 10.17, 11.12, and 12.7 in AWS GovCloud (US) Regions

    Posted On: Sep 3, 2021

    Following the announcement of updates to the PostgreSQL database by the open source community, we have updated Amazon Aurora PostgreSQL-Compatible Edition to support PostgreSQL versions 9.6.22, 10.17, 11.12, and 12.7 in AWS GovCloud (US) Regions. These releases contain bug fixes and improvements by the PostgreSQL community. As a reminder, Amazon Aurora PostgreSQL 9.6 will reach end of life on January 31, 2022. Minor version 9.6.22 is limited to upgrades for clusters already running Aurora PostgreSQL 9.6.

    You can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. Doing so means that your DB cluster is automatically upgraded after AWS tests and approves the new version. For more details, see Automatic Minor Version Upgrades for PostgreSQL. Please review the Aurora documentation to learn more.  
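
    A manual minor version upgrade is a single cluster modification. A minimal boto3 sketch (the cluster identifier is a placeholder):

      import boto3

      rds = boto3.client("rds")

      # Upgrade the cluster to a newer minor version; with
      # ApplyImmediately=False the change waits for the next
      # scheduled maintenance window.
      rds.modify_db_cluster(
          DBClusterIdentifier="my-aurora-postgres",
          EngineVersion="12.7",
          ApplyImmediately=False,
      )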

    Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It provides up to three times better performance than the typical PostgreSQL database, together with increased scalability, durability, and security. For more information, please visit the Amazon Aurora product page. To get started with Amazon Aurora, take a look at our getting started page.

    » New full-text search non-string indexing capabilities for Amazon Neptune

    Posted On: Sep 3, 2021

    Amazon Neptune now supports searching on new data types, such as numbers and dates, in addition to strings, when using the full-text search integration with Elasticsearch. This improvement allows Neptune customers to replicate non-string values into an Elasticsearch cluster, such as one provided by Amazon Elasticsearch Service, and run Gremlin or SPARQL queries that search on these values.

    Customers asked for more ways to search and filter graph data using the Neptune full-text search integration with Elasticsearch. Now you can access Elasticsearch's built-in indexing of text, longs, doubles, booleans, and dates when querying your Neptune graphs. Non-string indexing is enabled by default and available starting in engine release version 1.0.4.2. You can use the quick-start to set up the full-text search integration for the first time, or update an existing full-text search integration by performing a one-time re-indexing. Gremlin users can use the withSideEffect step and pass the Elasticsearch endpoint, search pattern, and field information, as in the sketch below. Similarly, SPARQL users can use the SERVICE keyword to federate queries to Elasticsearch.
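
    An illustrative Gremlin sketch with gremlinpython, following the withSideEffect pattern above; the 'Neptune#fts' query-string syntax shown for a numeric range is an assumption to verify against the Neptune User Guide:

      from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
      from gremlin_python.process.anonymous_traversal import traversal

      conn = DriverRemoteConnection("wss://my-neptune-cluster:8182/gremlin", "g")
      g = traversal().withRemote(conn)

      # Pass the Elasticsearch endpoint and query type as side effects,
      # then filter on a non-string (long) property with Lucene syntax.
      results = (
          g.withSideEffect("Neptune#fts.endpoint",
                           "https://my-es-domain.us-east-1.es.amazonaws.com")
           .withSideEffect("Neptune#fts.queryType", "query_string")
           .V()
           .has("age", "Neptune#fts [30 TO 40]")
           .valueMap()
           .toList()
      )
      print(results)
      conn.close()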

    To learn more about this integration, including sample search queries for non-string values, read the Amazon Neptune User Guide. Customers can use the Amazon Neptune integration with Elasticsearch clusters in all regions where Neptune is available. There are no additional charges for using this feature. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.

    » Amazon RDS for MariaDB supports new minor versions 10.5.12, 10.4.21, 10.3.31, 10.2.40

    Posted On: Sep 2, 2021

    Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.5.12, 10.4.21, 10.3.31, and 10.2.40. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MariaDB community.

    You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including automatic minor version upgrades, in the Amazon RDS User Guide.
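
    Both paths are a single instance modification. A minimal boto3 sketch (the identifier and version are placeholders):

      import boto3

      rds = boto3.client("rds")

      # Target a specific minor version and opt in to automatic minor
      # version upgrades during future maintenance windows.
      rds.modify_db_instance(
          DBInstanceIdentifier="my-mariadb-instance",
          EngineVersion="10.5.12",
          AutoMinorVersionUpgrade=True,
          ApplyImmediately=False,
      )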

    Amazon RDS for MariaDB makes it easy to set up, operate, and scale MariaDB deployments in the cloud. See Amazon RDS for MariaDB for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

    » Amazon Monitron launches a new ethernet gateway device

    Posted On: Sep 2, 2021

    Today, we are announcing the launch of Amazon Monitron Gateway (Ethernet), a new gateway device that allows customers to use their ethernet network to connect Monitron to the internet. The ethernet gateway joins the Wi-Fi gateway that was launched in December 2020, giving customers even more options for Amazon Monitron internet connectivity. Amazon Monitron is an end-to-end system that uses machine learning (ML) to detect abnormal conditions in industrial equipment, enabling you to implement predictive maintenance and reduce unplanned downtime. It includes sensors to capture vibration and temperature data from equipment, a gateway device to securely transfer data to AWS, the Amazon Monitron service that analyzes the data for abnormal equipment conditions using machine learning, and a companion mobile app to set up the devices and receive reports on operating behavior and alerts to potential failures in your equipment.

    Customers whose network connectivity requirements call for a wired gateway device can now use the ethernet gateway to transfer the data collected by Amazon Monitron Sensors to AWS over their ethernet network.

    Amazon Monitron helps monitor and detect potential failures in a broad range of rotating equipment such as motors, gearboxes, pumps, fans, bearings, and compressors. Amazon Monitron Sensors and Gateways are available to purchase separately or bundled in starter packs on Amazon.com or with your Amazon Business account in the US, UK, Germany, Spain, France, Italy, and Canada. The Amazon Monitron service is available in the US East (N. Virginia) and Europe (Ireland) regions, and you can download the Amazon Monitron app from the Google Play Store.

    » Updated AWS Solutions Implementation: AWS CloudEndure Migration Factory Solution

    Posted On: Sep 2, 2021

    AWS CloudEndure Migration Factory Solution coordinates and automates manual processes for large-scale migrations involving a substantial number of servers. This solutions implementation helps enterprises improve migration velocity and prevent long cutover windows by providing an orchestration and automation platform for rehosting servers to AWS at scale.

    The updated version supports AWS Application Migration Service (AWS MGN). You can now use the same automation we built into AWS CloudEndure Migration Factory Solution to automate both AWS MGN and CloudEndure Migration. For example:

  • Integrate different types of tools that support migration, such as discovery tools, migration tools, and configuration management database (CMDB) tools.
  • Automate many small, manual tasks in large migration, which helps save time and makes it easier to scale.

    To learn more and get started, please visit the solutions implementation web page.

    Additional AWS Solutions Implementations are available on the AWS Solutions Implementations webpage, where you can browse technical reference implementations that are vetted by AWS architects, offering detailed architecture and instructions for deployment to help build faster to solve common problems.

    » Amazon Elastic File System introduces Intelligent-Tiering to automatically optimize storage costs

    Posted On: Sep 2, 2021

    Amazon Elastic File System (EFS) now supports Intelligent-Tiering, a new capability that makes it easier for you to optimize costs for shared file storage, even when access patterns change. EFS Intelligent-Tiering is designed to help you achieve the right price and performance blend for your application file data by placing your file data in a storage class based on file access patterns.

    EFS Intelligent-Tiering uses Lifecycle Management to monitor the access patterns of your workload and is designed to automatically transition files that are not accessed for the duration of the Lifecycle policy (e.g. 30 days) from either the EFS Standard or EFS One Zone storage classes, to their corresponding Infrequent Access (IA) storage class (EFS Standard-Infrequent Access or EFS One Zone-Infrequent Access). This helps you take advantage of IA storage pricing that is up to 92% lower than the EFS Standard or EFS One Zone file storage pricing for workloads with changing access patterns.

    Additionally, if access patterns change, EFS Intelligent-Tiering is designed to automatically move files back to performance-optimized (EFS Standard or EFS One Zone) storage classes. This helps you eliminate the risk of unbounded access charges, while providing consistent low latencies. If the files become infrequently accessed again, EFS Intelligent-Tiering is designed to transition the files back to IA based on your Lifecycle policy.
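
    Both transitions are controlled by the file system's lifecycle policies. A minimal boto3 sketch (the file system ID is a placeholder):

      import boto3

      efs = boto3.client("efs")

      # Move files to IA after 30 days without access, and back to the
      # primary storage class on their first access afterward.
      efs.put_lifecycle_configuration(
          FileSystemId="fs-0123456789abcdef0",
          LifecyclePolicies=[
              {"TransitionToIA": "AFTER_30_DAYS"},
              {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
          ],
      )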

    Amazon EFS Intelligent-Tiering is available in all AWS regions where Amazon EFS is available.

    To learn more, please read the AWS News blog, the EFS documentation, and create a file system in a single-click using the Amazon EFS Console.

    » AWS Transfer Family simplifies managed file transfer workflows with low code automation

    Posted On: Sep 2, 2021

    AWS Transfer Family now supports managed workflows that make it easy for you to create, execute, and monitor post-upload processing for file transfers over SFTP, FTPS, and FTP for Amazon S3 and Amazon EFS. Using this feature, you can save time with low-code automation that coordinates all the necessary tasks, such as copying and tagging. You can also configure custom logic to scan for errors in the data, including Personally Identifiable Information (PII), viruses, malware, or incorrect file formats or types. With managed workflows, you can quickly detect anomalies and meet your compliance requirements with ease.

    AWS Transfer Family's managed workflows help you orchestrate common file processing steps, such as copying and tagging, without the overhead of managing your own custom code and infrastructure. Bring your own file processing logic using AWS Lambda for use cases such as malware scanning and file-type compatibility checks, so that you can easily pre-process data before feeding it to your data analytics pipelines.

    Get started with AWS Transfer Family’s managed workflows today in two steps. First, set up your workflow by defining a sequence of action steps. Next, map the workflow to one of your AWS Transfer Family’s managed file servers. This ensures that upon file arrival, actions specified in this workflow are evaluated and triggered in real-time. To monitor your ongoing workflow executions, you can use the AWS Transfer Family Management Console and set up detailed alerts using Amazon CloudWatch logs.
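
    A sketch of those two steps with boto3, using a copy step followed by a tag step (the bucket, role ARN, and server ID are placeholders):

      import boto3

      transfer = boto3.client("transfer")

      # Step 1: define the workflow as an ordered list of steps.
      workflow = transfer.create_workflow(
          Description="Post-upload copy and tag",
          Steps=[
              {
                  "Type": "COPY",
                  "CopyStepDetails": {
                      "Name": "archive-copy",
                      "DestinationFileLocation": {
                          "S3FileLocation": {
                              "Bucket": "my-archive-bucket",
                              "Key": "incoming/",
                          }
                      },
                  },
              },
              {
                  "Type": "TAG",
                  "TagStepDetails": {
                      "Name": "tag-upload",
                      "Tags": [{"Key": "scanned", "Value": "pending"}],
                  },
              },
          ],
      )

      # Step 2: attach the workflow to a server so it runs on upload.
      transfer.update_server(
          ServerId="s-0123456789abcdef0",
          WorkflowDetails={
              "OnUpload": [
                  {
                      "WorkflowId": workflow["WorkflowId"],
                      "ExecutionRole": "arn:aws:iam::123456789012:role/transfer-workflow-role",
                  }
              ]
          },
      )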

    Support for managed workflows is available in all AWS Regions where AWS Transfer Family is available. To learn more about this feature, visit AWS Transfer Family’s usage guide on Managing post processing workflows.

    » Amazon S3 Multi-Region Access Points accelerate access to replicated data sets by up to 60%

    Posted On: Sep 2, 2021

    Amazon S3 Multi-Region Access Points accelerate performance by up to 60% when accessing data sets that are replicated across multiple AWS Regions. Based on AWS Global Accelerator, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the lowest latency copy of your data. This automatic routing allows you to take advantage of the global infrastructure of AWS while maintaining a simple application architecture.

    S3 Multi-Region Access Points provide a single global endpoint to access a data set that spans multiple S3 buckets in different AWS Regions. This allows you to build multi-region applications with the same simple architecture used in a single region, and then to run those applications anywhere in the world. Application requests made to an S3 Multi-Region Access Point’s global endpoint automatically route over the AWS global network to the S3 bucket with the lowest network latency. This allows applications to automatically avoid congested network segments on the public internet, improving application performance and reliability.

    [Video: S3 Multi-Region Access Points Introduction]

    Applications running on-premises or within AWS can also connect to an S3 Multi-Region Access Point using AWS PrivateLink. Establishing a PrivateLink connection to an S3 Multi-Region Access Point allows you to route S3 requests into AWS, or across multiple AWS Regions, over a private connection using a very simple network architecture and configuration.

    In addition to simplifying request routing for Amazon S3, S3 Multi-Region Access Points also give you a new S3 Management Console experience for managing all aspects of a multi-region S3 setup. In the S3 Management Console, S3 Multi-Region Access Points show a centralized view of the underlying replication topology, replication metrics, and your request routing configuration. This gives you an even easier way to build, manage, and monitor storage for multi-region applications.

    You can get started with S3 Multi-Region Access Points using the Amazon S3 API, CLI, SDK, or with a few clicks in the S3 Management Console. To see the full list of supported AWS Regions, visit the S3 Multi-Region Access Points user guide.
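
    With the AWS SDK for Python, for example, the Multi-Region Access Point ARN is used anywhere a bucket name is accepted. Requests are signed with SigV4A, which boto3 supports when the optional AWS CRT dependency is installed (pip install boto3[crt]); the ARN and key below are placeholders:

      import boto3

      s3 = boto3.client("s3")

      mrap_arn = "arn:aws:s3::123456789012:accesspoint/mfzwi23gnjvgw.mrap"

      # S3 routes this request over the AWS global network to the
      # lowest-latency replicated bucket behind the access point.
      obj = s3.get_object(Bucket=mrap_arn, Key="data/report.csv")
      print(obj["ContentLength"])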

    S3 Multi-Region Access Points are available at a low per-GB request routing charge, plus an internet acceleration fee for requests that are made to S3 from outside of AWS. To see how S3 Multi-Region Access Points work, please see the overview and video. To learn more about S3 Multi-Region Access Points, visit the feature page, read the blog post, and visit the user guide.

    » Introducing Amazon FSx for NetApp ONTAP

    Posted On: Sep 2, 2021

    Amazon Web Services (AWS) announces the general availability of Amazon FSx for NetApp ONTAP, a storage service that allows customers to launch and run complete, fully managed ONTAP file systems in the cloud for the first time. ONTAP is NetApp’s file system technology that has traditionally powered on-premises network-attached storage (NAS) and provides a widely adopted set of data access and data management capabilities. Amazon FSx for NetApp ONTAP provides the popular features, performance, and APIs of ONTAP file systems with the agility, scalability, and simplicity of a fully managed AWS service, making it easier for customers to migrate on-premises applications that rely on NAS appliances to AWS. It also provides developers with high-performance and feature-rich file storage that makes it easy to build, test, and run cloud-native applications.

    FSx for ONTAP storage is broadly accessible from Linux, Windows, and macOS compute instances via the industry-standard NFS, SMB, and iSCSI protocols. It enables customers to use ONTAP’s widely-used data management capabilities, such as snapshots, cloning, and replication, with the click of a button. FSx for ONTAP provides customers with low-cost storage capacity that’s fully elastic and virtually unlimited in size, and it provides ONTAP capabilities like deduplication and compression to help customers further reduce storage costs. It also provides sub-millisecond latencies and high levels of throughput and IOPS.
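
    A minimal boto3 sketch of creating a Multi-AZ ONTAP file system (subnet IDs and sizes are illustrative):

      import boto3

      fsx = boto3.client("fsx")

      # One API call provisions a complete, fully managed ONTAP file system.
      fs = fsx.create_file_system(
          FileSystemType="ONTAP",
          StorageCapacity=1024,            # GiB
          SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
          OntapConfiguration={
              "DeploymentType": "MULTI_AZ_1",
              "ThroughputCapacity": 512,   # MB/s
              "PreferredSubnetId": "subnet-aaaa1111",
          },
      )
      print(fs["FileSystem"]["FileSystemId"])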

    [Video: Amazon FSx for NetApp ONTAP Overview]

    Amazon FSx for NetApp ONTAP is available today in all commercial AWS Regions, excluding the US West (N. California) Region, the Asia Pacific (Osaka) Region, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD.

    To learn more about Amazon FSx for NetApp ONTAP:

  • Create an Amazon FSx for NetApp ONTAP file system
  • Explore the product detail page
  • Read the blog post
  • Visit the documentation

    » Amazon SageMaker is now available in the AWS Asia Pacific (Osaka) Region

    Posted On: Sep 2, 2021

    Amazon SageMaker is now available in the AWS Asia Pacific (Osaka) Region. Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.

    For more information, please visit our documentation.

    » Amazon S3 Intelligent-Tiering further automates storage cost savings by removing the minimum storage duration and monitoring and automation charge for small objects

    Posted On: Sep 2, 2021

    The Amazon S3 Intelligent-Tiering storage class automates storage cost savings for a wider range of workloads by eliminating the minimum storage duration, and removing the low per-object monitoring and automation charges for objects smaller than 128 KB. S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead. Previously, S3 Intelligent-Tiering was optimized for long-lived objects stored for a minimum of 30 days and objects larger than 128 KB. With these changes, S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. You can use S3 Intelligent-Tiering as the default storage class for data lakes, analytics, and new applications.

    The Amazon S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change. For a low monthly object monitoring and automation charge, S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers. S3 Intelligent-Tiering delivers automatic storage cost savings in two low latency and high throughput access tiers. For data that can be accessed asynchronously, customers can choose to activate automatic archiving capabilities within the S3 Intelligent-Tiering storage class. There are no retrieval charges in S3 Intelligent-Tiering. If an object in the infrequent access tier is accessed later, it is automatically moved back to the frequent access tier, and no additional tiering charges apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class. You can configure S3 Intelligent-Tiering as the storage class for newly created data by specifying INTELLIGENT_TIERING as the storage class on your S3 PUT API request. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability.
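
    For example, writing new data straight into S3 Intelligent-Tiering is a one-parameter change on the PUT request (the bucket and key are placeholders):

      import boto3

      s3 = boto3.client("s3")

      # New objects land in Intelligent-Tiering immediately; no lifecycle
      # rule is needed to opt them in.
      s3.put_object(
          Bucket="my-data-lake",
          Key="raw/events/2021-09-02.json",
          Body=b'{"example": true}',
          StorageClass="INTELLIGENT_TIERING",
      )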

    Effective today, the S3 Intelligent-Tiering monitoring and automation charge no longer applies to objects smaller than 128 KB, including both new and existing objects. Also effective today, for all new and existing objects in S3 Intelligent-Tiering, you will not accrue prorated charges for objects deleted, transitioned, or overwritten within 30 days.

    S3 Intelligent-Tiering is available in all public AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. To learn more, see the S3 Intelligent-Tiering page, the S3 pricing page, the user guide, and get started in the S3 console.

    » Amazon EBS direct APIs now supports creating 64 TB EBS Snapshots

    Posted On: Sep 2, 2021

    Amazon EBS direct APIs now support creating 64 TB EBS Snapshots directly from any block storage data, including on-premises data. With this new capability, customers can use EBS Snapshots for disaster recovery of their largest on-premises workloads and achieve business continuity in AWS at lower cost.

    Previously, customers could use EBS direct APIs to create EBS Snapshots of volumes up to 16 TB in size. Customers can now use EBS direct APIs to create EBS Snapshots of 64 TB volumes and recover them to Amazon EBS io2 Block Express volumes.
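
    A sketch of the snapshot-creation flow with boto3: start a snapshot sized in GiB (64 TiB is 65,536 GiB), write 512 KiB blocks, then complete it. The volume size and block contents are illustrative:

      import base64
      import hashlib
      import boto3

      ebs = boto3.client("ebs")

      # Start a snapshot large enough for a 64 TiB source volume.
      snap = ebs.start_snapshot(VolumeSize=65536, Description="on-prem DR copy")
      snapshot_id = snap["SnapshotId"]

      # EBS direct APIs write data in 512 KiB blocks, each with a
      # base64-encoded SHA256 checksum.
      block = b"\x00" * (512 * 1024)
      checksum = base64.b64encode(hashlib.sha256(block).digest()).decode()

      ebs.put_snapshot_block(
          SnapshotId=snapshot_id,
          BlockIndex=0,
          BlockData=block,
          DataLength=len(block),
          Checksum=checksum,
          ChecksumAlgorithm="SHA256",
      )

      ebs.complete_snapshot(SnapshotId=snapshot_id, ChangedBlocksCount=1)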

    You can use Amazon EBS direct APIs to create 64 TB EBS Snapshots in all AWS regions where Amazon EBS direct APIs are available.

    To learn more, see Amazon EBS direct APIs Documentation.

    » AWS Database Migration Service now supports migrating multiple databases in one task using MongoDB as a source

    Posted On: Sep 2, 2021

    AWS Database Migration Service (AWS DMS) expands functionality by adding support for migrating multiple databases in one task using MongoDB and Amazon DocumentDB (with MongoDB compatibility) as a source. Using AWS DMS, you can now group multiple databases of a MongoDB cluster and migrate them using one DMS task to any AWS DMS supported targets including Amazon DocumentDB (with MongoDB compatibility) with minimal downtime.
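
    A sketch of a task whose table mappings select two databases in one migration; for MongoDB sources, the schema-name locator maps to the database name (the ARNs and names are placeholders):

      import json
      import boto3

      dms = boto3.client("dms")

      table_mappings = {
          "rules": [
              {
                  "rule-type": "selection",
                  "rule-id": "1",
                  "rule-name": "include-sales",
                  "object-locator": {"schema-name": "sales", "table-name": "%"},
                  "rule-action": "include",
              },
              {
                  "rule-type": "selection",
                  "rule-id": "2",
                  "rule-name": "include-inventory",
                  "object-locator": {"schema-name": "inventory", "table-name": "%"},
                  "rule-action": "include",
              },
          ]
      }

      # One task migrates both databases to the chosen target.
      dms.create_replication_task(
          ReplicationTaskIdentifier="mongo-multi-db",
          SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC1",
          TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT1",
          ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST1",
          MigrationType="full-load-and-cdc",
          TableMappings=json.dumps(table_mappings),
      )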

    To learn more, see Using MongoDB as a source for Amazon DMS.

    For regional availability, please refer to the AWS Region Table.

    » AWS Lambda now supports AWS PrivateLink in previously unsupported Availability Zones

    Posted On: Sep 1, 2021

    AWS Lambda now supports AWS PrivateLink in previously unsupported Availability Zones in US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), South America (São Paulo), Canada (Central), and EU (London) regions. With this launch, AWS Lambda now supports AWS PrivateLink in all Availability Zones in all commercial regions, AWS GovCloud (US-East), and AWS GovCloud (US-West). 

    Previously, AWS Lambda supported AWS PrivateLink in all Availability Zones in US East (Ohio), US West (N. California), Asia Pacific (Hong Kong), Asia Pacific (Sydney), Asia Pacific (Osaka), EU (Frankfurt), EU (Ireland), EU (Milan), EU (Paris), EU (Stockholm), Middle East (Bahrain), AWS GovCloud (US-East), AWS GovCloud (US-West), and Africa (Cape Town). With this feature, you can manage and invoke Lambda functions from your Amazon Virtual Private Cloud (VPC) without exposing your traffic to the public internet. PrivateLink provides private connectivity between your VPCs and AWS services such as Lambda over the private AWS network.

    With PrivateLink, you can provision and use VPC endpoints to access the Lambda API from your VPC. VPC endpoints deliver private and reliable connectivity to Lambda without requiring Internet Gateway, Network Address Translation (NAT) devices, or firewall proxies. You can attach AWS Identity and Access Management (IAM) policies to your VPC endpoint to control who can use the VPC endpoint and which functions can be accessed from that VPC endpoint.
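
    Creating the endpoint programmatically is a single EC2 API call. A minimal boto3 sketch with placeholder network identifiers:

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Create an interface VPC endpoint for the Lambda API.
      ec2.create_vpc_endpoint(
          VpcEndpointType="Interface",
          VpcId="vpc-0123456789abcdef0",
          ServiceName="com.amazonaws.us-east-1.lambda",
          SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
          SecurityGroupIds=["sg-0123456789abcdef0"],
          PrivateDnsEnabled=True,
      )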

    For more information, see the AWS Region table. For complete information on pricing for VPC endpoints, please refer to the AWS PrivateLink pricing page. You can get started by creating a VPC endpoint for Lambda using the AWS Management Console, AWS CLI, or AWS CloudFormation. To learn more, visit the Lambda developer guide.

    » NICE DCV releases web client SDK 1.0.3

    Posted On: Sep 1, 2021

    NICE DCV is a high-performance remote display protocol that helps users to securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.

    We are pleased to announce the release of version 1.0.3 of the NICE DCV web client software development kit (SDK). This version of the SDK adds support for WebCodecs. The WebCodecs API is an experimental technology that improves video processing efficiency and makes it possible for supported web browsers to stream high-resolution, high-frame-rate content. Please refer to each web browser's documentation for the latest information on its WebCodecs support status.

    The NICE DCV web client SDK is an optional JavaScript SDK component that enables developers and independent software vendors (ISVs) to integrate a customized NICE DCV web client into their web applications. Customers can build custom NICE DCV web clients using custom user interface components and the core NICE DCV streaming features in the SDK, delivering unique experiences tailored to their own use cases. The NICE DCV web client SDK is designed to be used in conjunction with NICE DCV software. Visit the NICE DCV documentation page to learn more and the NICE DCV download page to get started.

    » NICE DCV releases version 2021.2

    Posted On: Sep 1, 2021

    NICE DCV is a high-performance remote display protocol that helps customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.

    We are pleased to announce the release of NICE DCV version 2021.2 with the following new features:

  • Web client clipboard improvements - Customers can now copy and paste images using the DCV web client on Google Chrome and Microsoft Edge.
  • Option to prevent screenshots on native clients - This feature supports security by preventing users from taking screenshots of their DCV session content. When enabled, users may only capture a black screen.
  • Streaming quality improvements - To deliver high frame rates with low bandwidth consumption, NICE DCV is designed to automatically refine to fully lossless those parts of the streamed image that are not changing. This update delivers a more fluid “build-to-lossless” experience when using the QUIC protocol.
  • DCV Session Manager features - The DCV Session Manager now offers a command line interface (CLI) and a broker data persistence feature. Customers can more easily create and manage DCV sessions via CLI without needing to call the APIs directly. For higher availability, brokers can persist server state information on an external data store and restore them at startup time.

    NICE DCV is the remote display protocol used by Amazon AppStream 2.0, AWS RoboMaker, and Amazon Nimble Studio. For more information, please see the release notes or visit the NICE DCV webpage to download and get started with DCV.

    » Amazon Polly offers full support in the AWS Africa (Cape Town) Region

    Posted On: Sep 1, 2021

    Amazon Polly, a service that turns text into speech, extends its offering to the AWS Africa (Cape Town) Region. Today, we are excited to announce the general availability of Ayanda, Polly’s first South African English Neural Text-to-Speech (NTTS) voice, as well as full support for the entire portfolio of Amazon Polly's Neural and Standard voices in the AWS Africa (Cape Town) Region. With this launch, the service now supports 31 languages including 7 different varieties of English. Neural TTS voices are available in 12 AWS Regions.

    To learn more, see our complete list of Polly text-to-speech voices and try them out on our Amazon Polly Console. For more details, please visit the Amazon Polly documentation, review our Neural TTS pricing, regional availability, service limits, and FAQs.

    » AWS Security Hub Automated Response and Remediation adds support for PCI-DSS v3.2.1 Security Standard

    Posted On: Sep 1, 2021

    AWS Security Hub Automated Response & Remediation solution is a reference implementation that includes a library of automated security response and remediation actions to common security findings. The solution makes it easier for customers to resolve common security findings and improve their security posture in AWS.

    AWS Security Hub Automated Response and Remediation now supports 17 new PCI-DSS v3.2.1 controls. This release also adds support for seven more AWS Foundational Security Best Practices controls and 17 additional controls in the Center for Internet Security (CIS) AWS Foundations Benchmark v1.2.0.

    AWS Security Hub gives you a comprehensive view of your security posture across your AWS accounts. Customers can create CloudWatch Event rules to invoke on-demand response workflows for selected findings across their AWS accounts, or they can use CloudWatch Event rules to take fully automated actions on specific types of findings. Many customers find the process of setting up CloudWatch Event rules difficult and time consuming, and creating the permissions that enable them to run cross-account can be complex. The AWS Security Hub Automated Response & Remediation solution simplifies this process by offering predefined response and remediation actions for common security controls. The solution now supports over 50 automated remediations in total. Version 1.0 offered 10 prepackaged security playbooks to remediate security findings based on the Center for Internet Security (CIS) AWS Foundations Benchmark. Version 1.2 added a playbook of 11 fully automated remediations based on the AWS Foundational Security Best Practices standard. Version 1.3 adds a playbook for PCI-DSS with 17 remediations, 17 additional CIS remediations, and 7 additional AWS Foundational Security Best Practices remediations.

    The AWS Security Hub Automated Response & Remediation solution works in all regions that support AWS Service Catalog and AWS Systems Manager, as well as the AWS GovCloud (US) Regions, China Regions, Milan, Bahrain, and Hong Kong. To get started with the solution, visit the AWS Solution Library or GitHub.

    Additional AWS Solutions Implementations offerings are available on the AWS Solutions page, where customers can browse common questions by category to find answers in the form of succinct Solution Briefs or comprehensive Solution Implementations, which are AWS-vetted, automated, turnkey reference implementations that address specific business needs.

    » AWS Firewall Manager now supports AWS WAF log filtering

    Posted On: Sep 1, 2021

    AWS Firewall Manager now enables security administrators to specify which web requests to log and which requests to exclude from logs when using AWS WAF to inspect web traffic. If you use Firewall Manager security policies to centralize AWS WAF logging, you can now log only the information you want to analyze. By reducing the amount of log data stored, you can reduce your log delivery and storage costs.

    You can enable log filtering in Firewall Manager when you create a Firewall Manager security policy. After you select the option to centralize your AWS WAF logs, you can choose to filter web requests based on rule actions, labels applied to web requests, or both. For each filter, you can indicate whether matching requests should be logged or discarded after processing. There is no additional cost for log filtering, but standard service charges for AWS Firewall Manager, AWS WAF, and AWS Config still apply.
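
    The filter that a Firewall Manager policy provisions has the same shape as the underlying AWS WAF logging filter. As an illustration of that structure, a boto3 sketch that keeps all requests by default but drops those matching an ALLOW action (the ARNs are placeholders):

      import boto3

      wafv2 = boto3.client("wafv2")

      wafv2.put_logging_configuration(
          LoggingConfiguration={
              "ResourceArn": (
                  "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/"
                  "my-web-acl/11111111-2222-3333-4444-555555555555"
              ),
              "LogDestinationConfigs": [
                  "arn:aws:firehose:us-east-1:123456789012:deliverystream/aws-waf-logs-main"
              ],
              # Keep everything by default; drop requests that were allowed.
              "LoggingFilter": {
                  "DefaultBehavior": "KEEP",
                  "Filters": [
                      {
                          "Behavior": "DROP",
                          "Requirement": "MEETS_ANY",
                          "Conditions": [{"ActionCondition": {"Action": "ALLOW"}}],
                      }
                  ],
              },
          }
      )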

    Firewall Manager is a security management service that enables customers to centrally configure and deploy firewall rules across accounts and resources in their organization. With Firewall Manager, customers can deploy and monitor rules for AWS WAF, AWS Shield Advanced, VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall across their entire organization. Firewall Manager ensures that all firewall rules are consistently enforced, even as new accounts and resources are created.

    To get started, see the AWS Firewall Manager documentation for more details about AWS WAF log filtering and the AWS Region Table for the list of AWS regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, including its features and pricing, please visit AWS Firewall Manager.

    » Amazon CloudWatch Application Insights adds support for Microsoft SQL Server FCI and FSx storage

    Posted On: Sep 1, 2021

    Now you can easily set up monitoring, alarms, and dashboards for your Microsoft SQL Server Failover Cluster Instances (FCI) running on AWS, and for applications using FSx managed storage, with CloudWatch Application Insights. CloudWatch Application Insights is a capability that helps customers monitor and troubleshoot their enterprise applications running on AWS resources. The new feature adds automatic discovery of the database and managed storage, the FCI configuration, and the underlying resources, along with setting up the metrics, telemetry, and logs for monitoring their health.

    With the addition of SQL Server FCI, customers now have an easy path for setting up and managing monitoring for this fault-tolerant database setup. Additionally, AWS Launch Wizard, the service that helps you deploy applications on AWS, recently added support for SQL Server FCI, which means you can now easily set up the database and the observability for it, all within your Launch Wizard deployment. As part of this feature launch, we have also added support for monitoring Amazon FSx for Windows File Server. CloudWatch Application Insights can now automatically discover your FSx file systems and set up monitoring for these resources as part of your application.
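
    Monitoring is onboarded per resource group. A minimal boto3 sketch (the resource group name is a placeholder):

      import boto3

      appinsights = boto3.client("application-insights")

      # Application Insights discovers the components in the resource
      # group (SQL Server FCI nodes, FSx file systems, and so on) and
      # sets up metrics, logs, and alarms for them.
      appinsights.create_application(
          ResourceGroupName="my-sql-fci-app",
          OpsCenterEnabled=True,
      )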

    Amazon CloudWatch Application Insights provides automated setup of observability for your enterprise applications and underlying AWS resources, and can be accessed via the Insights tab in the left panel of the CloudWatch console. It creates Amazon CloudWatch Automatic Dashboards to visualize problem details, accelerate troubleshooting, and help reduce mean time to resolution. Amazon CloudWatch Application Insights is available in all AWS commercial regions at no additional charge. Depending on setup, you may incur charges for Amazon CloudWatch monitoring resources. To learn more about Amazon CloudWatch Application Insights, please review the documentation.

    » AWS Elemental MediaTailor now supports time based schedules for Channel Assembly streams

    Posted On: Sep 1, 2021

    Channel Assembly with AWS Elemental MediaTailor now lets you schedule programs based on wall-clock time. Using "linear mode," you have fine-grained control over when individual sources play and which sources follow on a channel output. Looping mode is also available, where timing on individual programs is loosely defined to ensure that content is always playing on a channel output.

    Using Channel Assembly with MediaTailor, you can create linear channels that are delivered over-the-top (OTT) in a cost-efficient way, even for channels with low viewership. Virtual live streams are created with a low running cost by using existing multi-bitrate encoded and packaged content. You can also monetize Channel Assembly linear streams by inserting ad breaks in your programs without having to condition the content with SCTE-35 markers.
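
    A rough sketch of wall-clock scheduling through the Channel Assembly API; the parameter names follow the MediaTailor create_program operation, but the exact fields required for an absolute transition are an assumption to verify against the current documentation:

      import boto3

      mediatailor = boto3.client("mediatailor")

      # Schedule a program at an absolute wall-clock time (epoch millis)
      # on a linear-mode channel; names below are placeholders.
      mediatailor.create_program(
          ChannelName="my-linear-channel",
          ProgramName="evening-news",
          SourceLocationName="my-source-location",
          VodSourceName="news-2021-09-01",
          ScheduleConfiguration={
              "Transition": {
                  "Type": "ABSOLUTE",
                  # Assumption: RelativePosition may still be required by
                  # the API even for absolute scheduling.
                  "RelativePosition": "AFTER_PROGRAM",
                  "ScheduledStartTimeMillis": 1630522800000,
              }
          },
      )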

    For more information about using Channel Assembly with MediaTailor, read this AWS News Blog, and go to the MediaTailor documentation pages.

    AWS Elemental MediaTailor is a channel assembly and ad-insertion service for creating linear OTT channels using existing video content and monetizing those channels, other live streams, or video-on-demand (VOD) content with personalized advertising.

    Visit the AWS global region table for a full list of AWS Regions where AWS Elemental MediaTailor is available.

    » Amazon SageMaker now supports M5d, R5, and P3dn instances for SageMaker Studio Notebooks

    Posted On: Sep 1, 2021

    Today, we are excited to announce that Amazon SageMaker Studio now supports Amazon EC2 M5d, R5, and P3dn instances. Customers can launch SageMaker Studio Notebooks with these instance types in the regions where they are available.

    Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps, giving you complete access, control, and visibility required to build, train, and deploy models. 

    Amazon EC2 M5d instances deliver M5 instances backed by NVMe-based SSD block-level instance storage physically connected to the host server. M5d instances are ideal for workloads that require a balance of compute and memory resources along with high-speed, low-latency local block storage, including data logging and media processing. Amazon EC2 R5 instances are memory-optimized instances, well suited for memory-intensive applications such as real-time big data analytics. Amazon EC2 P3dn instances are optimized for distributed machine learning. Their faster networking, new processors with additional vCPUs, doubled GPU memory, and fast local instance storage enable developers to optimize performance even on a single instance.

    For more information, visit the Amazon SageMaker Studio documentation for details.


    Page 1|Page 2|Page 3|Page 4