The contents of this page are copied directly from AWS blog sites to make them Kindle friendly. Some styles and sections from these pages have been removed so they render properly in the 'Article Mode' of the Kindle e-Reader browser. All the contents of this page are the property of AWS.


Privacy video: Innovating securely

=======================

I’m pleased to share a video of a conversation about privacy I had with my colleague Laura Dawson, the North American Lead at the AWS Institute. Privacy is becoming more of a strategic issue for our customers, similar to how security is today. We discussed how, while the two topics are similar in some ways, they also have important differences. We also talked about the importance of building a strong privacy program, and how AWS helps customers safeguard privacy while still taking advantage of digital modernization opportunities.

The differences between security and privacy aren’t fully understood in some industries. Security principles are better known in the industry: security involves considering the confidentiality, integrity, and availability of information. It’s about keeping unauthorized parties away from your data, and about making sure access to your systems and data is appropriate. Privacy, by contrast, is about control of data through its entire lifecycle, specifically personally identifiable information (PII). That includes the collection, use, transmission, and deletion of that data. Properly managing the privacy of PII resembles security when you consider the access control aspect, but privacy is about making sure you always have granular control over what is happening to that PII, from creation and collection through to deletion.

Unlike security, which is now commonly recognized as a core business function, privacy practices and principles are still in the early stages of being widely accepted. This is why AWS advocates for organizations to follow the principles of Privacy by Design, to ensure that privacy processes and controls are baked into everything you do.

I also discussed with Laura some of the privacy trends I see happening in the tech industry right now, such as homomorphic encryption, anonymization, and PII discovery tools. The privacy challenges organizations face today, however, aren’t just technology challenges; they’re also business challenges, of how to get value from the data you control, in a way that meets privacy best practices and accounts for your customers’ interests.

For more about these and other privacy topics, check out the video of my conversation with Laura. To learn more about privacy at AWS, check out the Data Privacy Center and Data Protection at AWS.

Hardening the security of your AWS Elastic Beanstalk Application the Well-Architected way

=======================

Launching an application in AWS Elastic Beanstalk is straightforward. You define a name for your application, select the platform you want to run it on (for example, Ruby), and upload the source code. The default Elastic Beanstalk configuration is intended to be a starting point which prioritizes simplicity and ease of setup. This allows you to quickly deploy a web application on the AWS Cloud. For increased security of production applications, we recommend additional steps you can take to complement the default configuration.

In this post we will describe our recommendations, which are aligned with the AWS Well-Architected Framework, to help you harden the security posture of your Elastic Beanstalk applications. The Well-Architected Framework provides best practices to help you build secure, high-performing, resilient, and efficient infrastructure for your applications and workloads. Focusing on the Security pillar of the framework, we will walk you through additional configurations for increased network protection and protection of data at rest and in transit.

Introduction to Elastic Beanstalk

Elastic Beanstalk is an orchestration service that provisions and operates infrastructure in the AWS Cloud. You can use Elastic Beanstalk to deploy and manage applications in the cloud. Elastic Beanstalk supports many programming languages and frameworks, such as Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Elastic Beanstalk can help you decrease overhead by handling tasks such as resource provisioning, load balancing, auto scaling, and health monitoring. You only need to upload the application code. Elastic Beanstalk automatically integrates with other AWS services such as Amazon CloudWatch for logging and monitoring.

Target scenario for this post

This post shows you how to achieve the following things:

  • Launch a highly available Ruby application on Elastic Beanstalk.
  • Attach a MySQL database to the application using Amazon RDS.
  • Protect your sensitive data.
  • Align your application’s security configuration to the Security pillar of the Well-Architected Framework.

    Figure 1: Target architecture for the two-tier web application deployed using Elastic Beanstalk

    Figure 1 depicts the target architecture, which is a two-tier web application. Clients resolve the website’s domain name using the Domain Name System (DNS) service Amazon Route 53. An Application Load Balancer (ALB) directs traffic to and from the Amazon EC2 instances that run the web servers. The EC2 instances are deployed in an Auto Scaling group in private subnets. To ensure that clients can always access the application, the infrastructure is set up to automatically handle system failures and scale up when demand increases. This is done by placing the EC2 instances in the Auto Scaling group across two Availability Zones for high availability. There is also an RDS MySQL database deployed in a private subnet, which is replicated to a standby instance in another Availability Zone for disaster recovery. Logs and metrics are sent to CloudWatch, and Amazon Simple Storage Service (Amazon S3) is used to store logs and source code. Finally, a Network Address Translation (NAT) gateway and an internet gateway manage inbound and outbound traffic between the subnets and the internet.

    The following sections focus on the four main security configurations numbered in Figure 1:

    1. Deploying the EC2 and RDS instances from the web and database layer in private subnets.
    2. Encrypting the logging and source code S3 bucket.
    3. Encrypting the RDS instance and its standby replica.
    4. Encrypting data in transit by using the HTTPS protocol.

    Strengthening your Elastic Beanstalk application based on the Security pillar of the Well-Architected Framework

    To harden the security of your Elastic Beanstalk application, you can build on top of the default setup to incorporate the following security best practices:

    1. Protect networks – In the default Elastic Beanstalk setup, the EC2 instances are deployed together with an Application Load Balancer (ALB) in a public subnet. In most cases, EC2 instances do not need to be directly accessible from the internet and therefore should be placed in private subnets. The ALB should be left in the public subnet to provide a single entry point for inbound traffic from external clients and forward this traffic to the instances over a private network. If these instances need to make direct outbound connections to the internet, for example to call third-party APIs, we recommend creating a Network Address Translation (NAT) gateway in a public subnet and adding a route from the private subnet where your instances are running to the NAT gateway. Your instances can then send requests to the internet and receive the corresponding responses through the NAT gateway, but the instances themselves will not be directly accessible from the internet. For more options for interactively accessing instances, see AWS Systems Manager.
    2. Protect data at rest – We recommend encrypting data at rest. Elastic Beanstalk does not encrypt data stored in Amazon S3 buckets by default, so you should modify the default setup to encrypt the bucket. Similarly, when you set up an RDS database directly through Elastic Beanstalk, you don’t have the option to encrypt the database, so you need to set up your database independently and enable encryption.
    3. Protect data in transit – Web traffic sent between your clients and the ALB over the internet should use HTTPS rather than HTTP. The HTTPS protocol creates an encrypted connection through TLS (Transport Layer Security) between client and server before sending any web traffic. The default setup in Elastic Beanstalk uses HTTP, so the choice to use HTTPS and how to enable it sits with the user. Setting up HTTPS can be done with SSL / TLS server certificates (X.509 certificates) which you manage inside AWS using AWS Certificate Manager or through an external provider. ALB supports TLS-termination, which means that it takes care of the encryption and decryption of the traffic communicated with clients, and then forwards the traffic to the instances over the AWS private network.
    Implementing the recommended best practices for your application

    To implement the best practices from the section above, you will take the following steps to launch your application, protect networks, and protect data at rest and in transit:

    1. Create your own VPC with public and private subnets.
    2. Create a highly available Elastic Beanstalk application.
    3. Modify the configuration to deploy instances in private subnets.
    4. Encrypt the log and source code bucket.
    5. Launch an encrypted RDS instance.
    6. Set up encryption in-transit by using the HTTPS protocol.

    Create your VPC with public and private subnets

    1. In the AWS Management Console, go to VPC, and select Launch VPC wizard.
    2. Select the VPC with Public and Private Subnets option on the left-hand side, as shown in Figure 2.

    Figure 2: Launch VPC wizard

    3. Click Select.
    4. Adjust the VPC specifications as needed. Specify a CIDR range and a name for the VPC. For the private and public subnets, you additionally need to specify each subnet’s CIDR range, as well as the Availability Zone it should be created in. So that instances in the private subnet can access the internet, the setup creates a NAT gateway that resides in the public subnet; for this, you need to specify an Elastic IP ID. If you don’t have an Elastic IP yet, in the VPC console go to Elastic IP addresses, click Allocate Elastic IP address, and then Allocate. Use the Allocation ID in the VPC wizard.
    5. Select Create VPC.
    6. Because the target architecture is highly available, you need to create another set of public and private subnets that reside in a different Availability Zone from the subnets you configured in step 4. To do this, go to the Subnets section in the VPC console. Click Create subnet, select the VPC you just created, and add a new subnet, making sure to assign it to a different Availability Zone. Press Add new subnet to add a second subnet on the same configuration page. When done, press Create subnet.
    7. By default, the new subnets will use the main routing table, which treats them as private subnets. To make one of the newly created subnets public, it needs to be associated with the route table that has a route to the internet gateway. Go to the Route Tables section in the VPC console and find the route table associated with your newly created VPC that has the route to the internet gateway; this should be the route table with one explicit subnet association. Click the route table’s ID and verify that there’s a route to a target with the igw- prefix. Then, under the Subnet associations tab, edit the explicit subnet associations to include the newly created public subnet.

    After this is done, you should have two public and two private subnets across two Availability Zones for your new VPC.
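
    The same result can be achieved programmatically. The following is a minimal sketch using Python and boto3, under assumed values: the VPC ID, route table ID, Region, Availability Zone, and CIDR ranges are placeholders you would replace with your own.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed Region

    vpc_id = "vpc-0123456789abcdef0"         # placeholder: the VPC created by the wizard
    public_rtb_id = "rtb-0123456789abcdef0"  # placeholder: the route table with the igw- route

    # Create the second pair of subnets in a different Availability Zone
    public_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="eu-west-1b")
    private_subnet = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.3.0/24", AvailabilityZone="eu-west-1b")

    # Associate only the public subnet with the route table that points to the
    # internet gateway; the private subnet keeps the main route table and stays private.
    ec2.associate_route_table(
        RouteTableId=public_rtb_id,
        SubnetId=public_subnet["Subnet"]["SubnetId"])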

    Create a highly available Elastic Beanstalk application

    The following steps will show you how to create a highly available Elastic Beanstalk application.

    1. In the AWS Management Console, choose Elastic Beanstalk, and then, in the Get Started section, select Create Application.
    2. Provide a name for the application and define the platform it should run on. In our example, the platform is Ruby.
    3. Provide the source code for your web application or use the sample code provided in the Elastic Beanstalk setup console.
      • To use the sample code, select Sample Application.
      • To upload your own source code, in the Source code origin section, for Version label, enter the name of your application code, and then for Source code origin, choose Local file, select Choose File, and navigate to the file that you want to use, as shown in Figure 3.

    Figure 3: Source code origin section of the Elastic Beanstalk console

    4. Select Configure more options.
    5. Depending on your application’s needs, you can select a configuration preset that includes recommended values for several configurations. Select High Availability to include a load balancer and auto scaling for multiple Availability Zones.

    Deploy your instances in private subnets

    In this step, you will set up Elastic Beanstalk to deploy the Application Load Balancer in public subnets, to provide a point of access for inbound traffic from the internet, and to deploy the EC2 instances in private subnets.

    While still in the Configure more options settings:

    1. In the Network section, select Edit, and then, from the dropdown list, select the VPC that you just created.
    2. To deploy your instances in private subnets, in the Load balancer settings section, for Load balancer subnets, check the box next to each public subnet, and in the Instance settings section, for Instance subnets, check the box next to each private subnet, as shown in Figure 4.

    Figure 4: Elastic Beanstalk subnet settings for Load Balancer and instances

    3. Select Save.

    Encrypt the log and source code bucket and block public access

    After Elastic Beanstalk has created the application, you can encrypt the S3 bucket.

    1. Open the S3 console and choose the bucket that was created automatically as part of the Elastic Beanstalk setup. The bucket name will have the following structure: elasticbeanstalk-region-account-id.
    2. To encrypt the bucket, choose Properties, and then, for Default Encryption, select Edit, and for Server-side encryption, select Enable.
    3. For Encryption key type, you can use an S3-managed encryption key by selecting Amazon S3 key (SSE-S3). If you want more control over the keys used for encryption, select AWS Key Management Service key (SSE-KMS), an encryption key protected by AWS Key Management Service (KMS). Here, you can specify whether to use an AWS managed key or one of your own customer managed keys from KMS. For more information on SSE-KMS, visit Protecting Data Using Server-Side Encryption with KMS keys Stored in AWS Key Management Service (SSE-KMS).
    4. Select Save changes.

    Even though the bucket that was created by Elastic Beanstalk is non-public by default, we recommend enabling S3 Block Public Access at the account level, or at least at the bucket level, to prevent tampering or accidental changes to this setting in the future.

    1. In your S3 console, click on Block Public Access settings for this account.
    2. If Block all public access is not yet enabled, click on Edit, check the box next to Block all public access and click Save.
    3. Apart from that, you can also block public access at the bucket level. For this, click on the respective bucket, open the Permissions section, and edit Block public access (bucket settings) similarly to how you did in step 2. Both settings can also be applied programmatically, as shown in the sketch below.
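
    The following is a minimal boto3 sketch of the two settings above, assuming a hypothetical bucket name; substitute the name of the bucket that Elastic Beanstalk created for you.

    import boto3

    s3 = boto3.client("s3")
    bucket = "elasticbeanstalk-eu-west-1-123456789012"  # placeholder bucket name

    # Enable default encryption with an S3-managed key (SSE-S3)
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        })

    # Block public access at the bucket level
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        })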

    Launch an encrypted RDS instance

    Elastic Beanstalk allows you to set up and run RDS instances in your Elastic Beanstalk environment. Until recently, the database was tied to the lifecycle of the Elastic Beanstalk environment, and we recommended limiting its use to development and testing environments only. For example, if you launched an RDS instance using Elastic Beanstalk and the Elastic Beanstalk environment was terminated, the RDS instance would also be deleted. As of October 6, 2021, Elastic Beanstalk supports Database Decoupling, so that the database persists when the environment is deleted.

    However, Elastic Beanstalk currently does not allow you to set up encryption for your database. For this reason, this post shows you how to set up your Elastic Beanstalk application with a decoupled database, by creating the database directly in the RDS service, separate from your Elastic Beanstalk application. RDS allows you to encrypt your database.

    Decoupling your database and setting it up directly through the RDS service in the AWS console will require additional steps for integration with your Elastic Beanstalk application, which this post will walk you through.

    Note: If you are using the Elastic Beanstalk service to create your RDS instance, you can select one of three options:

  • The first option, enabled by selecting the Create snapshot retention option in the database settings in the Elastic Beanstalk console, makes sure that Elastic Beanstalk creates a snapshot of your database prior to termination. You can restore an existing snapshot of your database through the Elastic Beanstalk console. Bear in mind that there will be downtime of your database between snapshot creation and snapshot restore.
  • The second option, Retain, creates a decoupled database, which persists if the Elastic Beanstalk environment has been terminated.
  • The third option, Delete, removes the database upon termination.

    In this step, you will create an encrypted RDS database, allow access to the database from your application’s instances only, and add the required environment variables to your application so you can use your database in the application.

    1. On the RDS service page in the console, select Create database.
    2. For the database creation method, select Standard create.
    3. For Engine options, choose MySQL and select the latest version.
    4. For Templates choose either the Dev/Test or Production template according to your use case.
    5. In the Settings section, provide a name to use as the database identifier and set a username and password.
    6. Select the appropriate DB instance class that meets your processing power and memory requirements.
    7. For Storage, select your storage type and allocate storage.
    8. If you need Multi-AZ deployment, in the Availability & durability section, choose Create a standby instance.
    9. In the Connectivity section, select the VPC that you created in the Create your VPC with public and private subnets section earlier in this blog post, and verify that Public access has been set to No. For VPC security group, choose Create new and provide a name to identify the group later on.
    10. In the Additional configuration section, under Encryption, choose Enable encryption. You can choose the default AWS KMS key if you’re happy with AWS managing the keys, or provide a custom key if you want more control. Bear in mind that the encryption option cannot be changed after the database has been created. (A scripted sketch of these settings follows this list.)
    11. Leave the defaults for the remaining settings and select Create database.
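
    For reference, the following is a minimal boto3 sketch of the same database creation, under assumed values: the identifier, password, security group ID, and DB subnet group name are placeholders, and the instance class and storage size are illustrative.

    import boto3

    rds = boto3.client("rds", region_name="eu-west-1")  # assumed Region

    rds.create_db_instance(
        DBInstanceIdentifier="my-eb-database",          # placeholder identifier
        Engine="mysql",
        DBInstanceClass="db.t3.micro",                  # choose a class that fits your workload
        AllocatedStorage=20,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",                # store credentials in Secrets Manager in practice
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder: the new security group
        DBSubnetGroupName="my-private-subnet-group",    # placeholder: subnet group over the private subnets
        PubliclyAccessible=False,                       # no public access
        MultiAZ=True,                                   # create a standby instance in another AZ
        StorageEncrypted=True)                          # cannot be changed after creation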

    After you set up the RDS database and your new Elastic Beanstalk application, you can add the database to your application.

    1. In the RDS console, go to your newly created RDS database and scroll down to Security group rules.
    2. Select the security group that has the CIDR/IP – Inbound type.
    3. Under Inbound rules, select the rule that is listed, and then select Edit inbound rules.
    4. Under the Source column, make sure Custom is selected, and in the search-box next to it, select the security group associated with your Elastic Beanstalk Auto Scaling group.

    Important: As a security best practice, you should allow traffic to your RDS database from your instances only. Therefore, make sure the security group allows traffic only from the Auto Scaling group’s security group, and that it has no additional entries. The sketch below shows this rule expressed programmatically.
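
    A minimal boto3 sketch of such a rule, assuming hypothetical security group IDs for the database and the Auto Scaling group:

    import boto3

    ec2 = boto3.client("ec2")

    DB_SG = "sg-0aaaaaaaaaaaaaaaa"   # placeholder: the RDS database's security group
    ASG_SG = "sg-0bbbbbbbbbbbbbbbb"  # placeholder: the Auto Scaling group's security group

    # Allow MySQL traffic (port 3306) only from the instances' security group
    ec2.authorize_security_group_ingress(
        GroupId=DB_SG,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": ASG_SG}],
        }])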

    5. To add the RDS details to the Elastic Beanstalk environment properties, go to your application’s environment in the Elastic Beanstalk service and navigate to Configuration > Software > Edit > Environment Properties. Add RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME and RDS_PASSWORD as properties and provide the values that you used to set up the database.
    6. Restart the application by going back to your Elastic Beanstalk environment, and then under Environment actions, choose Restart app server(s).

    After the server has restarted, you can access the RDS database in your web application by using the environment properties you set in the console, just as you would if you attached the database directly through the Elastic Beanstalk setup. For more information on using environment properties, visit Environment properties and other software settings.

    The new database is now separate from your application and it is encrypted to provide data protection at rest.

    Important: The environment properties, including the database username and password, are visible and stored in plain text in the Environment Properties in Elastic Beanstalk.

    Depending on your security requirements, you can choose to use AWS Secrets Manager to protect your database credentials, which you can then fetch programmatically in your Elastic Beanstalk instance or through Elastic Beanstalk’s custom environment configuration files (.ebextensions). To learn more about using Secrets Manager to protect and rotate database credentials, see Rotate Amazon RDS database credentials automatically with AWS Secrets Manager. However, this will require additional configuration for Elastic Beanstalk and is beyond the scope of this post.

    A second possibility is to use IAM database authentication, which allows you to use your Elastic Beanstalk EC2 instances’ IAM role to connect to your database. This method leverages short-lived authentication tokens rather than a static database password. To set this up, you need to enable IAM database authentication, create an IAM policy to allow IAM database access, and create a database account for IAM authentication using the AWSAuthenticationPlugin (for MySQL). Authentication tokens are valid for 15 minutes; if your web instances need to create a new database connection, or reconnect, after a token has expired, they must request a fresh token, otherwise the connection will be rejected.

    For an implementation guide, check out How do I allow users to authenticate to an Amazon RDS MySQL DB instance using their IAM credentials. For Ruby applications, you can get the authentication token in your application by leveraging the auth_token_generator method in the Ruby aws-sdk.
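
    As an illustration of the same mechanism outside Ruby, here is a minimal sketch in Python using boto3’s generate_db_auth_token; the endpoint, port, and database user name are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="eu-west-1")  # assumed Region

    # Generate a short-lived (15-minute) authentication token in place of a static password.
    token = rds.generate_db_auth_token(
        DBHostname="mydb.abcdefghij.eu-west-1.rds.amazonaws.com",  # placeholder endpoint
        Port=3306,
        DBUsername="iam_db_user")                                  # placeholder IAM-enabled DB account

    # Pass `token` as the password when opening the database connection,
    # and request a fresh token whenever you reconnect after expiry.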

    Set up encryption in transit using the HTTPS protocol

    In the Elastic Beanstalk architecture, you can encrypt data in transit at three connection points: from your clients to the load balancer, from the load balancer to the EC2 instances, and from the EC2 instances to the RDS database.

    Securing the connection from clients to the ALB

    You can use a custom domain name to use HTTPS for your Elastic Beanstalk environment, so that your clients can connect securely to your environment. If you don’t have a domain name, you can assign a self-signed server certificate to your ALB to use HTTPS for development and testing purposes.

    To secure the connection to your ALB, add an HTTPS listener for the inbound traffic port (typically 443) and attach a TLS/SSL server certificate (X.509 certificate). To generate certificates, use AWS Certificate Manager or third-party providers such as Let’s Encrypt. For a walkthrough on how to set up an HTTPS listener through the console or through .ebextensions configuration files, see Configuring your Elastic Beanstalk environment’s load balancer to terminate HTTPS.
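
    To show the underlying API call that the console and .ebextensions approaches configure for you, here is a hedged boto3 sketch that adds an HTTPS listener to an ALB. The load balancer, certificate, and target group ARNs are placeholders; in an Elastic Beanstalk environment you would normally let Elastic Beanstalk manage the listener through its own configuration rather than calling the API directly.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed Region

    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                        "loadbalancer/app/my-eb-alb/0123456789abcdef",          # placeholder ARN
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn":
            "arn:aws:acm:eu-west-1:123456789012:certificate/placeholder"}],     # placeholder ACM certificate
        SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",                         # a current TLS security policy
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:"
                              "123456789012:targetgroup/my-tg/0123456789abcdef"  # placeholder ARN
        }])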

    Securing the connection from the ALB to the EC2 instances

    While securing the connection between clients and the ALB is enough for most applications, in some cases complete end-to-end encryption may be required, for example to comply with (external) regulations. To secure the connection from your ALB to your application running on an EC2 instance, you must use the .ebextensions configuration files to modify the software running on the instance. You then need to allow the HTTPS traffic to pass through from the ALB to your EC2 instance by allowing inbound traffic on port 443 on the instance’s security group. For a Ruby-specific example, see Terminating HTTPS on EC2 instances running Ruby.

    For a complete end-to-end encryption walkthrough, see How can I configure HTTPS for my Elastic Beanstalk environment?

    Securing the RDS connection

    To securely connect from your application to your RDS database, you can use SSL or TLS to encrypt the connection. You will need to download an RDS root certificate and require your application to use this certificate when connecting to the RDS instance, to verify the RDS server certificate. For more information on how to download and use the root certificate to set up a secure RDS connection, see the Using SSL with a MySQL DB instance documentation page.
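
    As an illustration of the pattern, here is a minimal Python sketch using the third-party PyMySQL library (the walkthrough’s application is Ruby, so treat this purely as an example); the endpoint, credentials, database name, and certificate path are placeholders.

    import pymysql  # third-party library, shown for illustration only

    # Connect to the RDS MySQL instance and verify the server certificate
    # against the downloaded RDS root certificate bundle.
    connection = pymysql.connect(
        host="mydb.abcdefghij.eu-west-1.rds.amazonaws.com",  # placeholder endpoint
        user="admin",
        password="REPLACE_ME",                               # placeholder credentials
        database="mydatabase",                               # placeholder database name
        ssl={"ca": "/path/to/rds-root-certificate.pem"})     # path to the downloaded root certificate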

    This post has shown you how to align your application with some of the security best practices of the Well-Architected Framework. After completing these steps, your architecture includes four key modifications to improve security:

    1. The EC2 and RDS instances are deployed in a private subnet.
    2. The logging and source code S3 bucket is encrypted.
    3. An encrypted RDS instance is attached.
    4. Data in transit is encrypted by using the HTTPS protocol.

    Conclusion

    In this post, we’ve covered the additional configuration you should be aware of to harden the security posture of your Elastic Beanstalk applications, aligning to the Security pillar of the Well-Architected Framework. The final setup you created uses a VPC and private subnets to allow internet access only to resources that require it, and provides encryption at rest and in transit using AWS Cloud Security services and secure protocols. The Well-Architected Framework describes additional concepts, design principles, and architectural best practices for designing and running workloads in the cloud. To learn more, see AWS Well-Architected.

     
    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security news? Follow us on Twitter.

    Using CloudTrail to identify unexpected behaviors in individual workloads

    =======================

    In this post, we describe a practical approach that you can use to detect anomalous behaviors within Amazon Web Services (AWS) cloud workloads by using behavioral analysis techniques that can augment existing threat detection solutions. Anomaly detection is an advanced threat detection technique that should be considered when a mature security baseline, as described in the security pillar of the AWS Well-Architected Framework, is in place.

    Why you should consider behavior-based detection in the cloud

    Traditionally, threat detection solutions focus on the endpoint and the network and analyze log events for known indicators of attack and indicators of compromise. Other forms of threat detection focus on the user and data, using products such as data loss prevention and user and endpoint behavior analytics to detect suspicious user behavior at the data layer. Both solution types analyze operating system, application-level, and network logs and focus on the detection of known tactics, techniques, and procedures, but the cloud control plane and other cloud native log sources are outside the scope of traditional threat detection solutions.

    Being able to detect malicious behavior in your environment is necessary to stay secure in the cloud. This includes the detection of events when cloud services might have been misused. The challenge is that related activities are logged on a control plane level and don’t leave any traces in log sources that are traditionally analyzed for threat detection. For example, unwanted data movements between cloud services or cloud accounts use the cloud backplane for data transfers and don’t necessarily touch any endpoint or network gateway. Therefore, related events only appear within cloud native logs such as AWS CloudTrail or AWS Config and not in network or operating system logs.
     

    Figure 1: Solution architecture example

    In the simplified example shown in Figure 1, only data streams that pass from the cloud to the firewall and then to AWS services are visible to the endpoint (an Amazon EC2 instance) or the gateway security solution.

    Data streams that pass through serverless solutions and activities of cloud native services are only visible in cloud native logs.

    Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect AWS accounts, and analyzes not only the network flow logs but also the cloud control plane. GuardDuty uses threat intelligence coupled with machine learning and behavior models to detect threats such as account compromise and unusual data access or communications, and should be activated in each cloud account.

    But not all unwanted behavior follows known attack patterns. Unwanted behaviors can also include normal activity inside a cloud environment that is different from the intended behavior of a particular workload. Each activity or log entry by itself might not look malicious, but a series of events can reveal possible malicious intent when compared to the individual context of the application. Unlike a firewall or antivirus log, CloudTrail contains no inherently bad events, so the challenge is to detect threats based on noncompliant behaviors in the context of the application use case, not on known threat vectors.

    Anomaly detection is playing an increasingly important role in defense strategies because of the constantly evolving attack and obfuscation techniques that make it hard to detect threats based on known tactics, techniques, and procedures.

    What does unwanted behavior look like?

    One approach to identifying key events that are related to unwanted behaviors is to identify a set of anomaly-related questions around common cloud activities that consider the workload context. Depending on the workload type, unwanted cloud API events and related questions could look like the following:

    Event: An EC2 instance was launched. 
    Question: Was an unexpected user or role used or was the EC2 instance launched outside the pipeline?

    Event: A user or role performs many API list and describe events within a short timeframe. 
    Question: Does the application normally generate list and describe API calls in production? If not, this could be reconnaissance activity performed by an intruder.

    Event: A user or role creates and shares an Amazon Elastic Block Store (Amazon EBS) snapshot with another account. 
    Question: Is the snapshot sharing event expected? If not, it could be an attempt to exfiltrate data.

    Event: Many failed API calls are detected in CloudTrail. 
    Question: Are these failed calls around sensitive services or information? If yes, an unauthorized user could be exploring the environment.

    Event: Many ListBucket events are detected for a sensitive Amazon Simple Storage Service (Amazon S3) bucket. 
    Question: Are these events unexpected and performed by an unexpected identity? If yes, an unauthorized user performing an S3 bucket enumeration might indicate a reconnaissance activity.

    After a set of questions has been identified, they can be converted into application specific threat detection use cases, which can be applied to sensitive production environments. This is a useful strategy because these environments typically have a predictable usage pattern. The predictable patterns reduce the chance of false positives, making it worth the effort of developing use cases for monitoring anomalies. Threat detection use cases can be identified within CloudTrail logs using security information and event management (SIEM) tools or Amazon CloudWatch rules.

    Detecting anomalies in CloudTrail with CloudWatch

    Activities within your AWS account can be recorded with CloudTrail, which makes it the ideal service not only for deeper investigations into past cloud activities but also to detect unwanted behaviors in near real time. CloudTrail sends logs to an S3 bucket and can forward events to CloudWatch. Using CloudWatch, you can perform searches across all CloudTrail events and define CloudWatch alarms for automatic notifications.

    You can create alerts for individual CloudTrail events that you consider an anomaly by creating CloudWatch filters and alarms. A filter defines the events that you want to monitor and an alarm defines the threshold when you want to be notified.

    To create a filter for the preceding S3 bucket enumeration example, you would select the CloudTrail log group, and then select Metric Filters and create a new metric filter, as shown in Figure 2.
     

    Figure 2: Create CloudWatch metric filter

    Excluding the userAgent AWS Internal excludes S3 access activities performed by other AWS services, such as AWS Access Analyzer or Amazon Macie, which can be considered normal behavior.
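
    The same filter can also be created programmatically. The following is a minimal boto3 sketch under assumed names: the log group, bucket, metric, and namespace names are placeholders, and the filter pattern mirrors the console example.

    import boto3

    logs = boto3.client("logs")

    logs.put_metric_filter(
        logGroupName="CloudTrail/DefaultLogGroup",       # placeholder: your CloudTrail log group
        filterName="SensitiveBucketEnumeration",
        filterPattern='{ ($.eventName = "ListBucket") && '
                      '($.requestParameters.bucketName = "my-sensitive-bucket") && '
                      '($.userAgent != "AWS Internal") }',
        metricTransformations=[{
            "metricName": "SensitiveBucketListEvents",
            "metricNamespace": "AnomalyDetection",       # placeholder namespace for anomaly monitoring
            "metricValue": "1",
        }])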

    Save this metric filter in a new namespace that you use for all of your anomaly detection monitoring. After you have created the filter, create a new CloudWatch alarm based on your filter. Depending on your filter and alarm thresholds, you will receive CloudWatch alarm notifications through an Amazon Simple Notification Service (Amazon SNS) topic, and you have the opportunity to automatically launch other actions that can perform incident response activities.

    After an alert is raised, you can use the same filter pattern to search for the relevant events in CloudWatch. The CloudTrail events will provide more information about the S3 ListBucket events, such as the IP address (sourceIPAddress), the identity that performed the action (userIdentity), and whether the action was performed through the AWS Management Console or the AWS Command Line Interface (AWS CLI) (userAgent = aws-internal or aws-cli). Figure 3 that follows is an example of a CloudTrail log.
     

    Figure 3: CloudTrail example log

    Detecting anomalies using traps

    Another simple but effective technique to detect intruders based on unwanted behaviors is to use decoy services such as canaries or honey pots. Honey pots are designed to provide information about the behavior of attackers by presenting them with fake production environments that they can explore, such as hosts within a subnet or data stores such as databases or storage services with dummy data. Canaries are identities or access tokens within honey pot environments that look like privileged identities. Honey pots and canaries both appear attractive to attackers due to the names that are used for users, databases, or host names, but don’t expose the organization to risk if compromised.

    Using CloudWatch alarms, you can monitor CloudTrail for events that indicate that attackers have started to explore the honey pot or tried to laterally move using the canary access token. By acting like an attacker yourself, you can generate test events within CloudTrail that will help you to identify the event details—such as event sources, event names, and parameters—that you want to monitor. Here are some examples of CloudTrail events you can monitor for different kinds of traps.

    Trap | Event source | Event name | Example instance or user name
    Login attempt using a canary identity | signin.amazonaws.com | ConsoleLogin | Backup_Admin
    Assume role attempt using a canary role | sts.amazonaws.com | AssumeRole | DevOps_role
    Exploration of a honey pot database | dynamodb.amazonaws.com | ListTable | CustomerAccounts
    Exploration of a honey pot storage service | s3.amazonaws.com | GetObject | PasswordBackup

    Traps are typically deployed in production environments where access and use patterns are predictable and strictly controlled. They’re a cost-effective and easy-to-implement solution that can provide alarms with a high degree of certainty. Traps also offer a good chance to catch even the most sophisticated threat actors, especially when they use highly automated attacks.
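
    Monitoring for a trap event can reuse the metric filter approach from the previous section. Here is a minimal boto3 sketch for the canary console login from the table above, with placeholder log group and metric names:

    import boto3

    logs = boto3.client("logs")

    # Alert on any console login by the canary identity from the table above.
    logs.put_metric_filter(
        logGroupName="CloudTrail/DefaultLogGroup",  # placeholder: your CloudTrail log group
        filterName="CanaryConsoleLogin",
        filterPattern='{ ($.eventName = "ConsoleLogin") && '
                      '($.userIdentity.userName = "Backup_Admin") }',
        metricTransformations=[{
            "metricName": "CanaryLoginAttempts",
            "metricNamespace": "AnomalyDetection",  # placeholder namespace
            "metricValue": "1",
        }])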

    Detecting statistical anomalies

    AWS CloudTrail Insights is a feature of CloudTrail that can be used to identify unusual operational activity in your AWS accounts such as spikes in resource provisioning, bursts of AWS Identity and Access Management (IAM) activity, or gaps in periodic maintenance activity.

    CloudTrail Insights can provide primary indicators for noncompliant behaviors by establishing a baseline for normal behavior and then generating Insights events when it detects unusual patterns. Primary indicators are events that initiate an investigation.

    But even when statistical changes haven’t reached alert thresholds and no issue is raised, statistical insights can be used as a supporting secondary indicator during investigations to better understand the context of an incident. Even minor changes in specific API calls around sensitive data can provide valuable information after an alert from another solution such as GuardDuty, or from one of the previously described anomaly detection techniques.

    Figure 4 that follows is an example of an Insights chart showing API calls over time.
     

    Figure 4: CloudTrail Insights example chart

    Conclusion

    In this post I described the importance of monitoring sensitive workloads for noncompliant or unwanted behaviors to complement existing security solutions. Anomaly detection in the cloud monitors cloud service activities on the control plane and checks to see if the behavior is expected in the context of each workload. The effort to set up and support the tools described in this blog post leads to an affordable, practical, and powerful mechanism for the detection of sophisticated threat actors in the cloud. To learn more about how you can analyze API activities in the cloud, see Analyzing AWS CloudTrail in Amazon CloudWatch in the AWS Management & Governance Blog.

    If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

    Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

    AWS attained MTCS Level 3 certification under the new SS584:2020 standard

    =======================

    We’re excited to announce the completion of the Multi-Tier Cloud Security (MTCS) Level 3 certification under the new SS584:2020 standard in November 2021 for three Amazon Web Services (AWS) Regions: Singapore, Korea, and the United States, excluding AWS GovCloud (US) Regions. The new standard, released in October 2020, includes more stringent controls for greater assurance as compared to the prior version, SS584:2015, and a new CSP Self-Disclosure Form to provide to cloud service customers (CSCs) for transparency. With the MTCS Level 3 certification, customers can be assured that AWS security processes meet the stringent security controls set forth by the new MTCS SS584:2020 standard for hosting their sensitive workloads.

    AWS was the first cloud service provider (CSP) to attain the MTCS Level 3 certification for Singapore, in 2014, and is now one of the first few CSPs certified under the new SS584:2020 Level 3 standard. The services in scope have increased from 130 to 145, about a 10% increase since the last audit (September 2020).

    The following services are newly added as in scope:

    1. Amazon Augmented AI (Amazon A2I)
    2. Amazon CloudWatch SDK Metrics for Enterprise Support
    3. Amazon Detective
    4. Amazon FinSpace
    5. Amazon Kendra
    6. Amazon Keyspaces (for Apache Cassandra)
    7. Amazon Timestream
    8. AWS App Mesh
    9. AWS Audit Manager
    10. AWS Cloud Map
    11. AWS Device Farm
    12. AWS Glue DataBrew
    13. AWS Ground Station
    14. AWS Personal Health Dashboard

    MTCS was the world’s first cloud security standard to specify a management system for cloud security that covers multiple tiers, and it can be applied by CSPs to meet differing cloud user needs for data sensitivity and business criticality. An intent of MTCS is for certified CSPs to be able to better specify the levels of security they can offer their users. AWS achieved this through third-party certification and fulfillment of the self-disclosure requirement for CSPs that covers service-oriented information normally captured in service level agreements. The MTCS framework establishes that the different levels of security help local businesses to pick the right CSP, and use of MTCS is mandated by the Singapore government as a requirement for public sector agencies and regulated organizations.

    MTCS has three levels of security, Level 1 being the base and Level 3 the most stringent:

  • Level 1 was designed for non–business critical data and systems with basic security controls, to counter certain risks and threats targeting low-impact information systems (for example, a website that hosts public information).
  • Level 2 addresses the needs of organizations that run their business-critical data and systems in public or third-party cloud systems (for example, confidential business data and email).
  • Level 3 was designed for regulated organizations with specific and more stringent security requirements. Industry-specific regulations can be applied in addition to the baseline controls, to help supplement and address security risks and threats in high-impact information systems (for example, highly confidential business data, financial records, and medical records).

    Benefits of MTCS Level 3 certification

    AWS’s certification enables Singapore customers in regulated industries with the strictest security requirements to securely host applications and systems with highly sensitive information, ranging from confidential business data to financial and medical records, in a level-3-compliant MTCS environment. With the scope extended beyond Singapore to AWS Regions in Korea and the United States, the certification provides an alternative for Singapore government agencies to leverage AWS services that haven’t yet launched locally, and also supports resiliency and recovery use cases.

    Financial Services Industry (FSI) customers in Korea are able to accelerate cloud adoption with MTCS controls that cover relevant regulations (the Financial Security Institute’s Guideline on Use of Cloud Computing Services in the Financial Industry, and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)).

    With increasing cloud adoption across different industries, MTCS certification has the potential to provide assurance to customers globally. Please reach out to your AWS representative if you have any services or Regions you would like to see in scope for the next MTCS audit.

    You can now download the latest MTCS certificates and the MTCS Self-Disclosure Form in AWS Artifact.

    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

    How to automate AWS Managed Microsoft AD scaling based on utilization metrics

    =======================

    AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) provides a fully managed service for Microsoft Active Directory (AD) in the AWS Cloud. When you create your directory, AWS deploys two domain controllers in separate Availability Zones that are exclusively yours, for high availability. For use cases requiring even higher resilience and performance, in a specific Region or during specific hours, AWS Managed Microsoft AD allows you to scale by deploying additional domain controllers to meet your needs. These domain controllers can help load-balance, increase overall performance, or simply provide additional nodes to protect against temporary availability issues. AWS Managed Microsoft AD allows you to define the correct number of domain controllers for your directory based on your individual use case.

    This post will walk you through how to automate scaling in AWS Managed Microsoft AD using utilization metrics from your directory. You’ll do this using Amazon CloudWatch Alarms, SNS notifications, and a Lambda function to increase the number of domain controllers in your directory based on utilization peaks.

    Simplified directory scaling

    AWS Managed Microsoft AD has now simplified this directory scaling process by integrating with Amazon CloudWatch metrics. This new integration enables you to:

    1. Analyze your directory to identify expected average and peak directory utilization.
    2. Scale your directory based on utilization data to adequately address the expected load.
    3. Automate the addition of domain controllers to handle unexpected load.

    Integration is available both for domain controller utilization metrics, such as CPU, memory, disk, and network, and for AD-specific metrics, such as LDAP searches, binds, DNS queries, and directory reads/writes. Analyzing this data over time to identify expected average and peak utilization on your directory can help you deploy additional domain controllers in Regions that need them. Once you’ve established this utilization baseline, you can deploy additional domain controllers to service this load, and configure alarms for anything exceeding this baseline.

    Solution overview

    In this example, our AWS Managed Microsoft AD has the default two domain controllers; once your utilization threshold is reached, you’ll add one additional domain controller (domain controller 3 in the diagram) to cover this additional load. 

    Figure 1: Solution overview

    To create a CloudWatch Alarm with SNS topic notifications
    1. In the AWS Console, navigate to CloudWatch.
    2. Choose Metrics to see the Browse Metrics panel.
    3. Choose the Directory Service namespace, then choose AWS Managed Microsoft AD.
    4. In the Directory ID column, select your directory and check search for this only.
    5. From the Metric Category column, select Processor and check add to search. This view will show the processor utilization for your directory.

      Figure 2. Processor utilization metrics

    6. To see the average utilization across all domain controllers, choose Add Math, then All Functions, then AVG to create a metric math expression for average CPU utilization across all domain controllers.

      Figure 3. Adding a math function to compute average

    7. Next, choose the Graphed Metrics tab in the CloudWatch metrics console, select the newly created expression, then select the bell icon from the Actions column to create a CloudWatch alarm based on this metric.

      Figure 4. Create a CloudWatch Alarm using Metric Math Expression

    8. Configure the threshold alarm to trigger when CPU utilization exceeds 70%. In the Metrics section, under Period, choose 1 Hour. In the Conditions section, under Threshold Type, choose Static. Under Define the alarm condition, choose Greater than threshold. Under Define the threshold value, enter 70. See Figure 5 for an image of how the alarm parameters should look on your screen. Choose Next to configure actions.

      Figure 5. Configure the alarm parameters

    9. On the Configure actions screen, configure the actions to send an email notification when the alarm state is triggered, as shown in Figure 6. In the Notification section, set Alarm state trigger to In alarm. Set Select an SNS topic to Create topic. Fill in the name of the alarm in the Create a new topic field, and add the email address where notifications should be sent to the Email endpoints that will receive notification field. An email address is required to create the SNS topic, and you should use an email address that’s accessible by your operations team. This SNS topic will be used to trigger the Lambda automation described in a later section. Note: Make a note of the SNS topic name you chose; you will use it later when creating the Lambda function in the To create an AWS Lambda function to automate scale out procedure below.

      Figure 6. Create SNS topic and email notification

    10. In the Alarm name field, provide a name for the alarm. You can optionally also add an Alarm description. Choose Next.
    11. Review your configuration, and choose Create alarm to create the alarm.

    Once you’ve completed these steps, you have an alarm that fires when domain controller CPU utilization exceeds an average of 70% across both domain controllers. The alarm publishes to an SNS topic when your directory is experiencing a heavy load, which starts the Lambda automation and sends an informational email notification. In the next section, we’ll configure an AWS Lambda function to automate the addition of a domain controller based on this SNS topic.

    For additional details on CloudWatch Alarms, please see the Amazon CloudWatch documentation.

    To create an AWS Lambda function to automate scale out

    The sample Lambda function shown below checks the number of domain controllers in this Region, and increases that by adding one additional domain controller. This procedure describes how to configure the IAM role required for this Lambda function, then how to deploy the Lambda function to execute when the alarm is triggered to automatically add a domain controller when your load exceeds your typical usage baseline.

    Note: For additional details on Lambda creation, please see the AWS Lambda documentation.

    To automate scale-out using AWS Lambda

    1. In the AWS Console, navigate to IAM and choose Policies, then choose Create Policy.
    2. Choose the JSON tab, and create a new IAM policy using the JSON provided below. For more details on this configuration, see the AWS Directory Service documentation.

      Sample policy

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "ds:DescribeDomainControllers",
              "ds:UpdateNumberOfDomainControllers",
              "ec2:DescribeSubnets",
              "ec2:DescribeVpcs",
              "ec2:CreateNetworkInterface",
              "ec2:DescribeNetworkInterfaces",
              "ec2:DeleteNetworkInterface"
            ],
            "Resource": "*"
          }
        ]
      }

    3. Choose Next:Tags to add tags (optional) before choosing Next:Review.
    4. On the Create Policy screen, provide a name in the Name field. You can optionally also add a description. Choose Create policy to complete creating the new policy.

      Note: Make a note of the policy name you chose; you will use it later when updating the execution role for the Lambda function.

      Figure 7. Provide a name to create the IAM policy

    5. In the AWS Console, navigate to Lambda and choose Create Function.
    6. On the Create Function screen, select Author from Scratch and provide a Name, then choose Create Function.

      Figure 8. Create a Lambda function

    7. Once the function is created, on the Lambda function’s page, choose the Configuration tab, then choose Permissions from the sidebar and choose the execution role name linked under Role name. This will open the IAM console in another tab, preloaded to your Lambda execution role.

      Figure 9: Select the Execution Role

    8. On the execution role screen, choose Attach policies and select the IAM policy you just created (for example, DirectoryService-DCNumberUpdate). On the Attach Permissions screen, choose Attach policy to complete updating the execution role. Once you have completed this step, you can close this tab and return to the previous browser tab.

      Figure 10. Select and attach the IAM policy

    9. On the Lambda function screen, choose the Configuration tab, then choose Triggers from the sidebar.
    10. On the Add Trigger screen, choose the pulldown under Trigger configuration and select SNS. In the SNS topic box, select the SNS topic you created in Step 9 of the To create a CloudWatch Alarm with SNS topic notifications procedure above. Then choose Add to complete the trigger configuration.
    11. On the Lambda function screen, choose the Configuration tab, then choose Environment variables from the sidebar.
    12. On the Environment variables card, click Edit.
    13. On the Edit environment variables screen, choose Add environment variable and use the Key DIRECTORY_ID, with the Value set to the directory ID of your AWS Managed Microsoft AD.

      Figure 11. The “Edit environment variables” screen

    14. On the Lambda function screen, choose the Code tab to open the in-browser code editor inside the Code source card. Paste in the sample Lambda function code given below to complete the implementation.

      Figure 12. Paste sample code to complete the Lambda function setup

    Sample Lambda function code

    The sample Lambda function given below automates adding another domain controller to your directory. When your CloudWatch alarm triggers, you will receive a notification email, and an additional domain controller will be deployed to provide the added capacity to support the increase in directory usage.

    Note: The example code contains a variable for the maximum number of domain controllers (maxDcNum) to prevent you from overprovisioning in the event of a misconfiguration. This value is set to 3 for this blog post’s example and can be increased to suit your use case.

    import json
    import os
    import boto3

    maxDcNum = 3   # maximum number of domain controllers (see the note above)
    minDcNum = 2   # AWS Managed Microsoft AD always deploys at least two
    region = "us-east-1"               # update to your directory's Region
    dsId = os.environ["DIRECTORY_ID"]  # directory ID, read from the environment variable set in step 13

    ds = boto3.client('ds', region_name=region)

    def lambda_handler(event, context):
        # Get the current number of domain controllers
        dcs = ds.describe_domain_controllers(DirectoryId=dsId)
        DomainControllers = dcs["DomainControllers"]
        DCcount = len(DomainControllers)
        print(">>> Current number of DCs: " + str(DCcount))
        # Increase the number of DCs if we are below the maximum
        if DCcount < maxDcNum:
            NewDCnumber = DCcount + 1
            ds.update_number_of_domain_controllers(DirectoryId=dsId, DesiredNumber=NewDCnumber)
            return {
                'statusCode': 200,
                'body': json.dumps("New DC number will be " + str(NewDCnumber))
            }
        else:
            return {
                'statusCode': 200,
                'body': json.dumps("Max number of DCs reached. The number of DCs is " + str(DCcount))
            }

    Note: When testing this Lambda function, remember that this will increase the number of domain controllers for your directory in that Region. If the additional domain controller is not needed, please reduce the count after the test to avoid costs for an additional domain controller. The same principles used in this article to automate the addition of domain controllers can be applied to automate the reduction of domain controllers and you should consider automating the reduction to optimize for resilience, performance and cost.

    Conclusion

    In this post, you’ve implemented CloudWatch alarms based on domain controller utilization thresholds, and automation that increases the number of domain controllers using an AWS Lambda function. This solution helps to cost-effectively improve the resilience and performance of your directory by scaling it based on historical load patterns.

    To learn more about using AWS Managed Microsoft AD, visit the AWS Directory Service documentation. For general information and pricing, see the AWS Directory Service home page. If you have comments about this blog post, submit a comment in the Comments section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum or contact AWS Support.

    Want more AWS Security news? Follow us on Twitter.

    AWS Security Profiles: Jenny Brinkley, Director, AWS Security

    =======================


    In the week leading up to AWS re:Invent 2021, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


    How long have you been at AWS, and what do you do in your current role?

    I’ve been at AWS for 5½ years. I get to focus on the future of security and compliance. It gives me a lot of space to experiment and try new things, which is how I like to operate.

    How did you get started in AWS Security?

    I joined AWS through a startup acquisition, and I actually didn’t think I was going to go with the acquisition. I thought AWS would be way too big and move way too slow. I love being in environments where I get to move fast and be entrepreneurial. I started on the product side. I was able to learn what it takes to build and ship products at the scale of AWS – which is on another level and mind-blowing.

    Then, like others at AWS, I was able to reinvent myself, find different passions, and experiment with new things. One of those areas for me was compliance. I started to get perspective on how that space was being defined by regulatory activity for the cloud, and it started opening my mind in different ways.

    I started thinking, how do you make compliance easier for customers? How do you work with regulated entities to understand how to audit, and to understand the function of how the cloud operates? From there, my career has been about changing how to think about product, about how to make security easier. Layering in this compliance aspect, too, means I get to play in all these different worlds, work with internal and external customers, and work to simplify security, while also understanding where and how compliance fits in, without slowing down innovation.

    How do you explain your job to non-tech friends?

    I explain my work as removing the fear around security. You go see images of people in hoodies, with darkened faces, and binary code running behind them, and my job is to break that perception and walk in the light – yes, that’s my nod to Olivia Pope in Scandal. I love the idea of that gladiator mentality. You’re going in and solving the big problems, but you’re also creating more visibility and transparency around how security operates. And you’re doing this without making anyone afraid that they’re being watched or monitored, and without holding back innovation. My job is to provide that transparency and clarity, and give people prescriptive guidance on how to operate securely on AWS.

    What are you currently working on that you’re excited about?

    So much! That’s what I really love about my job – I get to play in a lot of spaces, and the context switching is something that really fuels me. One of the top projects I’m working on is something we just released in response to an ask from the White House, which I feel really privileged to work on. We released a new Cybersecurity Awareness training which is now available to everyone in the world. You can access this training right now, and you can share it with your grandparents or implement it in your corporation or small business. We were able to take a training product we built for all Amazon employees, and then externalize it. The size and scope of this are something I’m really excited about. Making security easier for everybody is a big mission for us.

    Another big area is up-skilling. You hear a lot about security jobs being the future, so we’re building everything from apprenticeships to new learning paths for anyone interested in security. We’re thinking about how we can build quick learning modules for people to listen to on the go. That’s something I get really excited about in this job – creating opportunities for people to understand that security jobs and opportunities are vast. If you’re curious and want to learn new things, AWS is endless.

    You’re presenting at re:Invent this year – can you give readers a sneak peek at what you’re covering?

    I am partnering with Eric Brandwine, AWS VP/Distinguished Engineer, for a session called Introverts and extroverts collide: Build an inclusive workforce (SEC204). Eric and I are night and day in terms of how we work. In our talk we’ll touch on some of the challenges we had when we first started working together, but also how we found value in our different approaches.

    We’ll be discussing how he solves problems with technology and how I solve problems through people, and thinking about how that empathetic layer resonates between the two perspectives. Not every problem needs a technology solution, and not every problem needs a people-focused solution. But humans are behind the impact of either.

    We’ll give prescriptive guidance to customers on how they should think about their security culture as it relates to people and as it relates to technology. We’ll talk about how those two worlds can blend together in a way that empowers an entire organization to prioritize security rather than fear it. We want to help bridge the gaps between the technologists and the empathetic individuals who think about how the technology lands in use cases across a business.

    From your perspective, what’s the most important thing leaders can do to create an inclusive work environment?

    Listening. Sitting back, getting the feedback, being vulnerable, asking the questions. So much of what we need to do now is practice that listening skill, really understand the motivations of our teams, and then try to create these safe working environments where people feel comfortable sharing their perspectives. It’s not that you’re going to act on everything everyone’s talking about, but at least you get diverse perspectives and points of view to help create an inclusive work environment that makes everyone want to show up, support each other, and do the best work possible.

    What’s your favorite Leadership Principle at Amazon and why?

    I have two. One is Learn and Be Curious because that is how I like to operate. I think, “what if…” or “why can’t we…”. Then Think Big pairs with “why can’t we…” The culture within AWS really supports that. On a daily basis, we can flip the script on how we think about our jobs and how we position the business.

    If you’re entrepreneurial and like to create, this place is like a magic playground. Some people look at my job and they’re so confused with all the different things I get to do – but it goes back to that context switching. I believe that Learn and Be Curious and Think Big fit in that realm for me–I feel like I can be anything, I can do anything. I also had parents who told me as a kid that I could do anything and be anything, so I think that’s just who I am. Those two leadership principles help me to produce and do my best work.

    What’s the thing you’re most proud of in your career?

    That’s hard. It’s a couple of things. I’ve had a lot of incredible opportunities. One was being involved in a startup. We raised the money quickly, we worked with incredible customers, and we solved really challenging business issues. The fact that I was able to bring that here to AWS, in a way that now hundreds of thousands of people get to see the kind of work we’re able to produce, is pretty cool.

    But honestly, working with some of our new hires who are just getting into the workforce–especially with our diverse candidates–I’m at a place in my career where I want to create opportunities for others. I’m working to create safe spaces for people to operate and do their best work and really break down barriers for people who might not otherwise get those opportunities. That’s what I’m most excited about for the future, and also the most proud about–giving people opportunities to work in careers they never thought were available to them. I love that, and I get to do it daily.

    If you had to pick any other job, what would you want to do?

    Sports agent. I think I’d be so good at it. I would love to go work with young athletes, especially with the new NCAA ruling that college athletes can get paid for the use of their likeness. I would love to help them develop really interesting business plans.

    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security news? Follow us on Twitter.

    AWS Security Profiles: Megan O’Neil, Sr. Security Solutions Architect

    =======================


    In the week leading up to AWS re:Invent 2021, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


    How long have you been at Amazon Web Services (AWS), and what do you do in your current role?

    I’ve been at AWS nearly 4 years, and in IT security over 15 years. I’m a solutions architect with a specialty in security. I work with commercial customers in North America, helping them solve security problems and build out secure foundations for their AWS workloads.

    How did you get started in security?

    I took part in a Boeing internship for three summers starting my junior year of high school. This internship gave me the opportunity to work with mechanical engineers at Boeing. The specific team I worked with was made up of engineers responsible for building digital tools and robots for the 767-400 line at the Everett plant in Washington state. The purpose of these custom tools and robots was to help build the planes more efficiently and accurately. I had a lot of fun and learned a lot from my time working with them. I asked the group for career advice during lunch one day, and they all pointed me towards computer science (CS) instead of mechanical engineering. Because of their strong support for CS, I took the first course, Intro to Computer Science, and was excited to find that something I previously thought was intimidating was actually approachable and a subject I really enjoyed.

    During my sophomore year there was a new elective class offered called Digital Security, which piqued my interest and influenced my senior project. I built (coded) an intrusion detection program that identified nefarious network traffic. I also worked on campus during college in the sound services department and participated in the Dance Ensemble Program, where I met the IT manager for a local hospital in Washington state, Good Samaritan Hospital in Puyallup. He was helping mix music at the studio I worked in. After showing him my senior project, he told me about a job opening for a network security specialist at the hospital. No one else had applied for the role. I then interviewed with the team, which was made up of only three engineers including the manager. They were responsible for all the back-end systems, including the hospital information system, patient telemetry and clinic systems, the hospital network, and more. The group of people I worked with at the hospital is still very special to me; we are all still friends.

    How do you explain your job to non-tech friends?

    I’m in tech, and I help companies protect their websites and their customers’ data.

    What are you currently working on that you’re excited about?

    I’m very excited about re:Invent. It’s the 10th anniversary, we’re back in person, and I’ve got quite a few sessions I’m delivering.

    Speaking of AWS re:Invent 2021 – can you give readers a sneak peek at what you’re covering?

    The first is a session called Use AWS to improve your security posture against ransomware (SEC308), which I’m delivering with Merritt Baer, Principal in the Office of the CISO. We’re discussing what AWS services and features you can use to help you protect your systems from ransomware.

    The second is a chalk talk, Automating and evidencing key compliance security controls (STP211-R1 and STP211-R2), I’m delivering with Kristin Haught, Principal Security TPM, and we’re discussing strategies for automating, monitoring, and evidencing common controls required for multiple compliance standards.

    The third session is a builder session called Grant least privilege temporary access securely at scale (WPS304). We’ll use AWS Secrets Manager, AWS Identity and Access Management (IAM), and the isolated compute functionality provided by AWS Nitro Enclaves to allow system administrators to request and retrieve narrowly scoped and limited-time access.

    The fourth session is another builder session called Detecting security threats with Amazon GuardDuty (SEC213-R1 and SEC213-R2). It includes several simulated scenarios, representing just a small sample of the threats that GuardDuty can detect. We will review how to view and analyze GuardDuty findings, how to send alerts based on the findings, and, finally, how to remediate findings.

    From your perspective, what’s the most important thing to know about ransomware?

    Whenever we see a security event continue to make news, it’s a call to action and an opportunity for customers to analyze their security programs, including operations and controls. There’s no silver bullet when it comes to protection from ransomware, but it’s time to level up your security operations and controls. This means minimizing human access, translating security policies into code, building mechanisms and measuring them, streamlining the use of environments and infrastructure, and using advanced data and database service features.

    For example, we still see customers with large numbers of long-lived credentials; it’s time to take inventory and minimize or eliminate them. While there is a small subset of use cases where they may be required, such as on-premises to AWS access, I recommend the following (a short inventory sketch follows this list):

    1. Inventory your long-lived credentials.
    2. Ensure the access is least privilege, with absolutely no wildcard actions or resources.
    3. If the access is interactive, apply multi-factor authentication (MFA).
    4. Ask if you can architect a better option that doesn’t rely on static access keys.
    5. Rotate access keys on a regular, frequent basis.
    6. Enable alerts on login events.

    For more information, check out Ransomware mitigation: Top 5 protections and recovery preparation actions and Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF).

    What’s your favorite Leadership Principle at Amazon and why?

    Learn and Be Curious! I am happiest in my job and personal life when I’m learning new things. I also believe that this principle is a way of life for us technology folks. Learning new technology and finding better ways of implementing technology is our job. My favorite quote/laptop sticker is:

    “I hate programming”

    “I hate programming”

    “I hate programming”

    “IT WORKS!”

    “I love programming.”

    It just makes me laugh because it’s so true. Of course we are only that frustrated when something is very new. It’s like solving a puzzle. When a project comes together, it’s absolutely worth it – the puzzle pieces now fit.

    What’s the best career advice you’ve ever gotten?

    Work with a mentor. This can be casual, by finding projects where you can collaborate with folks who have more experience than you. Or it can be more formal, by asking someone to be your mentor and setting up a regular cadence of meetings with them. I’ve done both. A simple example: by collaborating with Merritt and Kristin on our upcoming re:Invent presentations, I’ve already learned a lot from both of them just through the preparation process and developing the content. Having a mentor by your side can be especially helpful when setting new goals. Sometimes we need someone to push us out of our comfort zone, believe that we can achieve bigger things than we would have thought, and then help devise a plan to achieve those goals. All it takes is someone else believing in us.

    If you had to pick any other job, what would you want to do?

    I’ve always been interested in naturopathic medicine and getting to the root cause of an issue. It’s somewhat similar to my job in that I’m solving puzzles and complex problems, but in technology, instead of the body.
     

    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security news? Follow us on Twitter.

    How to enable secure seamless single sign-on to Amazon EC2 Windows instances with AWS SSO

    =======================

    Today, we’re launching new functionality that simplifies the experience of securely accessing your AWS compute instances running Microsoft Windows. We took on this update to respond to customer feedback around creating a more streamlined experience for administrators and users to more securely access their EC2 Windows instances. The new experience utilizes your existing identity solutions to run and manage your Microsoft Windows workloads on AWS. You can create and administer users in AWS Single Sign-On (AWS SSO) or an AWS SSO supported identity provider (such as Okta, Ping, and OneLogin), and provide one-click single sign-on to your EC2 Windows instances from the AWS Fleet Manager console. You can also use your existing corporate usernames, passwords, and multi-factor authentication devices to securely access your EC2 Windows instances, without having to enter your credentials multiple times.

    Using AWS SSO eliminates the use of shared administrator credentials and the need to configure remote access client software. You can centrally grant and revoke access to your EC2 Windows instances at scale across multiple AWS accounts. For example, if you remove an employee from your AWS SSO integrated identity system, their access to all AWS resources (including EC2 Windows instances) is automatically revoked. Individual user actions can now be viewed in the Amazon EC2 Windows instances event log, making it easier to meet audit and compliance requirements.

    AWS SSO background

    AWS SSO simplifies managing SSO access to AWS accounts and business applications, and it is the central location where you can create or connect your workforce identities in AWS. You can control SSO access and user permissions across all your AWS accounts in AWS Organizations. You can choose to manage access to your AWS accounts, to cloud applications, or both.

    When managing access to AWS accounts, AWS SSO enables you to define and assign roles centrally across your AWS Organizations account using permission sets. Permission sets are role definitions (templates) that AWS SSO uses to create and maintain roles in your AWS Organizations accounts. The permission set defines the session duration and policies for the role. When you assign a permission set to a user or group in a selected AWS account, AWS SSO creates a corresponding role in the target account, and AWS SSO controls access to the role through the AWS SSO user portal.

    This post uses a permission set that manages access to AWS Fleet Manager to deliver one-click access into EC2 instances.

    You will accomplish this in three steps:

    1. Create an AWS SSO permission set (for example, demoFMPermissionSet)
    2. Assign the permission set to an existing AWS SSO group (for example, demoFMGroup)
    3. Log in to the AWS SSO user portal and connect to your EC2 Windows instance via the AWS Fleet Manager console
    Prerequisites

    The prerequisites for this example are that you have:

    1. Configured AWS SSO in your account with provisioned users and groups
    2. An EC2 Windows instance managed by AWS Systems Manager Fleet Manager
    Solution architecture

    The following diagram shows the steps you will follow to configure and use an AWS SSO user identity to login to an EC2 Windows instance. 

    Figure 1: Architecture diagram showing steps implemented in this solution

    How it works

    The AWS SSO permission set creates a role in a target account that gives an authorized user permissions to use AWS Fleet Manager to sign into EC2 Windows instances. When a user chooses the role in the account, the user signs onto the AWS Fleet Manager console and selects the EC2 instance where they want to sign in.

    AWS Fleet Manager creates a local Windows user account and a credential for that user, and then automates their sign-in to the instance.

    To create an AWS SSO permission set

    This procedure creates a permission set that grants assigned users and groups permissions to use AWS Fleet Manager for single sign-on to EC2 instances.

    1. From the AWS SSO console, go to AWS Accounts, select the Permission sets tab, select Create permission set and choose Create a custom permission set.
    2. Name your permission set, and fill out the required fields, making sure to select Create a custom permissions policy at the bottom of the page. See Sample custom permissions policy below for details on the policy.
    3. After creating the custom permissions policy, you can also apply optional tagging. When you are done, review and choose Create to complete creating your custom permission set, as shown in Figure 2.

     

    Figure 2: Reviewing the custom permission set

    Sample custom permissions policy

    This is the sample policy you’ll use; you can download it here.

    This permission policy contains a separate statement ID (Sid) for each service, with the required actions for each.

    On line 84, notice the reference to an AWSSSO-CreateSSOUser document resource. This document is responsible for creating a local Windows account based on the logged-in AWS SSO user, as well as setting or resetting that user’s password for automatic login to the Windows instance.

    On lines 96-98, you will see a new ssm-guiconnect action. This is used to make the secure connection to your EC2 Windows instance, and render the GUI desktop in the Fleet Manager console.
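
    If you prefer to script this step rather than use the console, a minimal boto3 sketch might look like the following. The instance ARN and session duration are placeholders, and the inline policy is deliberately trimmed to illustrate just the two statements called out above; in practice, attach the full downloadable sample policy, and confirm the exact ssm-guiconnect actions against it.

    import json
    import boto3

    sso_admin = boto3.client('sso-admin')
    instance_arn = 'arn:aws:sso:::instance/ssoins-EXAMPLE'  # placeholder AWS SSO instance ARN

    # Trimmed, illustrative policy: the full sample policy has one Sid per service
    trimmed_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SSM",
                "Effect": "Allow",
                "Action": ["ssm:SendCommand"],
                "Resource": ["arn:aws:ssm:*:*:document/AWSSSO-CreateSSOUser"]
            },
            {
                "Sid": "GuiConnect",
                "Effect": "Allow",
                "Action": [
                    "ssm-guiconnect:CancelConnection",
                    "ssm-guiconnect:GetConnection",
                    "ssm-guiconnect:StartConnection"
                ],
                "Resource": ["*"]
            }
        ]
    }

    # Create the permission set, then attach the custom permissions policy inline
    ps = sso_admin.create_permission_set(
        Name='demoFMPermissionSet',
        InstanceArn=instance_arn,
        SessionDuration='PT1H',  # example session duration
    )
    sso_admin.put_inline_policy_to_permission_set(
        InstanceArn=instance_arn,
        PermissionSetArn=ps['PermissionSet']['PermissionSetArn'],
        InlinePolicy=json.dumps(trimmed_policy),
    )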

    To assign your AWS SSO group

    Assign your AWS SSO group to the AWS Fleet Manager permission set in your selected accounts

    In this procedure, we will select two AWS accounts in our AWS organization, and grant our AWS SSO group access to the previously-created permission set that enables sign-in via Fleet manager.

    1. From the AWS SSO console, navigate to AWS accounts and select an account (for example, demoAccount1 and demoAccount2), as shown in Figure 3.
    2. Choose the Assign users button. If you wish, you may also assign access to multiple groups or to users individually.

      Figure 3: Selecting AWS Account to assign users or groups

    3. To enable multiple AWS SSO users to access this feature, choose an AWS SSO group from the Groups tab and then choose the Next button, as shown in Figure 4.

      Figure 4: Assigning group to AWS accounts

    4. Select the permission set you created previously and choose the Next button, as shown in Figure 5.

      Figure 5: Selecting permission set to AWS accounts

    5. Review your choices, and choose Submit to submit your assignments, as shown in Figure 6.

      Figure 6: Reviewing submit assignments to AWS accounts

    AWS SSO will now use the permission set definition to create a role in each selected account, which grants users access to sign in via Fleet Manager. Users gain access to that role by signing into the AWS SSO user portal.
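
    This assignment can also be scripted. The sketch below uses the AWS SSO Admin API, with placeholder ARNs, a placeholder identity store ID for demoFMGroup, and placeholder account IDs for demoAccount1 and demoAccount2:

    import boto3

    sso_admin = boto3.client('sso-admin')

    instance_arn = 'arn:aws:sso:::instance/ssoins-EXAMPLE'  # placeholder
    permission_set_arn = 'arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE'  # placeholder
    group_id = 'EXAMPLE-GROUP-GUID'  # placeholder ID of demoFMGroup in your identity store
    accounts = ['111111111111', '222222222222']  # placeholders for demoAccount1 and demoAccount2

    # AWS SSO creates the corresponding role in each target account
    for account_id in accounts:
        sso_admin.create_account_assignment(
            InstanceArn=instance_arn,
            TargetId=account_id,
            TargetType='AWS_ACCOUNT',
            PermissionSetArn=permission_set_arn,
            PrincipalType='GROUP',
            PrincipalId=group_id,
        )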

    To access Fleet Managed EC2 instances

    1. From the console, navigate to your AWS SSO user portal URL and log in as any AWS SSO user who is a member of the group (for example, demoFMGroup) that you assigned in the previous procedure.
    2. From the AWS SSO user portal page, choose Management console and navigate to the Fleet Manager console where you have your EC2 Windows managed instance, as shown in Figure 7.

      Figure 7: Navigating to the Management console from the user portal

    3. Select a managed Windows instance, then select Instance actions and Connect with Remote Desktop, as shown in Figure 8.

      Figure 8: Connecting with Remote Desktop

    4. Select Single Sign-On and then select Connect, as shown in Figure 9. This automatically logs you in using your AWS SSO credential. If this is the first time connecting to the instance, a new local user will be created.

      Figure 9: Selecting Single Sign-On

      Once connected, you will see your EC2 Windows instance in the All sessions tab, enabling you to have up to four concurrent sessions in a single view, as shown in Figure 10. For a single session view, select the Instance ID tab.

      Figure 10: Selecting expanded desktop view

    5. From the single session tab, we can see that AWS Fleet Manager created a local Windows Server user for the AWS SSO user (demoUser1).

    After creating the local user, AWS Fleet Manager used the credentials it created to sign in to the EC2 Windows server as sso-demoUser1. You can see that sign-in in the Windows Event Viewer, as shown in Figure 11, giving you individual user logging on your EC2 Windows servers. These logs are also available from within the Fleet Manager console.

    Figure 11: Showing AWS SSO username in Amazon EC2 Windows instance event log

    Conclusion

    This post described how to provide a single sign-in experience to Windows EC2 instances using AWS Fleet Manager with AWS Single Sign-On. Doing this allows you to create users in AWS SSO, or to connect any supported identity provider to AWS SSO, and to give users one-click access to their EC2 instances through AWS Fleet Manager.

    This is done by creating an AWS SSO permission set that grants users access to AWS Fleet Manager, then assigning a group from AWS SSO to the permission set in the selected AWS accounts. Users can sign into the AWS SSO user portal, navigate to the AWS Fleet Manager, select their Windows EC2 instance, and land in the Windows user experience without having to enter Windows credentials separately.

    To learn more about AWS SSO, visit the AWS Single Sign-On Documentation. To learn more about Fleet Manager, visit the AWS Systems Manager Fleet Manager Documentation.

    If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Single Sign-On forum.

    Want more AWS Security news? Follow us on Twitter.

    2021 PCI 3DS report now available

    =======================

    We are excited to announce that Amazon Web Services (AWS) has released the latest 2021 PCI 3-D Secure (3DS) attestation to support our customers implementing EMV® 3-D Secure services on AWS. Although AWS doesn’t directly perform the functions of 3DS Server (3DSS), 3DS Directory Server (DS), or 3DS Access Control Server (ACS), AWS customers can host their 3DS environments on AWS, using services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Container Service (Amazon ECS) and Amazon Virtual Private Cloud (Amazon VPC).

    The new AWS PCI 3DS attestation of compliance means customers can now attain their own PCI 3DS compliance for services running on AWS. All AWS Regions in scope for PCI DSS are included in the 3DS attestation. AWS was assessed by Coalfire, an independent Qualified Security Assessor (QSA). AWS compliance reports, including this latest PCI 3DS attestation, are available on demand through AWS Artifact. The 3DS package available in AWS Artifact includes the 3DS Attestation of Compliance (AOC) and a Shared Responsibility Guide.

    To learn more about our PCI program and other compliance and security programs, please visit AWS Compliance Programs.

    We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

     
    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security news? Follow us on Twitter.

    AWS Security Profiles: Merritt Baer, Principal in OCISO

    =======================


    In the week leading up to AWS re:Invent 2021, we’ll share conversations we’ve had with people at AWS who will be presenting, and get a sneak peek at their work.


    How long have you been at Amazon Web Services (AWS), and what do you do in your current role?

    I’m a Principal in the Office of the Chief Information Security Officer (OCISO), and I’ve been at AWS about four years. In the past, I’ve worked in all three branches of the U.S. Government, doing security on behalf of the American people.

    My current role involves both internal- and external-facing security.

    I love having C-level conversations around hard but simple questions about how to prioritize the team’s resources and attention. A lot of my conversations revolve around organizational change, and how to motivate the move to the cloud from a security perspective. Within that, there’s a technical “how”—we might talk about the move to an intelligent multi-account governance structure using AWS Organizations, or the use of appropriate security controls, including remediations like AWS Config Rules and Amazon EventBridge. We might also talk about the ability to do forensics, which in the cloud looks like logging and monitoring with AWS CloudTrail, Amazon CloudWatch, Amazon GuardDuty, and others aggregated in AWS Security Hub.

    I also handle strategic initiatives for our security shop, from operational considerations like how we share threat intelligence internally, to the ways we can better streamline our policy and contract vehicles, to the ways that we can incorporate customer feedback into our products and services. The work I do for AWS’ security gives me the empathy and credibility to talk with our customers—after all, we’re a security organization, running on AWS.

    What drew you to security?

    (Sidebar: it’s a little bit of who I am— I mean, doesn’t everyone rely on polaroid photos? just kidding— kind of :))
     

    I always wanted to matter.

    I was in school post-9/11, and security was an imperative. Meanwhile, I was in Mark Zuckerberg’s undergrad class at Harvard. A lot of the technologies that feel so intimate and foundational—cloud, AI/ML, IoT, and the use of mobile apps, for example—were just gaining traction back then. I loved both emerging tech and security, and I was convinced that they needed to speak to and with one another. I wanted our approach to include considerations around how our systems impact vulnerable people and communities. I became an expert in child pornography law, which continues to be an important area of security definition.

    I am someone who wonders what we’re all doing here, and I got into security because I wanted to help change the world. In the words of Poet Laureate Joy Harjo, “There is no world like the one surfacing.”

    How do you explain your job to non-tech friends?

    I often frame my work relative to what they do, or where we are when we’re chatting. Today, nearly everyone interacts with cloud infrastructure in our everyday lives. If I’m talking to a person who works in finance, I might point to AWS’ role providing IT infrastructure to the global financial system; if we’re walking through a pharmacy I might describe how research and development cycles have accelerated because of high-performance computing (HPC) on AWS.

    What are you currently working on that you’re excited about?

    Right now, I’m helping customer executives who’ve had a tumultuous (different, not necessarily all bad) couple of years. I help them adjust to a new reality in their employee behavior and access needs, like the move to fully remote work. I listen to their challenges in the ability to democratize security knowledge through their organizations, including embedding security in dev teams. And I help them restructure their consumption of AWS, which has been changing in light of the events of the last two years.

    On a strategic level, I have a lot going on … here’s a good sampling: I’ve been championing new work based on customers asking our experts to be more proactive by “snapshotting” metadata about their resources and evaluating that metadata against our well-architected security framework. I work closely with our Trust and Safety team on new projects that both increase automation for high-volume issues and provide more “high touch” and prioritized responses to trusted reporters. I’m also building the business case for security service teams to make their capabilities even more broadly available by extending free tiers and timelines. I’m providing expertise to our private equity folks on a framework for evaluating the maturity of security capabilities of target acquisitions. Finally, I’ve helped lead our efforts to add tighter security controls when AWS teams provide prototyping and co-development work. I live in Miami, Florida, USA, and I also work on building out the local tech ecosystem here!

    I’m also working on some of the ways we can address ransomware. During our interview process, Amazon requests that folks do an hour-long presentation on a topic of their choice. I did mine on ransomware in the cloud, and when I came on board I pointed to that area of need for security solutions. Now we have a ransomware working group I help lead, with efforts underway to help customers through both education and architectural guidance, as well as curated solutions with industries and partners, including healthcare.

    You’re presenting at AWS re:Invent this year—can you give readers a sneak peek at what you’re covering?

    One talk is on cloud-native approaches to ransomware defense, encouraging folks to think innovatively as they mature their IT infrastructure. And a second talk highlights partner solutions that can help meet customers where they are, and improve their anti-ransomware posture using vendors—from MSSPs and systems integrators, to endpoint security, DNS filtering, and custom backup solutions.

    What are you hoping the audience will take away from the sessions?

    These days, security doesn’t just take the form of security services (like GuardDuty and AWS WAF), but also manifests in the ways you design a cloud-aware architecture. For example, our managed database service Aurora can be cloned; that clone might act as a canary when you see data drift (a canary is a security concept for testing your expectations). You can use this to get back to a known good state.

    Security is a bottom line proposition. What I mean by that is:

    1. It’s a business criticality to avoid a bad day
    2. Embracing mature security will enable your entity’s development innovation
    3. The security of your products is a meaningful part of what you deliver to your customers.
    From your perspective, what’s the most important thing to know about ransomware?

    Ransomware is a big headline-maker right now, but it’s not new. Most ransomware attacks are not based on zero days; they’re knowable but opportunistic. So, without victim-blaming, I mean to equip us with the confidence to confront the security issue. There’s no need to be ransomed.

    I try not to get wrapped around particular issues, and instead emphasize building the foundation right. So sure, we can call it ransomware defense, but we can also point to these security maturity measures as best practices in general.

    I think it’s fair to say that you’re passionate about women in tech and in security specifically. You recently presented at the Day of Shecurity conference and the Women in Business Summit, and did an Instagram takeover for Women in CyberSecurity (WiCyS). Why do you feel passionately about this?

    I see security as an inherently creative field. As security professionals, we’re capable of freeing the business to get stuff done, and to get it done securely. That sounds simple, and it’s hard!

    Any time you’re working in a creative field, you rely on human ingenuity and pragmatism to ensure you’re doing it imaginatively instead of simply accepting old realities. When we want to be creative, we need more of the stuff life is made of: human experience. We know that people who move through the world with different identities and experiences think differently. They approach problems differently. They code differently.

    So, I think having women in security is important, both for the women who choose to work in security, and for the security field as a whole.

    What advice would you give a woman just starting out in the security industry?

    No one is born with a brain full of security knowledge. Technology is human-made and imperfect, and we all had to learn it at some point. Start somewhere. No one is going to tap you on the shoulder and invite you to your life :)

    Operationally, I recommend:

  • Curate your “elevator pitch” about who you are and what you’re looking for, and be explicit when asking folks for a career conversation or a referral (you can find me on Twitter @MerrittBaer, feel free to send a note).
  • Don’t accept a first job offer—ask for more.
  • Beware of false choices. For example, sometimes there’s a job that’s not in the description—consider writing your own value proposition and pitching it to the organization. This is a field that’s developing all the time, and you may be seeing a need they hadn’t yet solidified.

    What’s your favorite Leadership Principle at Amazon and why?

    I think Bias for Action takes precedence for me; there’s a business decision here to move fast. We know that comes with some costs and risks, but we’ve made that calculated decision to pursue high velocity.

    I have a law degree, and I see the Leadership Principles sort of like the Bill of Rights: they are frequently in tension and sometimes even at odds with one another (for example, Bias for Action and Are Right, A Lot might demand different modes). That is what makes them timeless—yet even more contingent on our interpretation—as we derive value from them. As a security person, I want us to pursue the good, and also to transcend the particular fears of the day.

    If you had to pick any other industry, what would you want to do?

    Probably public health. I think if I wasn’t doing security, I would want to do something else landscape-level.

    Even before I had a daughter, but certainly now that I have a one-year-old, I would calculate the ROI of my life’s existence and my investment in my working life.

    That being said, there are days I just need to come home to some unconditional love from my rescue pug, Peanut Butter.
     

    If you have feedback about this post, submit comments in the Comments section below.

    Want more AWS Security news? Follow us on Twitter.
