AWS Security Cookbook

Securing Data on S3 with Policies and Techniques

Amazon S3 is an object store on the AWS platform. In simple terms, an object store is a key-value store for objects with a name as the key and an object as the value, unlike a filesystem store, which is hierarchical. In this chapter, we will learn to secure S3 data with access control lists (ACLs), bucket policies, pre-signed URLs, encryption, versioning, and cross-region replication. We have already seen how to secure S3 data using an IAM policy in Chapter 1, Managing AWS Accounts with IAM and Organizations.

This chapter will cover the following recipes:

  • Creating S3 access control lists
  • Creating an S3 bucket policy
  • S3 cross-account access from the CLI
  • S3 pre-signed URLs with an expiry time using the CLI and Python
  • Encrypting data on S3
  • Protecting data with versioning
  • Implementing S3 cross-region replication within the same account
  • Implementing S3 cross-region replication across accounts

Technical requirements

Creating S3 access control lists

In this recipe, we will learn to grant permissions to the public (everyone) using ACLs from a console, using predefined groups from the CLI, and using canned ACLs from the CLI. ACLs can be used to grant basic read/write permissions to buckets, objects, and their ACLs. ACL grantees can be either an AWS account or a predefined group.

Getting ready

We need a working AWS account with the following resources configured:

  1. A bucket with a file: I will be using a bucket named awsseccookbook with a file named image-heartin-k.png. Replace these with your own bucket name and filename.
  2. A user with no permissions and a user with administrator permissions: Configure CLI profiles for these users. I will name the users and their profiles testuser and awssecadmin, respectively.
It is good practice to add users to groups and give permissions to these groups instead of directly assigning permissions to users.
  3. Uncheck the two Block all public access settings related to ACLs. Leave the other settings checked and click Save.

    We can manage block public access settings for a bucket by going to Block public access under the bucket's Permissions tab. We can also manage these settings at account level from the S3 dashboard sidebar.
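    If you prefer the CLI, the following is a minimal sketch (assuming the awsseccookbook bucket and awssecadmin profile used in this recipe) of adjusting the same bucket-level settings with the put-public-access-block sub-command; only the two ACL-related settings are disabled, and the policy-related settings remain blocked:

    # Disable only the ACL-related block settings; keep the policy-related blocks on.
    aws s3api put-public-access-block \
        --bucket awsseccookbook \
        --public-access-block-configuration \
            BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=true,RestrictPublicBuckets=true \
        --profile awssecadmin

    # Verify the current settings.
    aws s3api get-public-access-block \
        --bucket awsseccookbook \
        --profile awssecadmin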

    How to do it...

    We will discuss various usages of S3 ACLs in this section.

    Granting READ ACLs for a bucket to everyone from the console

    Perform the following steps to allow everyone to list the bucket's contents:

    1. Go to the S3 service in the console.
    2. Go to the Access Control List section under the bucket's Permissions tab, click on Everyone, select List objects, and then click Save.
    3. Access the bucket URL from the browser; we should now be able to list the contents of the bucket.

    Next, we will learn to grant READ for AWS users using predefined groups.

    Granting READ for AWS users using predefined groups from the CLI

    We can grant READ for any AWS user using the AuthenticatedUsers predefined group by performing the following steps:

    1. If you followed along with the previous section, remove the List objects permission for the bucket that was granted to Everyone.
    2. Create a policy that grants access to the AuthenticatedUsers group and save it as acl-grant-authenticated-users.json:
    {
        "Owner": {
            "DisplayName": "awsseccookbook",
            "ID": "5df5b6014ae606808dcb64208aa09e4f19931b3123456e152c4dfa52d38bf8fd"
        },
        "Grants": [
            {
                "Grantee": {
                    "Type": "Group",
                    "URI": "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
                },
                "Permission": "READ"
            }
        ]
    }

    Here, the Owner element has the current account's display name and canonical ID. The Grants element grants the READ permission to the AuthenticatedUsers group.

    3. Execute the put-bucket-acl command, providing the preceding policy document:
    aws s3api put-bucket-acl \
    --bucket awsseccookbook \
    --access-control-policy file://resources/acl-grant-authenticated-users.json \
    --profile awssecadmin
    4. The testuser user should now be able to list the contents of the S3 bucket. However, anonymous requests from the browser will still fail to list the bucket contents.
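    The listing in step 4 can be verified with a command similar to the following (a sketch assuming the testuser profile configured earlier):

    # List the bucket as an authenticated (but otherwise unprivileged) user.
    aws s3 ls s3://awsseccookbook --profile testuser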

    Granting public READ for an object with canned ACLs from the CLI

    We can upload an object and grant public read access using a canned ACL as follows:

    1. Download the image file using the admin user profile. Downloading should be successful:
    2. Upload the same file as an administrator, providing the public-read canned ACL:
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-heartin-new.png \
    --acl public-read \
    --profile awssecadmin
    3. Download the new file using the testuser profile:

    We should now be able to download the file successfully.
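    The download in step 3 can be performed with a command along these lines (a sketch; the local filename is arbitrary):

    # The object was uploaded with the public-read canned ACL, so this succeeds.
    aws s3 cp s3://awsseccookbook/image-heartin-new.png image-downloaded.png \
        --profile testuser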

    How it works...

    In this recipe, we learned about ACLs.

    In the Granting READ ACLs for a bucket to everyone from the console section, we granted the READ permission to everyone through ACLs. In the Granting READ for AWS users using predefined groups from the CLI section, we granted the READ permission using a predefined group: AuthenticatedUsers.

    The policy document for granting access through ACLs has the following structure:

    {
      "Grants": [
        {
          "Grantee": {
            "DisplayName": "string",
            "EmailAddress": "string",
            "ID": "string",
            "Type": "CanonicalUser"|"AmazonCustomerByEmail"|"Group",
            "URI": "string"
          },
          "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP"
        }
        ...
      ],
      "Owner": {
        "DisplayName": "string",
        "ID": "string"
      }
    }

    The grantee can be specified in one of the following ways:

    • With Type as AmazonCustomerByEmail, along with the email address of the account in the EmailAddress field
    • With Type as CanonicalUser, along with the canonical ID of the account in the ID field
    • With Type as Group, along with the URI for a predefined group in the URI field

    The account can be specified using an email address or the canonical ID of the account. We can get the canonical ID of an account from the Security Credentials page of our account.

    The following are the URIs for the predefined groups, which should be used in the JSON policy:

    • AuthenticatedUsers: http://acs.amazonaws.com/groups/global/AuthenticatedUsers
    • AllUsers: http://acs.amazonaws.com/groups/global/AllUsers
    • LogDelivery: http://acs.amazonaws.com/groups/s3/LogDelivery

    ACLs can be used to grant the following permissions to buckets/objects:

    • READ: List objects for a bucket. Read an object and its metadata.
    • WRITE: Create, overwrite, or delete objects for a bucket. Not applicable for an object.
    • READ_ACP: Read the ACL of a bucket or object.
    • WRITE_ACP: Write the ACL for a bucket or object.
    • FULL_CONTROL: All the previous permissions.

    In the Granting public READ for an object with canned ACLs from the CLI section, we used a canned ACL, public-read, which allows everyone to read that object. Canned ACLs are shorthand ACL permissions that can be used to grant permissions for a resource from the command line. Currently, the following canned ACLs are supported: private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, bucket-owner-full-control, and log-delivery-write.

    In the case of cross-account access, if a user from account A uploads an object to a bucket in account B (owned by account B), account B will have no access to that object even if it is the bucket owner. Account A can, however, grant permission to the bucket owner while uploading the document using the bucket-owner-read or bucket-owner-full-control canned ACL.

    We used the put-bucket-acl sub-command of the aws s3api command in this recipe to set permissions on a bucket using ACLs. Similarly, put-object-acl sets permissions for an object. If we forget the policy structure for a put call, we can execute the corresponding get call to retrieve the structure and modify it for our purposes. The get-bucket-acl sub-command of the aws s3api command gets a bucket's ACL, and get-object-acl gets an object's ACL.
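    As a quick sketch (using the bucket and object from this recipe), the current ACLs can be fetched and inspected as follows:

    # Fetch the bucket's ACL; the output JSON mirrors the put-bucket-acl policy structure.
    aws s3api get-bucket-acl --bucket awsseccookbook --profile awssecadmin

    # Fetch the ACL of a single object.
    aws s3api get-object-acl \
        --bucket awsseccookbook \
        --key image-heartin-new.png \
        --profile awssecadmin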

    There's more...

    S3 is considered to be secure by default. A new object will have no access except for the account owner. An account owner of an S3 resource is the account that created that resource.

    Let's go through some important concepts related to ACLs:

    • ACLs provide basic read/write permission to buckets, objects, and their ACLs.
    • ACLs can only grant access to AWS accounts and predefined groups.
    • ACLs, by default, allow full control to the owner of the resource and nothing to everyone else.
    • ACLs can only grant permission; they cannot deny access.
    • ACLs are represented internally as XML documents.
    • ACLs are generally considered legacy and, wherever possible, it is preferable to use either an IAM policy or a bucket policy. However, there are some scenarios where ACLs are the best, or the only, choice:
      • ACLs can be used to grant access to objects not owned by the bucket owner. For example, when a user in one account uploads an object to another account's bucket, canned ACLs can be used to grant access to the bucket owner.
      • ACLs are used to grant permission to an S3 log delivery group for a bucket.
      • ACLs can be used to grant individual permissions to many objects. Even though this can be done with a bucket policy, it is easier to achieve this with ACLs.
    • While ACLs are specified per resource, bucket policies are specified per bucket and prefixes. IAM policies have resources specified in a similar way to bucket policies, but are applied to IAM users.

    Let's quickly go through some more important concepts related to canned ACLs:

    • The bucket-owner-read and bucket-owner-full-control canned ACLs are only applicable to objects and are ignored if specified while creating a bucket.
    • The log-delivery-write canned ACL only applies to a bucket.
    • With the aws-exec-read canned ACL, the owner gets the FULL_CONTROL permission and Amazon EC2 gets READ access to an Amazon Machine Image (AMI) from S3.
    • With the log-delivery-write canned ACL, the LogDelivery group gets WRITE and READ_ACP permissions for the bucket. This is used for S3 access logging.
    • When making an API call, we can specify a canned ACL in our request using the x-amz-acl request header.

    Comparing ACLs, bucket policies, and IAM policies

    ACLs differ from IAM policies and bucket policies in the following ways:

    • ACLs provide only basic read/write permission to buckets, objects, and their ACLs. IAM policies and bucket policies provide more fine-grained permissions than ACLs.
    • ACLs can only grant access to AWS accounts and predefined groups. ACLs cannot grant permissions to IAM users. IAM policies and bucket policies can be used to grant access to IAM users.
    • ACLs, by default, allow full control to the owner of the resource and nothing to everyone else. Bucket policies and IAM policies are not attached to a resource by default.
    • ACLs can only grant permissions. Bucket policies and IAM policies can explicitly deny access.
    • ACLs cannot conditionally allow or deny access. Bucket policies and IAM policies can conditionally allow or deny access.
    • ACLs are represented internally as XML documents. Bucket policies and IAM policies are represented as JSON documents, and the maximum size of such a JSON document is 20 KB.

    IAM policies differ from ACLs and bucket policies in the following ways:

    • IAM policies are user-based and are applied to users. ACLs and bucket policies are resource-based policies and are applied to resources.
    • IAM policies can be inline (embedded directly into a user, group, or role) or standalone (can be attached to any IAM user, group, or role). ACLs and bucket policies are sub-resources of a bucket.
    • IAM policies can only give access to an IAM user. Bucket policies and ACLs can be used to provide anonymous access as well as access to a root user.
    We can mix ACLs, bucket policies, and IAM policies. All policies are evaluated at the same time if the bucket and user are within the same account.

    See also

    • You can read about IAM policies in the Creating IAM policies recipe in Chapter 1, Managing AWS Accounts with IAM and Organizations.

    Creating an S3 bucket policy

    In this recipe, we will learn to create bucket policies for our S3 buckets. Whenever possible, it is preferable to use a bucket policy or IAM policy instead of ACLs. The choice between bucket and IAM policies is mostly a personal preference. We can also create bucket policies using prefixes. S3 is an object store with no concept of folders, but prefixes can be used to imitate folders. Prefixes can represent objects as well.

    Getting ready

    We need a working AWS account with the following resources configured:

    1. A bucket with a file in it: I will be using a bucket named awsseccookbook with a file named image-heartin-k.png. Replace them with your bucket name and filename.
    2. A user with no permissions and a user with administrator permissions: Configure CLI profiles for these users. I will be calling the users and their profiles testuser and awssecadmin, respectively.
    3. Uncheck the two Block all public access settings related to bucket policies. Leave the other settings checked and click Save.
    4. Verify that your bucket does not allow listing for everyone by going to the bucket URL from the browser.

    Next, we will use bucket policies to give permissions to everyone to list the contents of our bucket and then retry this step.
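    The verification in step 4 can also be done from the command line; the following is a sketch assuming the virtual-hosted-style URL for the awsseccookbook bucket (an anonymous request should return an AccessDenied error while public listing is blocked):

    # Anonymous request to the bucket URL; expect HTTP 403 until public listing is allowed.
    curl -i https://awsseccookbook.s3.amazonaws.com/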

    How to do it...

    We will first generate a policy from the console using the policy generator. Later, we will execute the policy from the CLI.

    Bucket public access with a bucket policy from the console

    We can give public access to list the contents of a bucket as follows:

    1. Go to the S3 service in the console, click on your bucket's name, go to the Permissions tab, and then go to Bucket Policy.
    2. Click on Policy generator in the lower-left corner.
    3. Within Policy generator, select/enter data as follows:
      • Select Type of Policy as Bucket Policy.
      • Select Principal as *.
      • Select AWS Service as Amazon S3.
      • Select Actions as ListBucket.
      • Select Amazon Resource Name (ARN) as arn:aws:s3:::awsseccookbook.
    4. Click on Add Conditions (Optional).
    5. Click Add Condition and enter the following:
      • Condition as DateLessThan
      • Key as aws:EpochTime
      • Value as a future date in epoch format (for example, 1609415999)
    6. Click Add Condition.
    7. Click Add Statement.
    8. Click Generate Policy. The policy should look similar to the following. I have changed the Sid to a meaningful name:
    {
        "Id": "Policy1560413644620",
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucketPermissionForAll",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::awsseccookbook",
                "Condition": {
                    "DateLessThan": {
                        "aws:EpochTime": "1609415999"
                    }
                },
                "Principal": "*"
            }
        ]
    }
    9. Copy and paste the policy from the policy generator into the bucket policy editor and click Save. The contents of the bucket should now be listable from the browser.
    10. In the bucket policy, change the value of Action to s3:GetObject and Resource to arn:aws:s3:::awsseccookbook/*, and then click Save. Access any object from within the bucket from the browser. We should be able to retrieve the object successfully.

    If we change the resource to arn:aws:s3:::awsseccookbook/* while the action is still s3:ListBucket (and no object operation such as s3:GetObject is present), we will get an error stating that the action does not apply to any resource. This is because adding /* to the bucket ARN makes the resource refer to objects, and the statement then contains no object-level actions.

    Bucket list access with a bucket policy from the CLI

    In this section, we will see how to add a bucket policy from the CLI:

    1. If you are following along from the previous section, remove the bucket policy that was added. Verify that you do not have access to list the bucket or get the object from the browser.
    2. Create a bucket policy to allow our test user to access it and save it as bucket-policy-allow-test-user.json:
    {
        "Id": "Policy1560416549842",
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListAllBuckets",
                "Action": [
                    "s3:ListBucket"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::awsseccookbook",
                "Principal": {
                    "AWS": "arn:aws:iam::135301570106:user/testuser"
                }
            }
        ]
    }

    The condition element is an optional element.

    3. Attach the policy to the bucket:
    aws s3api put-bucket-policy \
    --bucket awsseccookbook \
    --policy file://resources/bucket-policy-allow-test-user.json \
    --profile awssecadmin
    4. List the contents of the bucket as the testuser user from the command line, for example:
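    A sketch of the command (using the bucket and profile names from this recipe):

    # testuser is now allowed by the bucket policy to list the bucket.
    aws s3api list-objects-v2 --bucket awsseccookbook --profile testuser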

    Now that you have seen how to create policies from the console and the CLI, practice more scenarios with each of the available actions and conditions.

    How it works...

    In this recipe, we created S3 bucket policies. A bucket policy statement can have the following components: Sid, Principal, Effect, Action, Resource, and Condition. All of these except Principal are the same as an IAM policy and we explored them in the Creating IAM policies recipe in Chapter 1, Managing AWS Accounts with IAM and Organizations.

    Principal for a bucket policy can be an account, a user, or everyone (denoted by *). A principal can be specified with an ARN (using the AWS element) or with a canonical ID (using the CanonicalUser element).

    Resource in the case of a bucket policy is a bucket or object and is denoted using a bucket ARN. The bucket ARN should be in the form: arn:aws:s3:::bucket_name. An object resource is represented in the form: arn:aws:s3:::bucket_name/key_name. To denote all objects within a bucket, we can use arn:aws:s3:::bucket_name/*. We can denote every resource in every bucket as arn:aws:s3:::*.

    Conditions allow us to conditionally execute policies. We used conditions in one of the examples. We will see further practical uses of conditions in the next recipe.

    There's more...

    Bucket policies follow the same JSON document structure as IAM policies, but have an additional principal field. The principal is the user or entity for which a policy statement is applicable. There is no principal for an IAM policy as it is attached to an IAM user. The IAM user who executes that policy is the principal in the case of an IAM policy.

    Consider the following examples when using Principal in bucket policies:

    • A root user can be represented as follows:
    "Principal" : {
    "AWS": "arn:aws:iam::135301570106:root"
    }
    • An IAM user can be represented as follows:
    "Principal" : {
    "AWS": "arn:aws:iam::135301570106:user/testuser"
    }
    • A canonical user ID can be represented as follows:
    "Principal" : {
    "CanonicalUser":"5df5b6014ae606808dcb64208aa09e4f19931b3123456e152c4dfa52d38bf8fd"
    }

    Canonical IDs were used in the previous recipe, Creating S3 access control lists.

    • An anonymous user can be represented as follows:
    "Principal" : "*"

    Let's quickly go through some more important details relating to S3 bucket policies:

    • Currently, we have around 50 bucket policy actions, including those that work on an object (for example, s3:PutObject), a bucket (for example, s3:CreateBucket), or a bucket sub-resource (for example, s3:PutBucketAcl).
    • The current list of bucket sub-resources with permissions includes BucketPolicy, BucketWebsite, AccelerateConfiguration, BucketAcl, BucketCORS, BucketLocation, BucketLogging, BucketNotification, BucketObjectLockConfiguration, BucketPolicyStatus, BucketPublicAccessBlock, BucketRequestPayment, BucketTagging, BucketVersioning, EncryptionConfiguration, InventoryConfiguration, LifecycleConfiguration, MetricsConfiguration, ReplicationConfiguration, and AnalyticsConfiguration.
    • We cannot specify an IAM group as a principal in an S3 bucket policy. If we add a group instead of a user, we will get an error: Invalid principal in policy.
    • Here are some S3-specific condition keys available for use in conditions within a policy: s3:x-amz-acl, s3:x-amz-copy-source, s3:x-amz-metadata-directive, s3:x-amz-server-side-encryption, s3:VersionId, s3:LocationConstraint, s3:delimiter, s3:max-keys, s3:prefix, s3:x-amz-server-side-encryption-aws-kms-key-id, s3:ExistingObjectTag/<tag-key>, s3:RequestObjectTagKeys, s3:RequestObjectTag/<tag-key>, s3:object-lock-remaining-retention-days, s3:object-lock-mode, s3:object-lock-retain-until-date, and s3:object-lock-legal-hold.
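    As an illustration of these condition keys, the following sketch restricts testuser to listing only keys under a reports/ prefix; the policy filename and the prefix are made up for the example. Save it as bucket-policy-list-prefix.json:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListOnlyReportsPrefix",
                "Effect": "Allow",
                "Principal": { "AWS": "arn:aws:iam::135301570106:user/testuser" },
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::awsseccookbook",
                "Condition": { "StringLike": { "s3:prefix": "reports/*" } }
            }
        ]
    }

    Then attach it to the bucket:

    aws s3api put-bucket-policy \
        --bucket awsseccookbook \
        --policy file://bucket-policy-list-prefix.json \
        --profile awssecadmin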

    See also

    • You can read about IAM policies in the Creating IAM policies recipe in Chapter 1, Managing AWS Accounts with IAM and Organizations.
    • For a detailed comparison of ACLs, bucket policies, and IAM policies, refer to the There's more section in the Creating S3 access control lists recipe.

    S3 cross-account access from the CLI

    In this recipe, we will allow cross-account access to a bucket in one account (let's call this account A) to users in another account (let's call this account B), both through ACLs and bucket policies. Logging is a common use case for cross-account access. We can store our logs in a different account to provide access to an auditor or to secure them in case the account is compromised.

    Getting ready

    We need two working AWS accounts (let's call them account A and account B), configured as follows:

    1. Note down the canonical ID of account B: I have noted down mine as e280db54f21834544a8162b8fc5d23851972d31e1ae3560240156fa14d66b952.
    2. A bucket in account A with a file in it: I will be using a bucket named awsseccookbook with a file named image-heartin-k.png. Replace them with your bucket name and filename.
    3. A user with administrator permissions in both account A and account B: We'll create CLI profiles for these users. I am using the awssecadmin and awschild1admin CLI profiles, respectively.
    4. A user or group with no permissions in account B: I have created a group, testusergroup, and added the testuser user to it. I will call this user's CLI profile child1_testuser.

    Verify that both the administrator user and the non-administrator user from account B have no permission to upload to the bucket in account A.

    We can make use of the AWS Organizations service to manage multiple accounts and switch between them with ease.

    How to do it...

    We will implement cross-account access using CLI commands. You can follow the CLI commands and the tips provided within this recipe to implement cross-account access from the console or by using APIs.

    Uploading to a bucket in another account

    Perform the following steps to upload files as a user from account B to a bucket in account A:

    1. Create an access control policy document that grants access to account B and save it as acl-write-another-account.json:
    {
        "Owner": {
            "DisplayName": "awsseccookbook",
            "ID": "5df5b6014ae606808dcb64208aa09e4f19931b3123456e152c4dfa52d38bf8fd"
        },
        "Grants": [
            {
                "Grantee": {
                    "Type": "CanonicalUser",
                    "ID": "e280db54f21834544a8162b8fc5d23851972d31e1ae3560240156fa14d66b952"
                },
                "Permission": "WRITE"
            }
        ]
    }

    The canonical ID of account A is provided under the Owner section, and the canonical ID of account B is provided under the Grants section.

    2. Update the ACL on the bucket owned by account A, as an administrator of account A:
    aws s3api put-bucket-acl \
    --bucket awsseccookbook \
    --access-control-policy file://resources/acl-write-another-account.json \
    --profile awssecadmin

    We should now be able to upload objects to the bucket as an administrator from account B. However, a non-administrator from account B will not be able to upload files:
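    A sketch of the two attempts from the CLI, assuming the profiles named in the Getting ready section and an arbitrary destination key:

    # As the account B administrator: succeeds, because the ACL grants WRITE to account B.
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-admin.png \
        --profile awschild1admin

    # As the non-administrator testuser in account B: fails with AccessDenied,
    # because no permission has been delegated to this user yet.
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-user.png \
        --profile child1_testuser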

    To grant permissions from the console, go to the bucket's ACL, click Add account, enter the canonical ID, and give the required permissions.
    3. Create a policy that delegates the s3:PutObject and s3:PutObjectAcl actions to users in account B, and save this file as iam-policy-s3-put-obj-and-acl.json:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DelegateS3WriteAccess",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:PutObjectAcl"
                ],
                "Resource": "arn:aws:s3:::awsseccookbook/*"
            }
        ]
    }

    The s3:PutObjectAcl action is required to use canned ACLs later.

    4. Create a policy in account B using the preceding policy document, as an administrator in account B:
    aws iam create-policy \
    --policy-name MyS3PutObjAndAclPolicy \
    --policy-document file://resources/iam-policy-s3-put-obj-and-acl.json \
    --profile awschild1admin

    We should get a response as follows:

    5. Attach the preceding policy to the test user's group:
    aws iam attach-group-policy \
    --group-name testusergroup \
    --policy-arn arn:aws:iam::380701114427:policy/MyS3PutObjAndAclPolicy \
    --profile awschild1admin

    We may also attach the policy directly to the user instead; however, using a group is a recommended practice.

    6. Upload the object to the bucket as a non-administrator user in account B:
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-user.png \
    --profile child1_testuser

    We should be able to upload the file successfully.

    If we try to download the object as an administrator in account A, the request will fail as follows:
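    For example, a download attempt like the following (a sketch using the awssecadmin profile) is expected to fail with a 403 error:

    # Account A owns the bucket but not the object, so this request is denied.
    aws s3 cp s3://awsseccookbook/image-from-b-user.png image-from-b-user.png \
        --profile awssecadmin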

    7. Upload the object to the bucket as a user in account B with the bucket-owner-full-control canned ACL:
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-user.png \
    --acl bucket-owner-full-control \
    --profile child1_testuser

    Account A should now be able to download the file successfully:

    In the next section, we will learn how to use bucket policies to enforce that account B always grants this permission to account A, the bucket owner.

    Uploading to a bucket in another account with a bucket policy

    If you followed along with the previous section, remove the ACL granted on account A before proceeding with the following steps:

    1. Create a bucket policy that explicitly allows our non-administrator user, testuser, from account B to perform a PutObject action. Also, make sure that the user gives full control to the bucket owner through a canned ACL. Save the file as bucket-policy-write-another-account-user.json:
    {
        "Id": "SomeUniqueId1",
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllPutForOtherAccountUser",
                "Action": [
                    "s3:PutObject"
                ],
                "Effect": "Allow",
                "Resource": "arn:aws:s3:::awsseccookbook/*",
                "Condition": {
                    "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
                },
                "Principal": {
                    "AWS": [
                        "arn:aws:iam::380701114427:user/testuser"
                    ]
                }
            }
        ]
    }
    2. Attach the bucket policy to the bucket:
    aws s3api put-bucket-policy \
    --bucket awsseccookbook \
    --policy file://resources/bucket-policy-write-another-account-user.json \
    --profile awssecadmin
    3. Attach a policy in account B to the non-administrator user, testuser, that allows the s3:PutObject and s3:PutObjectAcl actions. This was already done in the previous section. If you haven't done it yet (or if you deleted the policy), work through the previous section to complete it.
    4. Upload an image as testuser from account B to the bucket in account A, both with and without the canned ACL:

    Here, we used a bucket policy to ensure that the user from account B provides full control to the bucket owner in account A using canned ACLs; otherwise the upload will fail.
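    A sketch of the two attempts in step 4, using the commands from earlier in this recipe:

    # Without the canned ACL: the bucket policy's condition is not met, so the upload fails.
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-user.png \
        --profile child1_testuser

    # With bucket-owner-full-control: the condition matches and the upload succeeds.
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-from-b-user.png \
        --acl bucket-owner-full-control \
        --profile child1_testuser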

    How it works...

    In the Uploading to a bucket in another account section, we first granted permissions to account B through an ACL in account A. Later, the account B administrator delegated that permission to a non-administrator user through the user's group. We also saw that the account A administrator does not have access to an object uploaded by the user of account B, even though account A is the bucket owner, unless account B explicitly grants permission.

    For account A to have access, the user of account B should grant permission while uploading the file, and this can be done using canned ACLs. An account B user with s3:PutObjectAcl permission can grant permission to account A, the bucket owner, using the bucket-owner-read or bucket-owner-full-control canned ACLs. With ACLs, there is no way to enforce a constraint, such as that account B should always give permission to account A, the bucket owner. This can, however, be enforced with bucket policies.

    In the Uploading to a bucket in another account with a bucket policy section, we directly gave permission to the account B user through a bucket policy. We also added a Condition element to our bucket policy to ensure that the user in account B should always use the bucket-owner-full-control ACL to give complete control to account A, the bucket owner. The s3:PutObjectAcl permission is required for account B to specify a canned ACL.

    There's more...

    Account A can grant access to its S3 resources to account B in one of the following ways:

    • The account A administrator grants access to account B through a bucket policy or ACL. The account B administrator delegates that permission to a user using a user policy. The user in account B can then access the S3 resources in account A according to the permissions granted to them. In this recipe, we followed this approach using ACL in the Uploading to a bucket in another account section, and the same is also possible with a bucket policy.
    • The account A administrator grants access directly to a user in account B through a bucket policy. The account B administrator still has to delegate permission to the user using a policy. The user in account B can then access the S3 resources in account A according to the permissions granted to them. In this recipe, we followed this approach in the Uploading to a bucket in another account with a bucket policy section.
    • The account A administrator creates a role with the required permissions to its S3 resources in account A. The role will have a trust relationship with account B as a trusted entity and account A as the trusting entity. The account B administrator delegates that permission to a user using user policy. The user in account B can then assume that role and access the S3 resources in account A in accordance with the permissions granted to them. We saw a variation of IAM role-based, cross-account access in Chapter 1, Managing AWS Accounts with IAM and Organizations, in the Switching role with AWS organizations recipe.

    Let's quickly go through some scenarios to understand cross-account policies better:

    • Account A created a bucket and gave PutObject ACL permissions to everyone (public access):
      • Can a user from the same AWS account with no permissions (no policies attached) upload a file to that bucket from the AWS CLI? Yes.
      • Can a user from another AWS account with no permissions (no policies attached) upload a file to that bucket from the AWS CLI? No.
      • Can an administrator user from another AWS account upload a file to that bucket from the AWS CLI? Yes.
    • Account A created a bucket and gave PutObject ACL permissions to account B using the account's canonical ID:
      • Can a user with no permissions (no policies attached) from account B upload a file to that bucket from the AWS CLI? No.
      • Can an administrator user from account B upload a file to that bucket from the AWS CLI? Yes.
    • Account B uploaded a file to account A with cross-account access and no canned ACL (equivalent to the canned private ACL).
      • Can a user with no permissions (no policies attached) from the bucket owner account read that object? No.
      • Can an administrator user from the bucket owner account read that object? No.
      • Can an administrator user from the bucket owner account delete that object? Yes.
    • Account A created a bucket and gave the PutObject permission directly to a user, testuser, in account B through a bucket policy.
      • Can testuser upload to a bucket in account A without additional permissions in account B? No, they still need to have the PutObject permission to the bucket assigned through a user policy within account B.
      • Can an administrator in account B upload to a bucket in account A? No, we have explicitly granted permission to testuser.
    • Can the account B administrator delegate more access to its users than it was granted by account A? This will not result in an error, but it will not have any impact as the permissions will be evaluated again from account A.
    • Can we enforce the usage of canned ACL through a bucket policy? Yes, using a condition that checks the value of the s3:x-amz-acl condition key, for example, for the bucket-owner-full-control value.

    See also

    • We saw a variation of IAM role-based, cross-account access in Chapter 1, Managing AWS Accounts with IAM and Organizations, in the Switching Role with AWS Organizations recipe.

    S3 pre-signed URLs with an expiry time using the CLI and Python

    In this recipe, we will learn to use pre-signed URLs from the CLI and then via the Python SDK. We can grant temporary permission to access S3 objects using pre-signed URLs with an expiry time. Currently, we cannot do this from the console. We have to do it through APIs from the CLI or by using an SDK.

    Getting ready

    We need a working AWS account with the following resources configured:

    1. A bucket and a file in it: I will be using a bucket named awsseccookbook with a file named mission-impossible.txt. Replace them with your bucket name and filename.
    2. A user with administrator permissions on S3: We will configure a CLI profile for this user. I will be calling both the user and the CLI profile awssecadmin.

    To execute the Python code, we need to install Python and Boto3 in the following order:

    1. Install python3.
    2. Install boto3 (if pip3 is installed, you can install boto3 as follows):
     pip3 install boto3

    How to do it...

    We will first create a pre-signed URL from the CLI and then use the Python SDK.

    Generating a pre-signed URL from the CLI

    We can create a pre-signed URL from the CLI and test it as follows:

    1. Pre-sign a URL from the CLI as follows:
    aws s3 presign s3://awsseccookbook/image-heartin-k.png \
    --expires-in 100 \
    --profile awssecadmin

    This command will output a signed URL with an expiry time:

    2. Copy and paste the URL and run it from a browser within the specified time. We should be able to see the contents of our file.

    If we run the URL after the specified time, we should get an access denied error message:

    Next, we will look at how to do pre-signing using the Python SDK.

    Generating a pre-signed URL using the Python SDK

    We can create a pre-signed URL using the Python SDK and test it as follows:

    1. Create a file named s3presign.py with the following code:
    import boto3

    boto3.setup_default_session(profile_name='awssecadmin')
    s3_client = boto3.client('s3')

    url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'awsseccookbook', 'Key': 'mission-impossible.txt'},
        ExpiresIn=300)
    print(url)
    2. Execute the code as python3 s3presign.py:

    This will return the pre-signed URL:

    Run the URL from a browser (much as we did in the previous section) before and after the specified time.

    How it works...

    In the Generating a pre-signed URL from the CLI section, we pre-signed a URL from the CLI. In the Generating a pre-signed URL using the Python SDK section, we pre-signed a URL using the Python SDK. We used the boto3 library for our Python SDK demo. Boto is the AWS SDK for Python. It facilitates the creation, configuration, and management of AWS services, such as EC2 and S3 using Python.

    Most APIs related to pre-signing will accept the following data for generating pre-signed, timed URLs:

    • Bucket and object
    • Expiry date and time
    • HTTP method
    • Security credentials

    In this recipe, we specified the bucket, object, and expiry in code. The HTTP operation was GET. For security credentials, we specified a user profile that has permissions for the operation, which was get_object in our case. Anyone with valid credentials can generate a pre-signed URL. However, if the user does not have permission to perform the intended operation (for example, get_object), then the operation will eventually fail.
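    Since the generated URL embeds the signature, anyone holding it can use it with a plain HTTP client until it expires. The following sketch combines the presign command with curl (the bucket, object, and profile are those used earlier in this recipe):

    # Generate a URL valid for 5 minutes and download the object with it;
    # no AWS credentials are needed for the curl call itself.
    url=$(aws s3 presign s3://awsseccookbook/image-heartin-k.png \
        --expires-in 300 \
        --profile awssecadmin)
    curl -o downloaded.png "$url"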

    There's more...

    In this recipe, we generated pre-signed URLs using both CLI commands and Python code. The following code snippet shows how pre-signing can be done from Java:

    GeneratePresignedUrlRequest generatePresignedUrlRequest =
            new GeneratePresignedUrlRequest(bucketName, objectKey)
                    .withMethod(HttpMethod.PUT)
                    .withExpiration(expiration);
    URL url = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

    You can follow the AWS documentation to do the same with other supported SDKs as well.

    See also

    Encrypting data on S3

    In this recipe, we will learn to encrypt data on S3 at rest using server-side encryption techniques. Encryption on the server side can be done in three ways: server-side encryption with S3-managed keys (SSE-S3), server-side encryption with KMS-managed keys (SSE-KMS), and server-side encryption with customer-provided keys (SSE-C). In client-side encryption, data is encrypted on the client side and then sent to the server.

    Getting ready

    We need a working AWS account with the following resources configured:

    1. A bucket: I will be using a bucket with the name awsseccookbook. Replace it with your bucket name.
    2. A user with administrator permissions on S3: Configure a CLI profile for this user. I will be calling both the user and the CLI profile awssecadmin.
    3. A customer-managed key created in KMS: Follow the Creating keys in KMS recipe in Chapter 4, Key Management with KMS and CloudHSM, to create a key. I have created one named MyS3Key.

    How to do it...

    In this recipe, we will learn about various use cases for server-side encryption.

    Server-side encryption with S3-managed keys (SSE-S3)

    We can upload an object from the console with SSE-S3 as follows:

    1. Go to the S3 bucket.
    2. Click Upload, click Add Files, select your file, and then click Next until you reach the Set Properties tab.
    3. In the Set Properties tab, scroll down and select Amazon S3 master key under Encryption. Follow the on-screen options to complete the upload. We can verify the encryption setting from the object's properties.
    It is important to note that, if we try to open or download the object, we will still be able to see the object as-is because S3 will decrypt the object using the same key.

    We can change encryption for an existing object to SSE-S3 as follows:

    1. Go to the object's Properties tab.
    2. Go to Encryption, select AES-256, and then click Save:

    We can upload an object from the CLI with SSE-S3 using the following command:

    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-heartin-k.png \
    --sse AES256 \
    --profile awssecadmin
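    To confirm the encryption applied to an uploaded object from the CLI, a head-object call can be used; this is a sketch, and the ServerSideEncryption field in the response should read AES256 for SSE-S3 (or aws:kms for SSE-KMS):

    # Inspect the object's metadata, including its server-side encryption setting.
    aws s3api head-object \
        --bucket awsseccookbook \
        --key image-heartin-k.png \
        --profile awssecadmin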

    Next, we will execute SSE with KMS managed keys.

    Server-side encryption with KMS-managed keys (SSE-KMS)

    We can upload an object from the console with SSE-KMS as follows:

    1. Go to the bucket.
    2. Click Upload, click Add Files, select your file, and then click Next until you reach the Set Properties tab.
    3. In the Set Properties tab, scroll down, select AWS KMS master key, and then select our KMS key (refer to the Getting ready section). Follow the on-screen options to complete the upload.

    We can change encryption for an existing object to SSE-KMS as follows:

    1. Go to the object's Properties tab.
    2. Go to Encryption, select AWS-KMS, then select your KMS key (refer to the Getting ready section), and then click Save:

    We can upload an object from the CLI with SSE-KMS using the following command:

    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-heartin-k.png \
    --sse aws:kms \
    --sse-kms-key-id cd6b3dff-cfe1-45c2-b4f8-b3555d5086df \
    --profile awssecadmin
    Here, sse-kms-key-id is the ID of the KMS key you created (refer to the Getting ready section).

    Server-side encryption with customer-provided keys (SSE-C)

    We can upload an object from the CLI with SSE-C as follows:

    1. Upload an object from the CLI with SSE-C by using the following command:
    aws s3 cp image-heartin-k.png s3://awsseccookbook/image-heartin-k.png \
    --sse-c AES256 \
    --sse-c-key 12345678901234567890123456789012 \
    --profile awssecadmin
    2. Retrieve the object encrypted using SSE-C, providing the same key we used in the previous command, as follows:
    aws s3 cp s3://awsseccookbook/image-heartin-k.png image-heartin-k1.png \
    --sse-c AES256 \
    --sse-c-key 12345678901234567890123456789012 \
    --profile awssecadmin
    If we do not specify the sse-c option while downloading an object encrypted with SSE-C, we will get an exception as follows: fatal error: An error occurred (400) when calling the HeadObject operation: Bad Request. If we do not specify the correct key that was used for encryption (using the sse-c-key option) while downloading an object encrypted with SSE-C, we will get an exception as follows: fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden.

    How it works...

    In the Server-side encryption with S3-managed keys (SSE-S3) section, we uploaded an object from the console with SSE-S3 encryption. We changed the encryption for an existing object to SSE-S3 encryption. We also uploaded an object with SSE-S3 encryption. When performing SSE-S3 encryption from the CLI, the value of the sse parameter is optional. The default is AES256.

    In the Server-side encryption with KMS-managed keys (SSE-KMS) section, we uploaded an object from the console with SSE-KMS encryption. We changed the encryption for an existing object to SSE-KMS encryption. We also uploaded an object from the CLI with SSE-KMS encryption. When performing SSE-KMS encryption from the CLI, the sse-kms-key-id parameter is optional; if it is omitted, the default AWS-managed key (aws/s3) is used.

    In the Server-side encryption with customer-provided keys (SSE-C) section, we uploaded an object from the CLI with SSE-C encryption. Unlike the other two server-side encryption techniques, SSE-S3 and SSE-KMS, the console does not currently have an explicit option for SSE-C; we need to use the CLI, an SDK, or the REST API. In this recipe, we used a 32-character string as the key. In the real world, keys are generally generated using a key generation tool. We will learn more about keys when we discuss KMS later in this book.

    There's more...

    Let's quickly go through some important concepts related to S3 encryption:

    • Data on S3 can be encrypted while at rest (stored on AWS disks) or in transit (moving to and from S3). Encryption at rest can be done using server-side encryption or by uploading encrypted data from the client.
    • S3 server-side encryption techniques for data at rest use symmetric keys for encryption.
    • Encryption of data in transit using SSL/TLS (HTTPS) uses asymmetric keys for encryption.
    • S3 default encryption (available as bucket properties) provides a way to set the default encryption behavior for an S3 bucket with SSE-S3 or SSE-KMS. Enabling this property does not affect existing objects in our bucket, and applies only for new objects uploaded.
    • With client-side encryption, we need to manage keys on our own. We can also use KMS to manage keys through SDKs. However, it is not currently supported by all SDKs.
    • Encryption in transit can be achieved with client-side encryption or by using SSL/TLS (HTTPS).
    • Server-side encryption types, SSE-S3 and SSE-KMS, follow envelope encryption, while SSE-C does not use envelope encryption.
    • Some important features of SSE-S3 include the following:
      • AWS takes care of all key management.
      • It follows envelope encryption.
      • It uses symmetric keys to encrypt data.
      • Each object is encrypted with a unique key.
      • It uses the AES-256 algorithm.
      • A data key is encrypted with a master key that is automatically rotated periodically.
      • It is free.
    • Some important features of SSE-KMS include the following:
      • Keys are managed by AWS KMS.
      • Keys can be shared by multiple services (including S3).
      • As customers, we get more control over keys, such as creating master and data keys, and disabling and rotating master keys.
      • It follows envelope encryption.
      • It uses symmetric keys to encrypt data.
      • A data key is encrypted with a master key.
      • It uses the AES-256 algorithm.
      • We can choose which object key to encrypt while uploading objects.
      • We can use CloudTrail to monitor KMS API calls, enabling better auditing.
      • It is not free.
    • Some important features of SSE-C include the following:
      • Keys are managed by us (customers).
      • The customer provides a key along with data. S3 uses this key for encryption and deletes the key.
      • The key must be supplied for decryption as well.
      • It does not use envelope encryption.
      • It uses symmetric keys to encrypt data.
      • It uses the AES-256 algorithm.
      • AWS will force you to use HTTPS while uploading data since you are uploading your symmetric key as well.
      • It is free.
    • By default, S3 allows both HTTP and HTTPS access to data. HTTPS can be enforced with the help of a bucket policy with the following condition element:
    "Condition": {
    "Bool": {
    "aws:SecureTransport": "false"
    }
    }

    Any requests without HTTPS will fail with this condition.
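    A complete statement built around this condition might look like the following sketch; the Sid and the use of a blanket s3:* action are illustrative choices, not requirements:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::awsseccookbook",
                    "arn:aws:s3:::awsseccookbook/*"
                ],
                "Condition": {
                    "Bool": {
                        "aws:SecureTransport": "false"
                    }
                }
            }
        ]
    }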

    See also

    Protecting data with versioning

    In this recipe, we will learn to enable versioning on an S3 bucket. If versioning is enabled for a bucket, S3 keeps a copy of every version of the file within the bucket. Versioning protects data by providing a means to recover it in the case of unintentional actions such as deletes and overwrites.

    Getting ready

    We need a working AWS account with the following resources configured:

    1. A bucket: I will be using a bucket named awsseccookbook. Replace it with your bucket name.
    2. A user with administrator permissions on S3: Configure a CLI profile for this user if you want to execute this recipe from the CLI. I will be calling both the user and the CLI profile awssecadmin.

    How to do it...

    We can enable versioning as follows:

    1. Go to the S3 bucket's Properties tab, click on Versioning, select Enable Versioning, and then click Save.
    2. To suspend versioning, select Suspend versioning on the same screen and click Save.

    How it works...

    In this recipe, we enabled and suspended versioning from the console. After we enable versioning, S3 stores every version of the object with a version ID. While making a GET request, we can specify the ID of the version to be returned. If you do not specify any version while making a GET request, S3 will return the latest version of the object.
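    From the CLI, the versions of an object can be listed and a specific version retrieved along the following lines (a sketch; the version ID shown is a placeholder):

    # List all versions (and delete markers) of a key.
    aws s3api list-object-versions \
        --bucket awsseccookbook \
        --prefix image-heartin-k.png \
        --profile awssecadmin

    # Download a specific version by its version ID (placeholder value).
    aws s3api get-object \
        --bucket awsseccookbook \
        --key image-heartin-k.png \
        --version-id EXAMPLEVERSIONID123 \
        old-version.png \
        --profile awssecadmin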

    We can restore an S3 version using either of the following ways:

    • Retrieve the version we want to restore and add it to the bucket with a PUT request (recommended).
    • Delete every version of the object available from the present version until the required version becomes the current version.

    When you delete an object with versioning enabled, a delete marker is added as the latest version of the object. If you delete the delete marker, another version of the delete marker is created. We can delete a specific version of an object by specifying the version ID. When we delete a version, no delete markers are inserted.

    Once versioning is enabled, it cannot be disabled, only suspended. No further versions are created when versioning is suspended. However, all previous versions will still be present. Once versioning is suspended, any new object will be stored with a NULL version ID and becomes the current object.

    There's more...

    We can enable and suspend versioning from the CLI using the put-bucket-versioning sub-command, providing the bucket and a versioning-configuration. The versioning configuration contains two parameters: MFADelete, which denotes the required state of MFA Delete (Enabled or Disabled), and Status, which denotes the required state of versioning (Enabled or Suspended). For the versioning configuration, we can either use the shorthand form, --versioning-configuration MFADelete=Disabled,Status=Enabled, or specify a JSON file with the configuration as --versioning-configuration file://resources/versioning-configuration.json; the JSON file will look as follows:

    {
        "MFADelete": "Disabled",
        "Status": "Enabled"
    }

    Complete CLI commands for enabling and suspending versioning are available with the code files.
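    For reference, a minimal sketch of the enable and verify commands (using the bucket and profile from this recipe) looks as follows:

    # Enable versioning using the shorthand syntax.
    aws s3api put-bucket-versioning \
        --bucket awsseccookbook \
        --versioning-configuration Status=Enabled \
        --profile awssecadmin

    # Check the current versioning status.
    aws s3api get-bucket-versioning \
        --bucket awsseccookbook \
        --profile awssecadmin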

    Let's quickly go through some important concepts related to S3 versioning:

    • Versioning is a sub-resource of an S3 object.
    • A delete request on a bucket with versioning suspended works as follows:

      • If there is a version with the NULL version ID (this is present only if the object was modified after suspending versioning), it is deleted, and a delete marker with the NULL version ID is inserted.
      • If there is no version with the NULL version ID, a delete marker with the NULL version ID is inserted.
    • We can use life cycle management rules to transition older versions to other S3 storage tiers (archives) or even delete them.
    • We can protect versions by enabling MFA Delete. With MFA Delete for versioning, an extra level of authentication is required to delete versions. The MFA Delete configuration is stored within the versioning sub-resource.

    Let's also quickly go through some scenario-based questions to understand versioning better:

    • We enabled versioning and PUT the same object twice (with modifications). We then suspended versioning and PUT the same object twice (with modifications). How many versions of the object will now be available if we check? 3.
    • We enabled versioning and PUT the same object twice, creating version 1 and version 2. We then suspended versioning and PUT the same object again, creating a third version (with the NULL version ID). Later, we deleted the object. Can we restore this object? If yes, which version will become the latest? We can restore the object, and the latest version following the restoration will be version 2.

    See also

    Implementing S3 cross-region replication within the same account

    In this recipe, we will learn to implement cross-region replication with S3 buckets. If cross-region replication is enabled for a bucket, the data in the bucket is asynchronously copied to a bucket in another region. Cross-region replication provides better durability for data and aids disaster recovery. Replicating data may also be done for compliance reasons or to reduce latency.

    Getting ready

    We need a working AWS account with the following resources configured:

    • A user with administrator permissions for S3 in the source bucket's account. I will be calling the user awssecadmin.
    • Two buckets, one in each of two regions, with versioning enabled. I will be using the awsseccookbook bucket in the us-east-1 (N. Virginia) region and the awsseccookbookmumbai bucket in ap-south-1 (Mumbai).

    How to do it...

    We can enable cross-region replication from the S3 console as follows:

    1. Go to the Management tab of your bucket and click on Replication.
    2. Click on Add rule to add a rule for replication. Select Entire bucket. Use the defaults for the other options and click Next.
    3. In the next screen, select the destination bucket. Leave the other options as-is and click Next.
    4. In the Configure options screen, ask S3 to create the required IAM role, name your rule (by selecting the relevant option), and then click Next.
    5. In the next screen, review the rule and click Save.
    6. Upload an object to the source bucket and verify whether the object is replicated in the destination bucket. Also, verify that the value of the Replication Status field in the object's Overview tab is COMPLETED once replication has finished.

    The autogenerated role's permissions policy document and trust policy document are available with code files as replication-permissions-policy.json and assume-role-policy.json.

    How it works...

    In this recipe, we enabled cross-region replication within the same account. We replicated the entire bucket. We can also specify a subset of objects using a prefix or tags. We did not change the storage class of replicated objects in this recipe even though we can.

    Here are the prerequisites for cross-region replication:

    1. The source and destination buckets must have versioning enabled and should be in different regions.
    2. Replication can be done only to a single destination bucket.
    3. S3 should have permission to replicate to a destination bucket.

    We asked S3 to create the required role for replication. The autogenerated role has a permissions policy with s3:Get* and s3:ListBucket permissions on the source bucket, and s3:ReplicateObject, s3:ReplicateDelete, s3:ReplicateTags, and s3:GetObjectVersionTagging permissions on the destination bucket.

    There's more...

    The steps to enable cross-region replication from the CLI can be summarized as follows:

    1. Create a role that can be assumed by S3, with a permissions policy with the s3:Get* and s3:ListBucket actions for the source bucket and objects, and the s3:ReplicateObject, s3:ReplicateDelete, s3:ReplicateTags, and s3:GetObjectVersionTagging actions for the destination bucket objects.
    2. Create (or update) a replication configuration for the bucket using the aws s3api put-bucket-replication command providing a replication configuration JSON.

    Complete CLI commands and policy JSON files are available with the code files.
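    As a sketch of step 2, a replication configuration saved as replication.json could look like the following; the role name MyS3ReplicationRole and the rule ID are made-up examples, and the exact fields required depend on the replication schema version in use:

    {
        "Role": "arn:aws:iam::135301570106:role/MyS3ReplicationRole",
        "Rules": [
            {
                "ID": "ReplicateEverything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": { "Status": "Disabled" },
                "Destination": { "Bucket": "arn:aws:s3:::awsseccookbookmumbai" }
            }
        ]
    }

    Apply it to the source bucket as follows:

    aws s3api put-bucket-replication \
        --bucket awsseccookbook \
        --replication-configuration file://replication.json \
        --profile awssecadmin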

    Let's quickly go through some more details related to S3 cross-region replication:

    • Cross-region replication is done via SSL.
    • Only objects that were added after enabling cross-region replication are replicated.
    • If the source bucket owner does not have read object or read ACL permission, objects are not replicated.
    • By default, the source object's ACLs are replicated. However, changing ownership to the destination bucket owner can be configured.
    • Objects with SSE-C encryption are not currently replicated.
    • To replicate objects with SSE-KMS encryption, we need to provide one or more KMS keys as required for S3 to decrypt the objects. KMS requests related to S3 in the source and destination regions can cause us to exceed the KMS request limit for our account. We can request an increase in our KMS request limit from AWS.
    • Since replication happens asynchronously, it might take some time (even up to hours for larger objects) to replicate.
    • Sub-resource changes are not currently replicated. For example, automated life cycle management rules are not replicated. However, we can configure a change in the current storage class of the object during replication.
    • We cannot replicate from a replica bucket.
• Deleting a version in the source bucket does not delete the version in the destination bucket, which adds extra protection for the data. With the old replication configuration schema, a delete marker was replicated if DeleteMarkerReplication was enabled; the new schema does not support delete marker replication, so delete actions are not propagated to the destination.
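
    As mentioned in the SSE-KMS point above, the replication rule has to opt in to replicating KMS-encrypted objects and name a KMS key in the destination region. A minimal sketch of such a rule follows; the destination key ARN is a placeholder, and the replication role would also typically need kms:Decrypt on the source key and kms:Encrypt on the destination key:

    {
        "Status": "Enabled",
        "Prefix": "",
        "SourceSelectionCriteria": {
            "SseKmsEncryptedObjects": {
                "Status": "Enabled"
            }
        },
        "Destination": {
            "Bucket": "arn:aws:s3:::awsseccookbookmumbai",
            "EncryptionConfiguration": {
                "ReplicaKmsKeyID": "arn:aws:kms:ap-south-1:135301570106:key/<destination-key-id>"
            }
        }
    }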

    See also

    Implementing S3 cross-region replication across accounts

    In this recipe, we will implement cross-region replication across accounts.

    Getting ready

    We need a working AWS account with the following resources configured:

• A user with administrator permission for S3 in the source bucket's account. I will be calling this user awssecadmin.
• Create one bucket each in two accounts, in two different regions, with versioning enabled. I will be using the awsseccookbook bucket in the us-east-1 (N. Virginia) region in the source account and the awsseccookbookbackupmumbai bucket in ap-south-1 (Mumbai) in the destination account.

    How to do it...

    We can enable cross-region replication from the S3 console as follows:

    1. Go to the Management tab of your bucket and click on Replication.
    2. Click on Add rule to add a rule for replication. Select Entire bucket.
Screens that do not change from those shown in previous sections are not shown again. Refer to the earlier sections if you have any doubts.
3. In the next screen, select a destination bucket from another account, providing that account's account ID, and click Save:
4. Select the option to change object ownership to the destination bucket owner and click Next:
5. In the Configure options screen, ask S3 to create the required IAM role for replication (as we did in the previous recipe). Also, copy the bucket policy that is provided by S3 so that we can apply it to the destination bucket later:

    The bucket policy for the destination bucket should appear as follows:

{
    "Version": "2008-10-17",
    "Id": "S3-Console-Replication-Policy",
    "Statement": [
        {
            "Sid": "S3ReplicationPolicyStmt1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::135301570106:root"
            },
            "Action": [
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Resource": [
                "arn:aws:s3:::awsseccookbookbackupmumbai",
                "arn:aws:s3:::awsseccookbookbackupmumbai/*"
            ]
        }
    ]
}
6. Review the rule and click Save. The autogenerated role's permissions policy document and trust policy document are available with the code files as replication-permissions-policy-other-account.json and assume-role-policy.json, respectively.
7. Log in to the account where the destination bucket is present and apply the bucket policy copied in step 5 to the destination bucket.
8. Upload an object to the source bucket and verify whether the object is replicated in the destination bucket. Also, verify that the destination account is the owner of the replicated file.

    How it works...

We enabled cross-region replication across accounts. For how cross-region replication works in general, refer to the Implementing S3 cross-region replication within the same account recipe. Replicating objects into another AWS account (cross-account replication) provides additional protection for data against situations such as someone gaining illegal access to the source bucket and deleting data within the bucket and its replicas.

We asked S3 to create the required role for replication. The autogenerated role has a permissions policy with s3:Get* and s3:ListBucket permissions on the source bucket, and s3:ReplicateObject, s3:ReplicateDelete, s3:ReplicateTags, and s3:GetObjectVersionTagging permissions on the destination bucket. Since we selected the option to change object ownership to the destination bucket owner, which is required for cross-region replication across accounts, the destination bucket policy also includes the s3:ObjectOwnerOverrideToBucketOwner action, which is required for the owner override. Without the owner override, the destination account won't be able to access the replicated files or their properties.

    With cross-region replication across accounts, the destination bucket in another account should also provide permissions to the source account using a bucket policy. The generated trust relationship document has s3.amazonaws.com as a trusted entity. The trust policy (assume role policy) allows the trusted entity to assume this role through the sts:AssumeRole action. Here, the trust relationship document allows the S3 service from the source account to assume this role in the destination account. For reference, the policy is provided with the code files as assume-role-policy.json.

    There's more...

    The steps to implement cross-region replication across accounts from the CLI can be summarized as follows:

1. Create a role that can be assumed by S3 and has a permissions policy with the s3:Get* and s3:ListBucket actions for the source bucket and objects, and the s3:ReplicateObject, s3:ReplicateDelete, s3:ReplicateTags, s3:GetObjectVersionTagging, and s3:ObjectOwnerOverrideToBucketOwner actions for the destination bucket objects.
    2. Create (or update) the replication configuration for the bucket using the aws s3api put-bucket-replication command by providing a replication configuration JSON. Use the AccessControlTranslation element to give ownership of the file to the destination bucket.
    3. Update the bucket policy of the destination bucket with the put-bucket-policy sub-command.

    Complete CLI commands and policy JSON files are provided with the code files for reference.
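
    As a sketch under the same placeholder names used earlier, the cross-account replication configuration mainly differs in its Destination element, and the destination bucket policy is applied from the destination account (the destination account ID and profile name below are placeholders):

    {
        "Role": "arn:aws:iam::135301570106:role/s3-crr-role",
        "Rules": [
            {
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {
                    "Bucket": "arn:aws:s3:::awsseccookbookbackupmumbai",
                    "Account": "<destination-account-id>",
                    "AccessControlTranslation": {
                        "Owner": "Destination"
                    }
                }
            }
        ]
    }

    aws s3api put-bucket-replication --bucket awsseccookbook \
        --replication-configuration file://replication-other-account.json \
        --profile awssecadmin

    aws s3api put-bucket-policy --bucket awsseccookbookbackupmumbai \
        --policy file://destination-bucket-policy.json \
        --profile destination-account-admin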

    Now, let's quickly go through some more concepts and features related to securing data on S3:

    • The S3 Object Lock property can be used to prevent an object from being deleted or overwritten. This is useful for a Write-Once-Read-Many (WORM) model.
    • We can use the Requester Pays property so that the requester pays for requests and data transfers. While Requester Pays is enabled, anonymous access to this bucket is disabled.
    • We can use tags with our buckets to track our costs against projects or other criteria.
    • By enabling Server Access Logging for our buckets, S3 will log detailed records for requests that are made to a bucket.
    • We can log object-level API activity using the CloudTrail data events feature. This can be enabled from the S3 bucket's properties by providing an existing CloudTrail trail from the same region.
• We can configure Events under bucket properties to receive notifications when an event occurs. Events can be configured based on a prefix or suffix. Supported events include PUT, POST, COPY, multipart upload completion, object create events, object lost in RRS (Reduced Redundancy Storage), permanently deleted, delete marker created, all object delete events, restore initiation, and restore completion.
    • S3 Transfer Acceleration is a feature that enables the secure transfer of objects over long distances between a client and a bucket.
• We can configure our bucket to allow cross-origin requests by creating a CORS configuration that specifies rules to identify the origins we want to allow, the HTTP methods supported for each origin, and other operation-specific information. A minimal CORS configuration is sketched after this list.
    • We can enable storage class analysis for our bucket, prefix, or tags. With this feature, S3 analyzes our access patterns and suggests an age at which to transition objects to Standard-IA.
    • S3 supports two types of metrics: the daily storage metric (free and enabled by default) and request and data transfer metrics (paid and need to be opted in to). Metrics can be filtered by bucket, storage type, prefix, object, or tag.
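
    As an example of the CORS point above, a minimal CORS configuration might look like the following sketch (the origin is a placeholder), and it can be applied with the aws s3api put-bucket-cors command:

    {
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000
            }
        ]
    }

    aws s3api put-bucket-cors --bucket awsseccookbook \
        --cors-configuration file://cors.json --profile awssecadmin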

    See also


    Key benefits

    • Explore useful recipes for implementing robust cloud security solutions on AWS
• Monitor your AWS infrastructure and workloads using CloudWatch, CloudTrail, Config, GuardDuty, and Macie
    • Prepare for the AWS Certified Security-Specialty exam by exploring various security models and compliance offerings

    Description

    As a security consultant, securing your infrastructure by implementing policies and following best practices is critical. This cookbook discusses practical solutions to the most common problems related to safeguarding infrastructure, covering services and features within AWS that can help you implement security models such as the CIA triad (confidentiality, integrity, and availability), and the AAA triad (authentication, authorization, and availability), along with non-repudiation. The book begins with IAM and S3 policies and later gets you up to speed with data security, application security, monitoring, and compliance. This includes everything from using firewalls and load balancers to secure endpoints, to leveraging Cognito for managing users and authentication. Over the course of this book, you'll learn to use AWS security services such as Config for monitoring, as well as maintain compliance with GuardDuty, Macie, and Inspector. Finally, the book covers cloud security best practices and demonstrates how you can integrate additional security services such as Glacier Vault Lock and Security Hub to further strengthen your infrastructure. By the end of this book, you'll be well versed in the techniques required for securing AWS deployments, along with having the knowledge to prepare for the AWS Certified Security – Specialty certification.

    Who is this book for?

    If you are an IT security professional, cloud security architect, or a cloud application developer working on security-related roles and are interested in using AWS infrastructure for secure application deployments, then this Amazon Web Services book is for you. You will also find this book useful if you’re looking to achieve AWS certification. Prior knowledge of AWS and cloud computing is required to get the most out of this book.

    What you will learn

    • Create and manage users, groups, roles, and policies across accounts
    • Use AWS Managed Services for logging, monitoring, and auditing
    • Check compliance with AWS Managed Services that use machine learning
    • Provide security and availability for EC2 instances and applications
    • Secure data using symmetric and asymmetric encryption
    • Manage user pools and identity pools with federated login

    Product Details

    Publication date : Feb 27, 2020
    Length: 440 pages
    Edition : 1st
    Language : English
    ISBN-13 : 9781838827427





    Table of Contents

    11 Chapters
Managing AWS Accounts with IAM and Organizations
Securing Data on S3 with Policies and Techniques
User Pools and Identity Pools with Cognito
Key Management with KMS and CloudHSM
Network Security with VPC
Working with EC2 Instances
Web Security Using ELBs, CloudFront, and WAF
Monitoring with CloudWatch, CloudTrail, and Config
Compliance with GuardDuty, Macie, and Inspector
Additional Services and Practices for AWS Security
Other Books You May Enjoy

    Customer reviews

Rating: 4.8 out of 5 (8 ratings)
5 star: 87.5%
4 star: 0%
3 star: 12.5%
2 star: 0%
1 star: 0%

    John Mar 23, 2020
5 stars
    Very comprehensive view on the subject. Also interesting and unique assist program. The book support program for the book available at Heartin.tech is an innovative concept. It provides an opportunity to interact with the author directly and clear all your doubts.
Amazon Verified review
    Akuma Oct 18, 2021
5 stars
It's a good book with clear examples, but I'm worried the how-to sections (which is all of the book) will be out of date in a few years. If you're looking for an explanation of why to do something, you'd need to look somewhere else.
Amazon Verified review
    Ravi Kumar k Aug 28, 2020
5 stars
AWS Security Cookbook by Heartin Kanikathottu is special in many ways.
1. The book is currently 8th among the best cloud computing books of all time.
2. Irrespective of what version of the book you buy, you get book support directly from author and students through BuddyCult.com, a platform developed and maintained by author. Earlier it was through heartin.tech, even though you still have to make requests through heartin.tech. It could be bit confusing. Anyway, it is worth. Technology books on IT and especially cloud changes very fast. But able to interact and learn with the author is much more worth than the price you pay.
3. This is the only recipe based book I could find on AWS Security Speciality certification.
Some cons are that the physical copy price is bit higher compared to other books in India and the Kindle has some formatting issues here and there. Anyway, my objective was to join the book support group and learn about cloud security in depth, and hence bought kindle edition.
Amazon Verified review
    Ashutosh Dash Aug 30, 2020
5 stars
Recipes within the book are very good for preparing for the AWS Security Speciality certification, even for beginners. This is the only recipe-based book I could find for this certification. The book support program directly from the author is of great help too, to prepare for the exam and even to learn more about IT security.
Amazon Verified review
    Dr. Srini Vuggumudi Aug 21, 2023
5 stars
    "AWS Security Cookbook" by Heartin Kanikathottu is an excellent resource for getting hands-on experience with AWS security services. The author did a great job of making the readers familiar with IAM and S3 policies; then taking deeper into data security, application security, monitoring, and compliance topics. The target audience for this book is cloud security professionals. I am preparing for AWS-SCS 02 exam. This book is also helpful for individuals interested in taking up the AWS Certified Security – Specialty certification. I liked how the author used AWS CLI to acquire and interact with AWS services. As a former developer, I love CLI instead of UI. The author provided the source code required via GitHub for the hands-on activities. I wish the author had provided instructions at the end of each chapter regarding releasing services acquired. The author did a great job of presenting the material serving the book’s intended purpose. I recommend this book for cloud security architects and anyone interested in taking AWS Certified Security – Specialty certification.
Amazon Verified review

    FAQs

How do I buy and download an eBook?

    Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

    If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

    Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing
When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

    • You may make copies of your eBook for your own use onto any machine
    • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

    If you want to purchase a video course, eBook or Bundle (Print+eBook) please follow below steps:

    1. Register on our website using your email address and the password.
    2. Search for the title by name or ISBN using the search option.
    3. Select the title you want to purchase.
    4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
• If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
    • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
    • To view your account details or to download a new copy of the book go to www.packtpub.com/account
    • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

    Our eBooks are currently available in a variety of formats such as PDF and ePubs. In the future, this may well change with trends and development in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

    You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
    • You can get the information you need immediately
    • You can easily take them with you on a laptop
    • You can download them an unlimited number of times
    • You can print them out
    • They are copy-paste enabled
    • They are searchable
    • There is no password protection
• They are lower in price than print
    • They save resources and space
What is an eBook?

    Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

    For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.