
Amazon S3 Security: Access and Policies

  • 7 min read
  • 03 May 2018


In this article, you will learn about Amazon S3 and the security access and policies associated with it. AWS provides S3 as an object storage service, where you can store object files up to 5 TB in size at a low cost. It is highly secure, durable, and scalable, and has virtually unlimited capacity. It allows concurrent read/write access to objects by separate clients and applications, and you can store any type of file in S3.

[box type="shadow" align="" class="" width=""]This article is an excerpt taken from the book,' Cloud Security Automation', written by Prashant Priyam.[/box]

AWS keeps multiple copies of all data stored in standard S3 storage, replicated across devices within the region, to ensure a durability of 99.999999999%. Note, however, that S3 cannot be used as block storage.


AWS S3 storage is further categorized into three storage classes:

  • S3 Standard: This is suitable when you need durable storage for files with frequent access.
  • Reduced Redundancy Storage (RRS): This is suitable for less critical data that is persistent in nature.
  • Infrequent Access (IA): This is suitable for durable data with infrequent access. Glacier is an alternative here, but its retrieval times are very long, so S3 IA is the better option when you still need quick access. It provides the same performance as S3 Standard storage.


AWS S3 has built-in error correction and fault-tolerance capabilities. Apart from this, S3 gives you the option to enable versioning and cross-region replication (CRR).

If you enable versioning on an existing bucket, it applies only to objects added after that point, not to existing objects. The same is true of cross-region replication: once enabled, it replicates only new objects.
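Versioning can also be enabled outside the console. As a minimal sketch, assuming the placeholder bucket name used in the examples below, this uses boto3's put_bucket_versioning call:

import boto3

s3 = boto3.client('s3')

# Enable versioning on an existing bucket; only objects written
# after this call will receive version IDs.
s3.put_bucket_versioning(
    Bucket='prashantpriyam',  # placeholder bucket name
    VersioningConfiguration={'Status': 'Enabled'}
)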

Security in Amazon S3


S3 is highly secure storage. It allows us to enable fine-grained access policies for resource access as well as encryption.

To enable access-level security, you can use the following:

  • S3 bucket policy
  • IAM access policy
  • MFA for object deletion


The S3 bucket policy is a JSON document that defines who can access the bucket, what can be accessed, and at what level:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::prashantpriyam/*"
      ]
    }
  ]
}


In the preceding JSON code, we have allowed read-only access to all the objects (the s3:GetObject action defined in the Action section) in the S3 bucket named prashantpriyam (defined in the Resource section) for everyone (the wildcard Principal).
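To attach such a policy programmatically, you can pass it to boto3's put_bucket_policy as a JSON string; a minimal sketch, assuming the same bucket name:

import boto3
import json

policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::prashantpriyam/*"]
    }]
}

s3 = boto3.client('s3')
# Bucket policies are attached as a serialized JSON string.
s3.put_bucket_policy(Bucket='prashantpriyam', Policy=json.dumps(policy))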

Similar to the S3 bucket policy, we can also define an IAM policy for S3 bucket access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::prashantpriyam"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::prashantpriyam/*"]
    }
  ]
}


The preceding policy grants the user enough permissions to work with the bucket programmatically and from the AWS Console as well.
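One way to attach this policy is as an inline IAM user policy; here is a sketch using boto3's put_user_policy, where the user name and policy name are placeholders:

import boto3
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
         "Resource": "arn:aws:s3:::*"},
        {"Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": ["arn:aws:s3:::prashantpriyam"]},
        {"Effect": "Allow",
         "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
         "Resource": ["arn:aws:s3:::prashantpriyam/*"]}
    ]
}

iam = boto3.client('iam')
# Attach the policy inline to a hypothetical IAM user.
iam.put_user_policy(
    UserName='s3-user',           # placeholder user name
    PolicyName='S3BucketAccess',  # placeholder policy name
    PolicyDocument=json.dumps(policy)
)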

In the first section of the policy (the JSON code that follows), we have granted the user permission to get the bucket location and list all the buckets for traversal, but not to perform other operations, such as getting object details from the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::prashantpriyam"]
    }
  ]
}

In the second section of the policy (shown as follows), we have given the user permission to traverse into the prashantpriyam bucket and perform PUT, GET, and DELETE operations on its objects:


{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
  ],
  "Resource": ["arn:aws:s3:::prashantpriyam/*"]
}


MFA adds a layer of security to your account: after password-based authentication, you are asked for a temporary code generated by an MFA device. You can also use a virtual MFA application such as Google Authenticator.

AWS S3 supports MFA conditions in its API and policies, which lets you enforce an MFA-based access policy on an S3 bucket.

Let's look at an example where we give users read-only access to a bucket, while any operation on the priyam prefix requires an MFA-authenticated session that is no more than 600 seconds old:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
      "Condition": {"Null": {"aws:MultiFactorAuthAge": true}}
    },
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
      "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 600}}
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::prashantpriyam/*"
    }
  ]
}

In the preceding code, the two Deny statements block all operations on the priyam prefix unless the request is MFA-authenticated and the MFA session is no older than 600 seconds, while the final statement allows read-only (GetObject) access to the rest of the bucket.
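To satisfy such a policy, the user needs temporary credentials from an MFA-authenticated session. A minimal sketch using STS's get_session_token, where the MFA device ARN and token code are placeholders:

import boto3

sts = boto3.client('sts')

# Exchange a long-lived access key plus an MFA code for temporary,
# MFA-authenticated credentials (900 seconds is the minimum duration).
resp = sts.get_session_token(
    DurationSeconds=900,
    SerialNumber='arn:aws:iam::123456789012:mfa/s3-user',  # placeholder MFA ARN
    TokenCode='123456'  # current code from the MFA device
)

creds = resp['Credentials']
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
# Requests made with these credentials carry aws:MultiFactorAuthAge,
# so they pass the policy's conditions while that age is under 600 seconds.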


Apart from MFA, we can enable versioning so that S3 automatically keeps multiple versions of an object, eliminating the risk of unwanted modification of data. Versioning can be enabled from the AWS Console as well as through the API and CLI, as sketched earlier.

You can also enable cross-region replication so that the contents of an S3 bucket are replicated to another region of your choice. This option is mostly used when you want to deliver static content in two different regions, but it also gives you redundancy.
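A sketch of configuring replication with boto3's put_bucket_replication; the IAM role ARN and destination bucket are placeholders, and versioning must already be enabled on both buckets:

import boto3

s3 = boto3.client('s3')

# Both source and destination buckets must have versioning enabled
# before replication can be configured.
s3.put_bucket_replication(
    Bucket='prashantpriyam',
    ReplicationConfiguration={
        # Placeholder role that allows S3 to replicate objects on your behalf.
        'Role': 'arn:aws:iam::123456789012:role/s3-replication-role',
        'Rules': [{
            'ID': 'ReplicateEverything',
            'Prefix': '',  # empty prefix = replicate all new objects
            'Status': 'Enabled',
            'Destination': {'Bucket': 'arn:aws:s3:::prashantpriyam-replica'}
        }]
    }
)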

For infrequently accessed data, you can enable a lifecycle policy, which transfers objects to a low-cost archival storage tier called Glacier.

Let's see how to secure the S3 bucket using the AWS Console. To do this, we need to log in to the AWS Console and search for S3. Now, click on the bucket you want to secure:

[Screenshot: selecting the bucket in the S3 Console]


In the screenshot, we have selected the bucket called velocis-manali-trip-112017 and, in the bucket properties, we can see that none of the security options we have learned about so far are enabled. Let's implement them.

Now, we need to click on the bucket and then on the Properties tab. From here, we can enable Versioning, Default encryption, Server access logging, and Object-level logging:

[Screenshot: the bucket's Properties tab, showing Versioning, Default encryption, Server access logging, and Object-level logging]


To enable Server access logging, you need to specify a target bucket to receive the logs and a prefix for the log objects:

[Screenshot: enabling Server access logging]
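The same setting can be applied through the API with boto3's put_bucket_logging; a sketch, where the target log bucket is a placeholder that must already grant S3's log delivery service write access:

import boto3

s3 = boto3.client('s3')

# The target bucket must allow the S3 log delivery service to write to it
# (for example, via the log-delivery-write ACL).
s3.put_bucket_logging(
    Bucket='prashantpriyam',
    BucketLoggingStatus={
        'LoggingEnabled': {
            'TargetBucket': 'prashantpriyam-logs',  # placeholder log bucket
            'TargetPrefix': 'access-logs/'
        }
    }
)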


To enable encryption, you need to specify whether you want to use AES-256 (SSE-S3) or AWS KMS-based (SSE-KMS) encryption.
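Default encryption can also be set through the API; a minimal boto3 sketch using SSE-S3 (for SSE-KMS, swap in the aws:kms algorithm and a key ID):

import boto3

s3 = boto3.client('s3')

# Set AES-256 (SSE-S3) as the default encryption for new objects.
# For SSE-KMS, use SSEAlgorithm 'aws:kms' plus a 'KMSMasterKeyID'.
s3.put_bucket_encryption(
    Bucket='prashantpriyam',
    ServerSideEncryptionConfiguration={
        'Rules': [{
            'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}
        }]
    }
)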

Now, click on the Permissions tab. From here, you will be able to define the Access Control List, Bucket Policy, and CORS configuration:

[Screenshot: the bucket's Permissions tab]


In Access Control List, you define who can access what and to what extent; in Bucket Policy, you define resource-based permissions on the bucket (as in the bucket policy example earlier); and in CORS configuration, you define the rules for CORS.

Let's look at a sample CORS file:

<!-- Sample policy -->
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>


It's an XML document that allows read-only (GET) access to all origins. In the preceding code, instead of a URL, the origin is the wildcard *, which means any origin.
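An equivalent configuration can be applied with boto3's put_bucket_cors, which takes the rules as a dictionary instead of XML; a sketch:

import boto3

s3 = boto3.client('s3')

# Equivalent of the XML above: read-only CORS access for any origin.
s3.put_bucket_cors(
    Bucket='prashantpriyam',
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['*'],
            'AllowedMethods': ['GET'],
            'AllowedHeaders': ['Authorization'],
            'MaxAgeSeconds': 3000
        }]
    }
)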

Now, click on the Management tab. From here, we define lifecycle rules, replication, and so on:

[Screenshot: the bucket's Management tab]


In this example lifecycle rule, objects in the S3 bucket are transitioned to the Standard-IA tier after 30 days and to Glacier after 60 days.
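The same rule can be expressed through boto3's put_bucket_lifecycle_configuration; a sketch matching the 30-day and 60-day transitions described above:

import boto3

s3 = boto3.client('s3')

# Transition objects to Standard-IA after 30 days and to Glacier after 60.
s3.put_bucket_lifecycle_configuration(
    Bucket='prashantpriyam',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'ArchiveAfter60Days',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},  # apply to the whole bucket
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                {'Days': 60, 'StorageClass': 'GLACIER'}
            ]
        }]
    }
)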

This is how we define security on the S3 bucket. To summarize, we learned about security access and policies in Amazon S3.

If you've enjoyed reading this, do check out this book, 'Cloud Security Automation' to know how private cloud security functions can be automated for better time and cost-effectiveness.
