S3 Bucket Policy: require KMS for all files except one

I have an S3 bucket that contains sensitive data, so I want to ensure that any objects put into the bucket are encrypted with a specific KMS key. I’m doing this already with a bucket policy statement and it works well:

{
    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::mybucket/*",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}

But I really want to create one exception to this Deny rule. One thing I like doing with my S3 buckets is putting a README.md file in the root directory, so that when future maintainers go looking around, they have documentation about the original intent and purpose. This works best if the README.md is not encrypted with a CMK (Customer Managed Key), so I want to make an exception to the rule above.

But Deny statements take precedence over Allow statements in an S3 bucket policy, so there doesn’t seem to be any way for me to make an exception here. Am I missing something? Is there any way to enforce usage of a specific KMS CMK for all files except one?
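One possible shape for the exception (an untested sketch): the IAM policy grammar allows NotResource in place of Resource, so the Deny could be scoped to every key in the bucket except the README. An unencrypted PUT of README.md would then fall outside the statement entirely, while everything else still requires the CMK:

```json
{
    "Sid": "DenyWhenWrongCMK",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:PutObject",
    "NotResource": "arn:aws:s3:::mybucket/README.md",
    "Condition": {
        "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:REDACTED"
        }
    }
}
```

Whether S3 accepts NotResource in this particular bucket-policy context would need to be verified against the policy validator.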

Author: Nic

auto delete aws S3 backups

I need to automate deletion of AWS S3 backups. Evidently, if I tag each backup with one of daily, weekly, monthly, or yearly, then AWS will delete based on my desired retention counts for those periods. However, I see no easy way to determine which of those tags to apply in my daily backup process, which is:

#   Upload backup file

aws s3 cp /tmp/backup_2007020644.20200703.1248_blobs.tar s3://foo-bar-baz-someaccount-us-east-2/backup_2007020644.20200703.1248_blobs.tar --region us-east-2 --only-show-errors


aws s3api put-object-tagging --bucket foo-bar-baz-someaccount-us-east-2 --key backup_2007020644.20200703.1248_blobs.tar --region us-east-2 --tagging 'TagSet=[{Key=backuptype,Value=blobs}]'

#   now let's list this s3 bucket

aws s3 ls s3://foo-bar-baz-someaccount-us-east-2 --region  us-east-2 

2020-07-01 22:55:57   31904428 backup_2007010938.20200701.2233_blobs.tar
2020-07-01 22:55:43     893239 backup_2007010938.20200701.2233_mongo.tar
2020-07-02 15:30:36   34343354 backup_2007010938.20200702.1508_blobs.tar
2020-07-02 15:30:22     893676 backup_2007010938.20200702.1508_mongo.tar
2020-07-03 01:20:04   30596405 backup_2007020644.20200703.0055_blobs.tar
2020-07-03 01:19:51     893741 backup_2007020644.20200703.0055_mongo.tar
2020-07-03 12:48:44   34658003 backup_2007020644.20200703.1226_blobs.tar
2020-07-03 12:48:30     895294 backup_2007020644.20200703.1226_mongo.tar
2020-07-03 15:05:00   34657972 backup_2007020644.20200703.1248_blobs.tar
2020-07-03 15:04:46     895279 backup_2007020644.20200703.1248_mongo.tar

Alternatively, I can code up my own logic to parse the listing above and issue a delete command per backup file, with my code keeping track of my retention policy … but there must be a better way. Any advice?
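For what it's worth, S3 lifecycle rules can expire objects filtered by tag, which avoids custom deletion code entirely. The caveat is that lifecycle expiration is age-based (days since creation), not count-based ("keep the last N"), so it only approximates retention counts. A sketch, using a hypothetical `retention` tag key and illustrative day values:

```json
{
    "Rules": [
        {
            "ID": "ExpireDailyBackups",
            "Status": "Enabled",
            "Filter": {
                "Tag": { "Key": "retention", "Value": "daily" }
            },
            "Expiration": { "Days": 7 }
        },
        {
            "ID": "ExpireMonthlyBackups",
            "Status": "Enabled",
            "Filter": {
                "Tag": { "Key": "retention", "Value": "monthly" }
            },
            "Expiration": { "Days": 365 }
        }
    ]
}
```

This could be applied with `aws s3api put-bucket-lifecycle-configuration --bucket foo-bar-baz-someaccount-us-east-2 --lifecycle-configuration file://lifecycle.json`, with the backup script tagging each upload with the appropriate retention value.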

Author: Scott Stensland

How do I apply a Statement in my bucket policy to all users in my account?

Setting the principal to “*” with “Effect”: “Allow” would make the bucket public, and I don’t want that.

“arn:aws:iam::my account id:user/*” is showing as invalid.

Currently I’m just listing all the users in the principal, but that’s not exactly very maintainable.

I can allow in the IAM policy attached to the users and then deny in the bucket policy using NotPrincipal, but I can see that getting a bit complex, and it seems much more secure to whitelist than to blacklist.
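For reference, one commonly suggested pattern (a sketch; the account ID is a placeholder): setting the principal to the account’s root ARN scopes the statement to the whole account without making the bucket public. It delegates authorization to IAM, so each user still needs a matching allow in their own IAM policy:

```json
{
    "Sid": "AllowWholeAccount",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*"
}
```

Note that despite the name, `:root` here refers to the AWS account itself, not the root user specifically.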

Author: doug