EC2 upload failed to S3 - amazon-ec2

I am trying to upload a file from an EC2 instance to S3 bucket and get this error:
[ec2-user@zzzzzzz parsers]$ aws s3 cp file.txt s3://bucket/output/file.txt
upload failed: ./file.txt to s3://bucket/output/file.txt A client error (InvalidAccessKeyId) occurred when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I have already configured the aws configure file in EC2 as follows:
[ec2-user@zzzzz parsers]$ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************NTr6      config-file
secret_key     ****************AFJQ      config-file
    region                us-west-2      config-file    ~/.aws/config
What else should I do to make this work?

InvalidAccessKeyId indicates that the Access Key and Secret Key are not valid.
Access Keys (and their corresponding Secret Keys) can be associated with either:
Master (or root) credentials, or
An Identity and Access Management (IAM) user
It is recommended that Master credentials not be used on a daily basis. (See IAM Best Practices.)
If your credentials are associated with an IAM user, you can generate a new set of credentials:
Go to Identity and Access Management (IAM)
Select the User
Manage Access Keys
Create Access Key
A new Access Key and Secret Key will be displayed. Try using them in CLI configuration.
Up to two sets of Access Keys can be associated with a User at any time.
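As a rough sketch, the same steps can be done with the AWS CLI (the user name my-iam-user is a placeholder):

# List the user's existing keys (at most two can be active) and create a new pair.
aws iam list-access-keys --user-name my-iam-user
aws iam create-access-key --user-name my-iam-user
# Then run `aws configure` again on the instance and paste the new AccessKeyId and SecretAccessKey.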

It's recommended to use IAM roles instead of IAM access keys for EC2 instances. By creating an IAM role with access to S3 and attaching it to your EC2 instance, you can list, download and upload files from and to your S3 bucket(s) based on the role's policy.
It's more secure and you don't have to configure your AWS credentials on the instance.
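As a rough sketch with the AWS CLI (the role name ec2-s3-access, the AWS-managed AmazonS3FullAccess policy and the instance ID are placeholders; in practice you would scope the policy to specific buckets):

# Create a role that EC2 instances can assume.
aws iam create-role --role-name ec2-s3-access \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# Grant it S3 access.
aws iam attach-role-policy --role-name ec2-s3-access \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Wrap the role in an instance profile and attach it to the running instance.
aws iam create-instance-profile --instance-profile-name ec2-s3-access
aws iam add-role-to-instance-profile --instance-profile-name ec2-s3-access --role-name ec2-s3-access
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ec2-s3-access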

Related

Lambda can't decrypt image

I am working in a multi-account AWS context.
I have Lambdas in accounts A, B and C, and an ECR repository in account D. The Lambdas pull their container image from account D.
There is a customer-managed KMS key, dedicated to the ECR repository, in account D.
The KMS key policy allows the role used by each Lambda to perform KMS operations.
The Lambda roles in accounts A, B and C allow use of KMS.
When I try to run my Lambdas I get the following response:
Lambda can't decrypt the container image because KMS access is denied. Check the function's KMS key settings.
KMS Exception: AccessDeniedException KMS Message: The ciphertext refers to a customer master key that does not exist,
does not exist in this region, or you are not allowed to access.
Here is my KMS key policy:
And here is the role used by the Lambda:
And finally my ECR repository using the key:
I have followed this doc from AWS: https://aws.amazon.com/fr/premiumsupport/knowledge-center/lambda-kmsaccessdeniedexception-errors/
but the error messages discussed in that link are slightly different.
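As a rough debugging sketch (the key and role ARNs are placeholders), it can help to confirm both sides of the cross-account grant: the key policy in account D and the Lambda execution role's own permissions in accounts A, B and C:

# In account D: dump the key policy and check it allows kms:Decrypt for the Lambda roles.
aws kms get-key-policy --policy-name default --output text \
  --key-id arn:aws:kms:eu-west-1:444444444444:key/1234abcd-12ab-34cd-56ef-1234567890ab
# In account A/B/C: check whether the Lambda execution role itself is allowed to call kms:Decrypt on that key.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111111111111:role/my-lambda-role \
  --action-names kms:Decrypt \
  --resource-arns arn:aws:kms:eu-west-1:444444444444:key/1234abcd-12ab-34cd-56ef-1234567890ab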

Can't copy AWS RDS DB snapshot because of key not existing or no access? (Administrator account)

I have administrator access to my AWS account and I'm trying to copy a DB snapshot that has encryption enabled. I'm specifying the key ID, but it's still giving me the following error:
/opt/homebrew/lib/ruby/gems/3.0.0/gems/aws-sdk-core-3.124.0/lib/seahorse/client/plugins/raise_response_errors.rb:17:in
`call': The target KMS key [<my_key_id>] does not exist, is not
enabled or you do not have permissions to access it.
(Aws::RDS::Errors::KMSKeyNotAccessibleFault)
The only thing that has changed from the time it worked to the time it no longer works is me enabling encryption on the database, so now its snapshots are encrypted. As a result, I've added the kms_key_id parameter to my copy_db_snapshot method.
Here's how I'm doing this with the aws-sdk-rds gem:
client.copy_db_snapshot({
  source_db_snapshot_identifier: source_db_arn,
  target_db_snapshot_identifier: target_db_snapshot_identifier,
  source_region: source_db_region,
  kms_key_id: '<my_key_id>'
})
I don't fully understand this error message. The key definitely exists (I've tried both the bare key ID and the full ARN), and I definitely have permission. I'm using a key generated by AWS, though I'm not sure if that helps.
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/RDS/Client.html#copy_db_snapshot-instance_method
If you copy an encrypted snapshot to a different Amazon Web Services Region, then you must specify an Amazon Web Services KMS key identifier for the destination Amazon Web Services Region. KMS keys are specific to the Amazon Web Services Region that they are created in, and you can't use KMS keys from one Amazon Web Services Region in another Amazon Web Services Region.
You need to specify the key ID of a KMS key in the destination region. This is because the kms_key_id parameter is actually the ID of the KMS key used to encrypt the new snapshot copy, not the key that encrypted your original snapshot.
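As a rough sketch of the same cross-region copy with the AWS CLI (the regions, snapshot identifiers and destination-region key ARN are placeholders):

# Run against the destination region; --kms-key-id must be a key that exists in that region.
aws rds copy-db-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:111111111111:snapshot:my-source-snapshot \
  --target-db-snapshot-identifier my-snapshot-copy \
  --kms-key-id arn:aws:kms:us-west-2:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab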

What permissions does my Lambda function need to retrieve secrets from AWS Secrets Manager?

What permissions does my Lambda function need to be able to retrieve secrets from AWS Secrets Manager and also update them?
You need the secretsmanager:GetSecretValue permission to retrieve secrets and the secretsmanager:UpdateSecret permission to update them.
Note that if you are using a customer-managed AWS KMS key for encryption you will also need some KMS permissions:
kms:Decrypt for retrieving the secret.
kms:Decrypt and kms:GenerateDataKey for updating the secret.
https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html
https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/update-secret.html
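As a rough sketch, assuming an execution role named my-lambda-role and an inline policy named secrets-access (both placeholders), the permissions above could be attached like this; scope the resources down to your own secret and key ARNs:

aws iam put-role-policy --role-name my-lambda-role --policy-name secrets-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["secretsmanager:GetSecretValue", "secretsmanager:UpdateSecret"],
        "Resource": "arn:aws:secretsmanager:us-east-1:111111111111:secret:my-secret-*"
      },
      {
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"
      }
    ]
  }'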
If you are using the Lambda rotation functions provided by AWS, then (as described in the docs) you will need: DescribeSecret, GetSecretValue, PutSecretValue, UpdateSecretVersionStage and GetRandomPassword. If you are using a customer managed KMS key (CMK) you will also need Decrypt and GenerateDataKey permissions for that CMK (both in the Lambda policy and in the KMS key policy).
If you are seeing Task timed out errors, it is likely that your Lambda cannot reach either the Secrets Manager endpoint (try using a VPC endpoint) or the database (check the security group settings).
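If the timeout case applies, here is a rough sketch of creating an interface VPC endpoint for Secrets Manager (the VPC, subnet and security group IDs and the region are placeholders):

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.secretsmanager \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0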

Amazon S3: The application is giving an access denied exception on Linux. Why?

I have a Spring Boot application to upload and delete a file in an Amazon S3 bucket.
The project works fine on Windows, but when I try to upload anything using a curl command on Linux through PuTTY, it gives me an access denied exception.
The exception given is :
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
You probably didn't set up your AWS credentials on your Linux machine.
The instructions are here;
just make sure you have your aws_access_key_id and aws_secret_access_key configured.
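As a rough sketch (the key values are placeholders), the shared credentials file on the Linux host would look like this:

# Create the shared credentials file read by the AWS SDK for Java and the CLI.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF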
Can you check your IAM credentials and S3 policy settings?
Credential
Regardless of platform, you need to use credentials (access key ID and secret access key). Please check that the credential files on both machines contain the same access key ID.
S3 policy
An S3 bucket policy can allow or deny access based on credentials or IP addresses. Have you configured such a policy?
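As a quick sanity check (the bucket name is a placeholder), you can confirm which identity the Linux host is actually calling with and whether a bucket policy is in place:

# Which credentials is this machine actually using?
aws sts get-caller-identity
# Is there a bucket policy that could deny this principal or IP range?
aws s3api get-bucket-policy --bucket my-bucket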

How to reference an AWS access key (secret key) in cloud-init without hard coding

I want to write a cloud-init script which initializes the REX-Ray Docker plugin (a service which uses AWS credentials in its configuration).
I have considered the following methods. However, they each have disadvantages.
Hard-code the access key/secret key in the cloud-init script.
Problem: this is not secure.
Create an IAM role, then read the access key and secret key from the instance metadata.
Problem: the access key expires after a certain period,
so I would need to restart the REX-Ray daemon process, which makes the service temporarily unavailable.
Please tell me which is the better way to reference the access key/secret key, or another way if one exists.
Thanks in advance.
The Docker plugin should get the credentials automatically. You don't have to do anything. Do not set any environment variables for AWS credentials.
The AWS CLI / AWS SDKs will get the credentials automatically from the metadata server.
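As a rough sketch (the role name is a placeholder), you can see the temporary, auto-rotating credentials that the SDK picks up from the instance metadata service; because the SDK refreshes them on its own, the expiry concern in the question does not require restarting the daemon:

# Name of the role attached to the instance profile.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Temporary credentials (AccessKeyId, SecretAccessKey, Token, Expiration) for that role.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role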
You can use the following methods of authentication:
Environment variables
Export both the access and secret keys as environment variables as follows:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared credentials file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux and OS X, or "%USERPROFILE%\.aws\credentials" for Windows users. If Terraform fails to detect credentials inline or in the environment, it will check this location.
You can optionally specify a different location in the configuration by providing the shared_credentials_file attribute as follows:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
https://www.terraform.io/docs/providers/aws/
