Amazon S3: The application is giving an Access Denied exception on Linux. Why? - spring-boot

I have a spring-boot application that uploads and deletes a file in an Amazon S3 bucket.
The project works fine on Windows, but when I try to upload anything with a curl command on Linux (through PuTTY), it gives me an Access Denied exception.
The exception given is :
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied

You probably didn't set up your AWS credentials on your Linux machine.
The instructions are in the AWS documentation; just make sure you have your aws_access_key_id and aws_secret_access_key configured.
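On Linux the AWS SDK looks for the shared credentials file at ~/.aws/credentials by default; a minimal sketch of that file (the key values below are the placeholder examples from the AWS docs) looks like this:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

You can create it by hand or by running aws configure on the Linux box.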

Can you check your IAM credentials and your S3 policy settings?
Credential
Regardless of platform, credentials (access key ID and secret access key) are required. Please check that the credential files on both machines contain the same access key ID.
S3 policy
An S3 bucket policy can allow or deny access based on credentials or IP addresses. Have you configured such a policy?
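For reference, this is a minimal sketch of a bucket policy that denies all S3 actions unless the request comes from a given IP range (the bucket name and CIDR are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessFromOutsideAllowedIPs",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}

If a policy like this is attached to the bucket, the Windows machine may simply be inside the allowed range while the Linux box is not.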

Related

Databricks Azure Blob Storage access

I am trying to access files stored in Azure blob storage and have followed the documentation linked below:
https://docs.databricks.com/external-data/azure-storage.html
I was able to mount the Azure Blob Storage on DBFS, but that method does not seem to be recommended anymore, so I tried to set up direct access via URI using SAS authentication.
spark.conf.set("fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage-account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage-account>.dfs.core.windows.net", "<token>")
Now when I try to access any file using:
spark.read.load("abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/<path-to-data>")
I get the following error:
Operation failed: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.", 403, HEAD,
I am able to mount the storage account using the same SAS token but this is not working.
What needs to be changed for this to work?
If you are using Blob Storage, then you have to use wasbs and not abfss. I tried the same code as yours with my SAS token and got the same error with my Blob Storage:
spark.conf.set("fs.azure.account.auth.type.<storage_account>.dfs.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage_account>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage_account>.dfs.core.windows.net", "<token>")
df = spark.read.load("abfss://<container>@<storage_account>.dfs.core.windows.net/input/sample1.csv")
When I used the following modified code, I was able to successfully read the data.
spark.conf.set("fs.azure.account.auth.type.<storage_account>.blob.core.windows.net", "SAS")
spark.conf.set("fs.azure.sas.token.provider.type.<storage_account>.blob.core.windows.net", "org.apache.hadoop.fs.azurebfs.sas.FixedSASTokenProvider")
spark.conf.set("fs.azure.sas.fixed.token.<storage_account>.blob.core.windows.net", "<token>")
df = spark.read.format("csv").load("wasbs://<container>@<storage_account>.blob.core.windows.net/input/sample1.csv")
UPDATE:
To access files in Azure Blob Storage when the firewall allows access only from selected networks, you need to deploy the Databricks workspace in a virtual network (VNet).
Then allow that same virtual network in the storage account's networking settings as well.
I have also selected service endpoints and subnet delegation for that subnet.
Now when I run the same code again using the file path wasbs://<container>@<storage_account>.blob.core.windows.net/<path>, the file is read successfully.

Go storage client not able to access GCP bucket

I have a Golang service which exposes an API where we try to upload a CSV to a GCP bucket. On my local host, I set the environment variable GOOGLE_APPLICATION_CREDENTIALS
and point it to the file path of the service account JSON. But when deploying to an actual GCP instance, I'm getting the below error while trying to access this API. Ideally, the service should talk to the GCP metadata server, fetch the credentials and then store them in a JSON file. So there are two problems here:
The service is not querying the metadata server to get the credentials.
If the file is present (I created it manually), the service cannot access it due to permission issues.
Any help would be appreciated.
Error while initializing storage Client:dialing: google: error getting credentials using well-known file (/root/.config/gcloud/application_default_credentials.json): open /root/.config/gcloud/application_default_credentials.json: permission denied
Finally, after long debugging and searching the web, I found that there is already an open issue for the Go storage client: https://github.com/golang/oauth2/issues/337. I had to make a few changes to the code using this method: https://pkg.go.dev/golang.org/x/oauth2/google#ComputeTokenSource, which basically fetches the token explicitly from the metadata server and then calls the subsequent cloud APIs.
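A minimal sketch of that workaround, assuming the default Compute Engine service account and the cloud.google.com/go/storage client (bucket and object names are placeholders):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Fetch tokens explicitly from the GCE metadata server instead of
	// relying on the well-known application_default_credentials.json file.
	// An empty account name means the instance's default service account.
	ts := google.ComputeTokenSource("", "https://www.googleapis.com/auth/devstorage.read_write")

	// Pass the token source to the storage client.
	client, err := storage.NewClient(ctx, option.WithTokenSource(ts))
	if err != nil {
		log.Fatalf("error while initializing storage client: %v", err)
	}
	defer client.Close()

	// Example: write a CSV object to a bucket.
	w := client.Bucket("my-bucket").Object("upload/file.csv").NewWriter(ctx)
	if _, err := w.Write([]byte("col1,col2\n1,2\n")); err != nil {
		log.Fatalf("write failed: %v", err)
	}
	if err := w.Close(); err != nil {
		log.Fatalf("close failed: %v", err)
	}
}

Because the token source is passed explicitly, the client never tries to open application_default_credentials.json, so the file permission problem no longer matters.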

How to refer to an AWS access key (secret key) in cloud-init without hard-coding it

I want to write a cloud-init script which initializes the REX-Ray Docker plugin (a service which uses AWS credentials in its configuration).
I have considered the following methods; however, they both have disadvantages.
Hard-code the access key/secret key in the cloud-init script.
Problem: this is not secure.
Create an IAM role, then obtain the access key and secret key from the instance metadata.
Problem: the access key expires after a certain period,
so I would need to restart the REX-Ray daemon process, which makes the service temporarily unavailable.
Please tell me which is the better way to refer to the access key/secret key, or another way if one exists.
Thanks in advance.
The Docker plugin should get the credentials automatically; you don't have to do anything. Do not set any environment variables for AWS credentials.
The AWS CLI / AWS SDKs will fetch the credentials automatically from the metadata server.
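For example, from inside the instance you can confirm that temporary credentials are being served by the instance metadata endpoint (the role name is a placeholder):

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-rexray-role

The response contains AccessKeyId, SecretAccessKey, Token and Expiration; the CLI and SDKs refresh these credentials before they expire, so no daemon restart is needed.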
You can use the following methods of authentication.
Environment variables
Export both the access and secret keys in your environment as follows:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared credentials file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux and OS X, or "%USERPROFILE%\.aws\credentials" for Windows users. If Terraform fails to detect credentials inline or in the environment, it will check this location.
You can optionally specify a different location in the configuration by providing the shared_credentials_file attribute as follows:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
https://www.terraform.io/docs/providers/aws/

Forge configuration: what to put in Server Providers => Amazon => Key/Secret

I created an Ubuntu server on Amazon AWS.
Then I registered for Forge, and now trying to configure it.
I selected source control to be Bitbucket.
I selected Amazon in the Server Provider section, but now I am not sure what to put in Key and Secret.
I found the answer to this question:
we need to create an IAM user and opt for an API access key and secret.
Also remember to give this user at least full EC2 admin access before initiating the process to create and provision the server via Forge.
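If you prefer the CLI over the console, a rough equivalent of those steps looks like this (the user name is a placeholder, and AmazonEC2FullAccess is assumed as a managed policy that covers the EC2 admin requirement):

$ aws iam create-user --user-name forge
$ aws iam attach-user-policy --user-name forge --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam create-access-key --user-name forge

The last command prints the AccessKeyId and SecretAccessKey pair that goes into Forge's Key and Secret fields.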

EC2 upload failed to S3

I am trying to upload a file from an EC2 instance to S3 bucket and get this error:
[ec2-user@zzzzzzz parsers]$ aws s3 cp file.txt s3://bucket/output/file.txt
upload failed: ./file.txt to s3://bucket/output/file.txt A client error (InvalidAccessKeyId) occurred when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I have already run aws configure on the EC2 instance; the configuration is as follows:
[ec2-user@zzzzz parsers]$ aws configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****************NTr6 config-file
secret_key ****************AFJQ config-file
region us-west-2 config-file ~/.aws/config
What else should I do to make this work?
InvalidAccessKeyId indicates that the Access Key and Secret Key are not valid.
Access Keys (and their corresponding Secret Keys) can be associated with either:
Master (or root) credentials, or
An Identity and Access Management (IAM) user
It is recommended that Master credentials not be used on a daily basis. (See IAM Best Practices.)
If your credentials are associated with an IAM user, you can generate a new set of credentials:
Go to Identity and Access Management (IAM)
Select the User
Manage Access Keys
Create Access Key
A new Access Key and Secret Key will be displayed. Try using them in CLI configuration.
Up to two sets of Access Keys can be associated with a User at any time.
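Once you have the new key pair, re-running aws configure on the instance replaces the stored values (everything entered here is a placeholder):

$ aws configure
AWS Access Key ID [****************NTr6]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [****************AFJQ]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [us-west-2]:
Default output format [None]: json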
It's recommended to use IAM roles instead of IAM access keys for EC2 instances. By simply creating an IAM role with access to S3 and linking it to your EC2 instance, you can list, download, and upload files from and to your S3 bucket(s) based on the role's policy.
It's more secure, and you don't have to configure your AWS credentials at all.
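A rough sketch of that setup with the CLI (role, profile, and instance IDs are placeholders, and ec2-trust-policy.json is assumed to contain the standard EC2 assume-role trust policy):

$ aws iam create-role --role-name s3-upload-role --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam attach-role-policy --role-name s3-upload-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
$ aws iam create-instance-profile --instance-profile-name s3-upload-profile
$ aws iam add-role-to-instance-profile --instance-profile-name s3-upload-profile --role-name s3-upload-role
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=s3-upload-profile

After that, aws s3 cp works from the instance without any keys stored in ~/.aws/credentials.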
