Okay, so I've encountered an insanely frustrating problem while trying to reach an AWS S3 bucket through the AWS CLI via the command prompt in Windows 7. The AWS CLI is "unable to locate credentials", a.k.a. the config.txt file at C:\Users\USERNAME\.aws\config.txt.
I've tried pointing to it by creating the AWS_CONFIG_FILE environment variable in Control Panel > System > Advanced System Settings > Environment Variables, but no dice. I've also tried all of the above on another Win7 machine. Again, no dice.
What could I be missing here? Are there any special permissions that need to be set for the AWS CLI to access config.txt? Help, before I poke my own eyes out!
The contents of config.txt, in case you're interested, are:
[default]
aws_access_key_id = key id here
aws_secret_access_key = key here
region = us-east-1
There is another way to configure AWS credentials when using the command line tool.
You can pass the credentials via a command instead of storing them in a file.
Run the command below from the Windows command prompt:
aws configure
It will prompt you to enter the following:
AWS Access Key ID:
AWS Secret Access Key:
Default region name:
Default output format:
See this video tutorial: https://youtu.be/hhXj8lM_jBs
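Once it is configured, you can sanity-check what the CLI actually resolved. Two standard AWS CLI commands help here:
aws configure list
aws sts get-caller-identity
The first shows which profile, keys, and region the CLI picked up and from where; the second confirms the credentials actually authenticate against AWS.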
Okay, so the config file cannot be a text file (.txt). You should create the file from CMD, and it should be a generic file without any extension.
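As a sketch, you can create the extensionless file straight from CMD like this (the key values and region are placeholders):
mkdir %USERPROFILE%\.aws
(
echo [default]
echo aws_access_key_id = YOUR_KEY_ID
echo aws_secret_access_key = YOUR_SECRET_KEY
echo region = us-east-1
) > %USERPROFILE%\.aws\config
This writes a plain file named config with no extension, which is where the CLI looks by default.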
A couple of points on this as I had similar problems whilst trying to perform an S3 sync.
My findings were as follows.
Remove the spaces between the = and the key/value pair (see the example below).
The OP specified a [default] section in their example. I got the same error after I removed this section (I did not think it was needed), so it's worth noting that it is required.
I then rewrote my file as follows and it worked:
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
I had to include a blank line at the bottom of my credentials file.
Just posting this really as I struggled for a few hours with vague messages from AWS and these were the solutions that worked for me. Hope that it helps someone.
If like me you have a custom IAM user in your credentials file rather than 'default', try setting the AWS_DEFAULT_PROFILE env variable to the name of your IAM user, and then running commands.
[user1]
aws_access_key_id=
aws_secret_access_key=
set AWS_DEFAULT_PROFILE=user1
aws <command>
Alternatively, you can specify the --profile option each time you use the CLI:
aws <command> --profile user1
I was facing a certificate issue when running AWS commands via the CLI, so, following some blog posts, I tried to fix it using the command setx AWS_CA_BUNDLE "C:\data\ca-certs\ca-bundle.pem".
Now, even after removing the AWS_CA_BUNDLE variable from my AWS config file, Terraform keeps throwing the error below on terraform apply.
Error: error configuring Terraform AWS Provider: loading configuration: open C:\data\ca-certs\ca-bundle.pem: The system cannot find the path specified.
Can someone please tell me where Terraform/the AWS CLI is taking this value from, and how to remove it? I have tried deleting the entire AWS config and credentials files, and uninstalling and reinstalling the AWS CLI, but the error is still thrown.
If it's set in some system/environment variable, can you please tell me how to reset it to its default value?
The syntax for adding the ca_bundle variable to the config file is wrong.
Your config file should look like this:
[default]
region = us-east-1
ca_bundle = dev/apps/ca-certs/cabundle-2019mar05.pem
But, as I understand it, you want to use the environment variable (AWS_CA_BUNDLE).
AWS_CA_BUNDLE:
Specifies the path to a certificate bundle to use for HTTPS certificate validation.
If defined, this environment variable overrides the value for the profile setting ca_bundle. You can override this environment variable by using the --ca-bundle command line parameter.
I would suggest removing the environment variable (AWS_CA_BUNDLE) and adding ca_bundle to the config file. Then delete the .terraform folder and run terraform init.
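Since setx persists the variable in the registry under HKCU\Environment, a command-line sketch of the cleanup (run from the project directory; reg delete and rmdir are standard Windows commands) would be:
reg delete HKCU\Environment /v AWS_CA_BUNDLE /f
rmdir /s /q .terraform
terraform init
Open a new terminal afterwards, since already-running sessions keep the old environment.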
Go to Environment Variables and delete the AWS_CA_BUNDLE variable created by setx. Shut the terminal down, start it again, and run the commands; now it will work properly.
I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file by adding the following under the default credentials which are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator, but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So, I deleted my changes to /home/ubuntu/.aws and instead ran aws configure --profile snapshot_creator. However, after entering all the information I got the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I added my changes to the .aws file again, and this time I added the parameter --profile snapshot_creator to every single command in the script starting with aws ec2; but this time when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the instance's environment variables, because the AWS CLI is already used there for other purposes.
Credentials should be stored in the file /home/ubuntu/.aws/credentials. Note that .aws should be a directory, not a file.
File exists: '/home/ubuntu/.aws'
I guess this error occurs because aws configure couldn't create that directory: a regular file named .aws already exists at that path. Delete the .aws file and re-run the configure command; it should then create the credentials file under /home/ubuntu/.aws/.
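A quick sketch of the full fix on the instance (paths are the ones from the question; the profile name is yours):
rm /home/ubuntu/.aws
aws configure --profile snapshot_creator
cat /home/ubuntu/.aws/credentials
After this, the [snapshot_creator] section lives in the credentials file inside the directory, and both --profile snapshot_creator and an exported AWS_PROFILE=snapshot_creator in the script should be able to find it.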
I am trying to create an "aws_configure.bat" file which will run AWS commands. I need to configure the "aws_configure.bat" file as a Windows task. I created my script with the content below.
aws configure set AWS_ACCESS_KEY_ID <mykey>
aws configure set aws_secret_access_key <myskey>
aws configure set region us-west-2
aws dynamodb list-tables
When I try to run this script, it prints only the first line in the cmd window. Can someone please suggest what the problem is here? Why is my script not able to run the AWS CLI commands? (I have installed the AWS CLI on my system, and when I run these commands directly in a cmd window everything works fine.)
You should consider creating and configuring your AWS credentials outside of your batch file, then referencing the named profile from the batch file.
Run aws configure --profile myprofile, and provide the information required.
Then from your batch file, call aws dynamodb list-tables --profile myprofile.
To set up the preferred/default profile, set AWS_PROFILE=myprofile in the system environment. With this method, you will not need to reference the profile in the batch file.
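As a sketch, the resulting batch file can be as small as this (myprofile is a placeholder for whatever profile name you configured):
@echo off
call aws dynamodb list-tables --profile myprofile
One note on the original symptom: if your aws command resolves to the aws.cmd wrapper script (as with pip-based installs), running it from a batch file without call transfers control and never returns, which would explain why only the first line of the script executed. Prefixing each aws line with call avoids that; if aws resolves to an .exe, the call is harmless.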
The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We have set up an AWS user with an access key ID, and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_KEY variables. Unfortunately, as soon as we do this we are unable to access, upload, or change files on our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured CLI by creating a user and then creating key id and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed files adapter using command
npm install --save @parse/s3-files-adapter
In my parse-server's index.js, I added the files adapter:
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();
var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});
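For local testing, the same setup can also be driven purely by the environment variables the deprecation notice names (all values below are placeholders; S3_BUCKET is the variable from step 3 above):
export AWS_ACCESS_KEY_ID=your-key-id
export AWS_SECRET_ACCESS_KEY=your-secret-key
export S3_BUCKET=your-bucket-name
The adapter should then pick the credentials up through the AWS SDK's default credential chain, so nothing needs to be hard-coded in index.js.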
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the aws configure command there, or if you are running everything locally.
However, I was asking about Heroku and this goes for any server environment where you can set ENV variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your file adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then on Heroku.com create the config vars, or set them with the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access and you just have to tell it what file adapter is installed in the index.js file. It will handle the rest. If this isn't working it'll be worth testing the IAM profile setup and making sure it's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to use your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get Your AWS Credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set Up Your Bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM Best Practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.
I am using Elastic Beanstalk and I have created 3 different environments using awsebcli. All of a sudden the command eb list doesn't show my environments, because of which I am unable to deploy. The error I am getting is ERROR: This branch does not have a default environment. You must either specify an environment by typing "eb status my-env-name" or set a default environment by typing "eb use my-env-name".
I tried eb status 'my-env-name' and again got an error: ERROR: The environment name 'my-env-name' could not be found. In short: I am unable to use any eb command.
Did you forget to run eb create --single after eb init?
This command creates a new environment. See the EB CLI Command Reference.
The message itself is clear. You haven't set an environment for the branch you are working on.
You can either switch to the branch where it's configured (but this means the changes you have in the current branch won't be available on deploy unless you merge those changes), or you can set an environment for the branch you are currently on using the command eb use name-of-your-env. The latter can also be configured in the Elastic Beanstalk configuration file of your application.
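For example (the environment name is a placeholder):
eb use name-of-your-env
eb list
After eb use, eb list should mark that environment as the default for the current branch.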
Hope this helps.
Perhaps this may help others. I already had an existing environment on Beanstalk and was setting up a new Mac.
For some reason, eb init did create a file at ~/.aws/config. However, it only had the key and secret. To get it to work, I needed to add the region as well.
# ~/.aws/config
[profile eb-cli]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = us-west-2
Next, I found my Beanstalk config file in my application (i.e. project/.elasticbeanstalk/config.yml) and ensured that under global it has profile: eb-cli.
# project/.elasticbeanstalk/config.yml
global:
  default_region: us-west-2
  profile: eb-cli
  sc: git
  workspace_type: Application
After making those edits, eb list shows the environment I am expecting and I can do eb deploy again.
I faced the same issue here. It turned out to be a region-related trick.
In my case, it initially seemed related to the way I created the environment. I used the following command to create the env:
eb create --sample -r ap-southeast-1 -im 2 -ix 4 --vpc.elbpublic --vpc.ec2subnets AppSubA,AppSubB --vpc.dbsubnets DbSubA,DbSubB node-express-dev
Note that I created the env in the Singapore region. After that, if I used "eb list", the result was empty. Why? I will touch on that later.
However, if I use the command like this:
eb create --sample node-express-env
The "eb list" will be able to find the created env.
For the 1st cmd, as I said, the env was created in Singapore. However, I didn't specify a region in the "eb init" cmd before the creation, so the EB CLI defaulted to us-west-2. That's why it cannot list the created env. To fix it, the command
eb init -p "64bit Amazon Linux 2 v5.5.3 running Node.js 16" --region ap-southeast-1
should be performed before the creation.
For the 2nd cmd, the env was created in us-west-2 and the EB CLI is also in us-west-2; both are in the default region. As a result, it can show it.
Hopefully this can help some cases.