Terraform throwing error: error configuring Terraform AWS Provider: The system cannot find the path specified - Windows

I was facing a certificate issue when running AWS commands via the CLI, so, following some blogs, I tried to fix it with the command setx AWS_CA_BUNDLE "C:\data\ca-certs\ca-bundle.pem".
Now, even after removing the AWS_CA_BUNDLE variable from my aws configure file, Terraform keeps throwing the error below on terraform apply:
Error: error configuring Terraform AWS Provider: loading configuration: open C:\data\ca-certs\ca-bundle.pem: The system cannot find the path specified.
Can someone please tell me where Terraform/the AWS CLI is taking this value from and how to remove it? I have tried deleting the entire AWS config and credentials files, but the error is still thrown; I have also uninstalled and reinstalled the AWS CLI.
If it's set in some system/environment variable, can you please tell me how to reset it to its default value?

The syntax you used to add the ca_bundle variable to the config file is wrong.
Your config file should look like this:
[default]
region = us-east-1
ca_bundle = dev/apps/ca-certs/cabundle-2019mar05.pem
But as I understand it, you want to use the environment variable (AWS_CA_BUNDLE).
AWS_CA_BUNDLE:
Specifies the path to a certificate bundle to use for HTTPS certificate validation.
If defined, this environment variable overrides the value for the profile setting ca_bundle. You can override this environment variable by using the --ca-bundle command line parameter.
I would suggest removing the environment variable (AWS_CA_BUNDLE) and adding ca_bundle to the config file. Then delete the .terraform folder and run terraform init.
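A minimal sketch of that sequence on Windows (cmd), assuming your Terraform project is in the current directory:
:: Check whether the variable is still visible to the current shell
echo %AWS_CA_BUNDLE%
:: Clear it for this session, then re-initialize Terraform
set AWS_CA_BUNDLE=
rmdir /s /q .terraform
terraform init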

Go to Environment Variables and delete the AWS_CA_BUNDLE variable created by setx. Close the terminal and start it again. Run the commands now and it will work properly.
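Because setx persists the value in the registry, clearing it in an open shell is not enough. A sketch of a command-line alternative to the Environment Variables dialog, assuming the variable was set per-user (open a new terminal afterwards):
:: Deletes the persisted user-level variable that setx created
reg delete HKCU\Environment /v AWS_CA_BUNDLE /f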

Related

AWS S3 authentication from Windows

I am using Pentaho (8.1) from a Windows environment (remote desktop).
To upload files to S3 I am using the config & credentials files.
When I use the default file locations, %USERPROFILE%\.aws\config and %USERPROFILE%\.aws\credentials, it works fine.
I don't want every user to handle the credentials file manually, so I would like to use the same location for all users.
I have set these environment variables:
AWS_SHARED_CREDENTIALS_FILE = D:\data\.aws\credentials
AWS_CONFIG_FILE = D:\data\.aws\config
But it looks like this location isn't picked up correctly.
I am sure that the files in %USERPROFILE% are the ones actually being used. I have also done a full restart after changing the variables, but it doesn't help.
Is there something I am missing from the configuration?
If you are willing to set environment variables, then you can simply put the credentials in environment variables for each user:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
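For example, a sketch using setx so the values persist for the current Windows user (the key values below are placeholders):
:: Placeholder credentials; substitute each user's real keys
setx AWS_ACCESS_KEY_ID "AKIAXXXXXXXXXXXXXXXX"
setx AWS_SECRET_ACCESS_KEY "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"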

Specify an AWS CLI profile in a script when two exist

I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file by adding the following under the default credentials which are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator, but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So, I delete my changes to /home/ubuntu/.aws and instead run aws configure --profile snapshot_creator. However, after entering all the information I get the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I add my changes to the .aws file again and this time in the script for every single command starting with aws ec2 I add the parameter --profile snapshot_creator, but this time when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the environment variables for the instance because of the aforementioned other use of AWS CLI for other purposes.
Credentials should be stored in the file /home/ubuntu/.aws/credentials.
As for the error:
File exists: '/home/ubuntu/.aws'
I would guess this happens because aws configure couldn't create the directory: .aws already exists as a regular file. Can you delete the .aws file and re-run the configure command? It should then create the credentials file under /home/ubuntu/.aws/.
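A sketch of that fix, assuming the stray .aws is a regular file (back it up first if it contains your default credentials):
# Replace the .aws *file* with a proper directory, then configure the profile
rm /home/ubuntu/.aws
mkdir /home/ubuntu/.aws
aws configure --profile snapshot_creator
# In the script, pass the profile explicitly on each call, e.g.:
aws ec2 describe-snapshots --profile snapshot_creator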

Issue in setting system variable ES_PATH_CONF for elasticsearch

I have cloned the elasticsearch project from GitHub to my local machine, built it successfully, and imported it into Eclipse.
When I try to run the main() method in the org.elasticsearch.bootstrap.Elasticsearch class (which is the entry point for starting Elasticsearch), I get the following error:
ERROR: the system property [es.path.conf] must be set
I tried setting the system variable ES_PATH_CONF to
E:\Elasticsearch\Github\elasticsearch-master\distribution\src\main\resources\config
but it's not working; I am still getting the same error. Is the above location for the ES_PATH_CONF variable correct? Is there any other way to solve this?
The location of the config directory can be set with following VM Option:
-Des.path.conf=/path/to/config/
Also, in your situation, it might be necessary to set the Elasticsearch path.home variable:
-Des.path.home=/path/to/elasticsearch/home/dir
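In Eclipse these go under Run Configurations > Arguments > VM arguments. A sketch using the paths from the question (pointing path.home at the repository root is an assumption):
-Des.path.conf=E:\Elasticsearch\Github\elasticsearch-master\distribution\src\main\resources\config
-Des.path.home=E:\Elasticsearch\Github\elasticsearch-master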

AWS Elastic Beanstalk : the command eb list shows no environments

I am using Elastic Beanstalk and I have created 3 different environments using awsebcli. All of a sudden the command eb list doesn't show my environments, so I am unable to deploy. The error I am getting is: ERROR: This branch does not have a default environment. You must either specify an environment by typing "eb status my-env-name" or set a default environment by typing "eb use my-env-name".
I tried eb status 'my-env-name' and again got an error: ERROR: The environment name 'my-env-name' could not be found. In short, I am unable to use any eb command.
Did you forget to run eb create --single after eb init?
This command creates a new environment. See the EB CLI Command Reference.
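A minimal sketch, assuming a fresh workspace (the environment name is hypothetical):
eb init
eb create --single my-new-env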
The message itself is clear: you haven't set an environment for the branch you are working on.
You can either switch to the branch where it is configured (though this means the changes in your current branch won't be available on deploy unless you merge them), or you can set an environment for the branch you are currently on with the command eb use name-of-your-env. The latter can also be configured in the Elastic Beanstalk configuration file of your application.
Hope this helps.
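For example, a sketch of the second option (the environment name is hypothetical):
eb use name-of-your-env
eb status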
Perhaps this may help others. I already had an existing environment on Beanstalk and was setting up a new Mac.
For some reason, eb init did create a file at ~/.aws/config; however, it only had the key and secret. To get it to work, I needed to add the region as well.
# ~/.aws/config
[profile eb-cli]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = us-west-2
Next, I found my Beanstalk config file in my application (i.e. project/.elasticbeanstalk/config.yml) and ensured that under global it has profile: eb-cli.
# project/.elasticbeanstalk/config.yml
global:
  default_region: us-west-2
  profile: eb-cli
  sc: git
  workspace_type: Application
After making those edits, eb list shows the environment I am expecting and I can do eb deploy again.
I faced the same issue here. It turned out to be a region-related trick.
In my case, it initially seemed related to the way I created the environment. I used the following command to create the env:
eb create --sample -r ap-southeast-1 -im 2 -ix 4 --vpc.elbpublic --vpc.ec2subnets AppSubA,AppSubB --vpc.dbsubnets DbSubA,DbSubB node-express-dev
Note that I created the env in the Singapore region. After that, if I use "eb list", the result is empty. Why? I will touch upon it later.
However, if I use the command like this:
eb create --sample node-express-env
The "eb list" will be able to find the created env.
For the 1st command, as I said, the environment was created in Singapore. However, I didn't specify a region in "eb init" before the creation, so the EB CLI defaulted to us-west-2. That's why it cannot list the created env. To fix it, run the following before creating the environment:
eb init -p "64bit Amazon Linux 2 v5.5.3 running Node.js 16" --region ap-southeast-1
For the 2nd command, the environment was created in us-west-2 and the EB CLI is in the same region; both are in the default region, so it can show it.
Hopefully this can help some cases.
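If you want to check which region your EB CLI workspace is pointed at, it is recorded in the workspace config file; a sketch, run from the project root:
# default_region for the workspace lives here
cat .elasticbeanstalk/config.yml
# If it doesn't match the region the environment lives in, re-initialize:
eb init --region ap-southeast-1
eb list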

AWS Command Line Interface Unable to Locate Credentials - Special Permissions

Okay, so I've encountered an insanely frustrating problem while trying to reach an AWS S3 bucket through the AWS CLI via the command prompt in Windows 7. The AWS CLI is "unable to locate credentials", i.e. the config.txt file at C:\Users\USERNAME\.aws\config.txt.
I've tried pathing to it by creating the AWS_CONFIG_FILE environment variable in Control Panel > System > Advanced System Settings > Environment Variables, but no dice. I've also tried all of the above on another Win7 machine. Again, no dice.
What could I be missing here? Are there any special permissions that need to be set for the AWS CLI to access config.txt? Help, before I poke my own eyes out!
The contents of config.txt, in case you're interested, are:
[default]
aws_access_key_id = key id here
aws_secret_access_key = key here
region = us-east-1
There is another way to configure AWS credentials when using the command line tool: you can pass them in via a command instead of editing the file by hand.
Execute the below command from the Windows command prompt:
aws configure
It prompts you to enter the following:
AWS Access Key ID:
AWS Secret Access Key:
Default region name:
Default output format:
See this video tutorial: https://youtu.be/hhXj8lM_jBs
Okay, so the config file cannot be a text file (.txt). You should create the file in CMD, and it should be a generic file without any extension.
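A sketch of fixing that from CMD, assuming Explorer silently appended .txt when the file was created:
:: Rename the file so it has no extension
cd %USERPROFILE%\.aws
ren config.txt config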
A couple of points on this as I had similar problems whilst trying to perform an S3 sync.
My findings were as follows.
Remove the spaces between the = and the key-value pair (see the example below).
The OP specified a [default] section in their example; I got the same error when I removed that section because I did not think it was needed, so it's worth noting that it is required.
I then reformed my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
I had to include a blank line at the bottom of my credentials file.
Just posting this really as I struggled for a few hours with vague messages from AWS and these were the solutions that worked for me. Hope that it helps someone.
If, like me, you have a custom IAM user in your credentials file rather than 'default', try setting the AWS_DEFAULT_PROFILE env variable to the name of that profile, and then running commands.
[user1]
aws_access_key_id=
aws_secret_access_key=
set AWS_DEFAULT_PROFILE=user1
aws <command>
Alternatively you can specify the --profile variable each time you use the cli:
aws <command> --profile user1
