AWS Elastic Beanstalk: the command eb list shows no environments - bash

I am using Elastic Beanstalk and I have created 3 different environments. I used awsebcli. All of a sudden the command eb list doesn't show my environments, which means I am unable to deploy. The error I am getting is: ERROR: This branch does not have a default environment. You must either specify an environment by typing "eb status my-env-name" or set a default environment by typing "eb use my-env-name".
I tried eb status 'my-env-name' and again got an error: ERROR: The environment name 'my-env-name' could not be found. In short, I am unable to use any eb command.

Did you forget to run eb create --single after eb init?
This command creates a new environment. See the EB CLI Command Reference.
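For reference, a minimal sketch of that flow (the application name, platform string, and environment name below are placeholders, not taken from the question):
eb init my-app -p python-3.8 --region us-west-2
eb create --single my-env-name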

The message itself is clear. You haven't set an environment for the branch you are working on.
You can either switch to the branch the environment is configured for, but this means the changes you have in the current branch won't be available on deploy unless you merge those changes, or you can set an environment for the branch you are currently on using the command eb use name-of-your-env. The latter can also be configured in the Elastic Beanstalk configuration file of your application.
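For example, the branch-to-environment mapping lives in the .elasticbeanstalk/config.yml file of the project and could look roughly like this (the branch and application names here are placeholders):
branch-defaults:
  main:
    environment: name-of-your-env
global:
  application_name: my-app
  default_region: us-west-2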
Hope this helps.

Perhaps this may help others. I already had an existing environment on Beanstalk and was setting up a new Mac.
For some reason, eb init did create a file at ~/.aws/config. However, it only had the key and secret. To get it to work, I needed to add the region as well.
# ~/.aws/config
[profile eb-cli]
aws_access_key_id = XXX
aws_secret_access_key = XXX
region = us-west-2
Next, I find my Beanstalk config file in my application (i.e. project/.elasticbeanstalk/config.yml) and ensure that under global it has profile: eb-cli.
# project/.elasticbeanstalk/config.yml
global:
  default_region: us-west-2
  profile: eb-cli
  sc: git
  workspace_type: Application
After making those edits, eb list shows the environment I am expecting and I can do eb deploy again.

I faced the same issue here. It turned out to be a region-related quirk.
In my case, it initially seemed to be related to the way I created the environment. I used the following command to create the env:
eb create --sample -r ap-southeast-1 -im 2 -ix 4 --vpc.elbpublic --vpc.ec2subnets AppSubA,AppSubB --vpc.dbsubnets DbSubA,DbSubB node-express-dev
Note that I created the env in the Singapore region. After that, if I use "eb list", the result is empty. Why? I will explain below.
However, if I use the command like this:
eb create --sample node-express-env
The "eb list" will be able to find the created env.
For the 1st cmd, as I said, the env was created in Singapore. However, before the creation, I didn't specify a region in the "eb init" cmd, so the EB CLI defaulted to us-west-2. That's why it cannot list the created env. To fix it, before the creation, the command
eb init -p "64bit Amazon Linux 2 v5.5.3 running Node.js 16" --region ap-southeast-1
should be performed.
For the 2nd cmd, the env is created in us-west-2 and the EB CLI is in the same region. Both are in the default region. As a result, it can show it.
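If the environment already exists in another region, you can also point the CLI at it without recreating anything, since eb commands accept a --region option, for example:
eb list --region ap-southeast-1
Alternatively, update default_region under global in .elasticbeanstalk/config.yml.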
Hopefully this can help some cases.

Terraform throwing error: error configuring Terraform AWS Provider: The system cannot find the path specified

I was facing an issue running aws commands via the CLI because of a certificate problem. So, as per some blogs, I tried to fix the issue using the setx AWS_CA_BUNDLE "C:\data\ca-certs\ca-bundle.pem" command.
Now, even after removing the AWS_CA_BUNDLE variable from my AWS config file, Terraform keeps throwing the below error on terraform apply.
Error: error configuring Terraform AWS Provider: loading configuration: open C:\data\ca-certs\ca-bundle.pem: The system cannot find the path specified.
Can someone please tell me where Terraform/the AWS CLI is taking this value from and how to remove it? I have tried deleting the entire AWS config and credentials files, and uninstalling and reinstalling the AWS CLI, but the error is still thrown.
If it's set in some system/environment variable, can you please tell me how to reset it to the default value?
The syntax you used to add the ca_bundle variable to the config file is wrong.
Your config file should look like this:
[default]
region = us-east-1
ca_bundle = dev/apps/ca-certs/cabundle-2019mar05.pem
But as I understand it, you want to use the environment variable (AWS_CA_BUNDLE).
AWS_CA_BUNDLE:
Specifies the path to a certificate bundle to use for HTTPS certificate validation.
If defined, this environment variable overrides the value for the profile setting ca_bundle. You can override this environment variable by using the --ca-bundle command line parameter.
I would suggest removing the environment variable (AWS_CA_BUNDLE) and adding ca_bundle to the config file. Then delete the .terraform folder and run terraform init.
Go to the environment variables and delete the AWS_CA_BUNDLE variable that setx created. Shut down the terminal and start it again. Run the commands now and it will work properly.
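As a rough sketch of how to do that from the command prompt (assuming setx created the variable at user scope, which is stored under HKCU\Environment):
reg query "HKCU\Environment" /v AWS_CA_BUNDLE
reg delete "HKCU\Environment" /v AWS_CA_BUNDLE /f
The first command shows whether the value is still stored; the second removes it. Open a new terminal afterwards so the change takes effect.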

Specify an AWS CLI profile in a script when two exist

I'm attempting to use a script which automatically creates snapshots of all EBS volumes on an AWS instance. This script is running on several other instances with no issue.
The current instance already has an AWS profile configured which is used for other purposes. My understanding is I should be able to specify the profile my script uses, but I can't get this to work.
I've added a new set of credentials to the /home/ubuntu/.aws file, putting the following under the default credentials that are already there:
[snapshot_creator]
aws_access_key_id=s;aldkas;dlkas;ldk
aws_secret_access_key=sdoij34895u98jret
In the script I have tried adding AWS_PROFILE=snapshot_creator but when I run it I get the error Unable to locate credentials. You can configure credentials by running "aws configure".
So, I delete my changes to /home/ubuntu/.aws and instead run aws configure --profile snapshot_creator. However after entering all information I get the error [Errno 17] File exists: '/home/ubuntu/.aws'.
So I add my changes to the .aws file again and this time in the script for every single command starting with aws ec2 I add the parameter --profile snapshot_creator, but this time when I run the script I get The config profile (snapshot_creator) could not be found.
How can I tell the script to use this profile? I don't want to change the environment variables for the instance because of the aforementioned other use of AWS CLI for other purposes.
Credentials should be stored in the file /home/ubuntu/.aws/credentials.
The error File exists: '/home/ubuntu/.aws' means aws configure could not create the .aws directory because a plain file with that name already exists. Delete the .aws file and re-run the configure command; it should then create the credentials file under /home/ubuntu/.aws/.
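A minimal sketch of what the end state could look like (the key values are placeholders):
# /home/ubuntu/.aws/credentials
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
[snapshot_creator]
aws_access_key_id = ...
aws_secret_access_key = ...
The script can then either export AWS_PROFILE=snapshot_creator before calling the CLI, or pass the profile on each command, e.g. aws ec2 describe-volumes --profile snapshot_creator.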

How to deploy Django Fixtures to Amazon AWS

I have my app stored on GitHub. To deploy it to Amazon, I use their EB deploy command which takes my git repository and sends it up. It then runs the container commands to load my data.
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
The problem is that I don't want the fixtures in my git. Git should not contain this data since it's shared with other users. How can I get my AWS to load the fixtures some other way?
You can use the old-school way: scp to the EC2 instance.
Go to the EC2 console to see the actual EC2 instance associated with your EB environment (I assume you only have one instance). Write down the public IP, and then connect to the instance like you would with a normal EC2 instance.
For example
scp -i [YOUR_AWS_KEY] [MY_FIXTURE_FILE] ec2-user@[INSTANCE_IP]:[PATH_ON_SERVER]
Note that the username has to be ec2-user.
But I do not recommend this way of deploying the project because you may need to execute the commands manually. It is, however, useful for pulling a fixture down from a live server.
To avoid tracking fixtures in git, I use a simple workaround: create a local branch for EB deployment and track the fixtures there, along with other environment-specific credentials. Such EB branches should never be pushed to the remote git repositories.
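If the fixture file does end up in the deployed bundle (for example via the local deployment branch just described), a hypothetical extra container command could load it after the migration; the file name initial_data.json is only a placeholder:
container_commands:
  03_loaddata:
    command: "source /opt/python/run/venv/bin/activate && python manage.py loaddata initial_data.json"
    leader_only: true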

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried that solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io.
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
+
$ gcloud components update
All components are up to date.
+
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
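If you go the credential-helper route, the usual flow is roughly as follows, assuming docker-credential-gcr is installed and on your PATH (the image name is taken from the question):
docker-credential-gcr configure-docker
docker push gcr.io/kubernetes-test-1367/myapp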
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance without a proper security scope of read/write access to Google Cloud Storage (that's where Google Container Registry stores the images of your containers behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create the new instance, SSH into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command.
I checked
gcloud auth list
and saw that my application, not my personal Google account, was the active account. After setting
gcloud config set account example@gmail.com
I was able to push
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
So I could continue with http://kubernetes.io/docs/hellonode/
I had a similar issue and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable only so many projects with billing. Once I did that it worked.
Also, this could be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project on the
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a docker image to gcr, even though they've authenticated an account that should have the permission to do so, try running gcloud auth configure-docker and pushing again.
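For reference, a minimal sketch of that flow, using the project ID and image name from the question (the local tag myapp is an assumption):
gcloud auth configure-docker
docker tag myapp gcr.io/kubernetes-test-1367/myapp
docker push gcr.io/kubernetes-test-1367/myapp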

AWS Command Line Interface Unable to Locate Credentials - Special Permissions

Okay, so I've encountered an insanely frustrating problem while trying to reach an AWS S3 bucket through the AWS CLI via the command prompt in Windows 7. The AWS CLI is "unable to locate credentials", a.k.a. the config.txt file at C:\Users\USERNAME\.aws\config.txt.
I've tried pointing to it by creating the AWS_CONFIG_FILE environment variable in Control Panel > System > Advanced System Settings > Environment Variables, but no dice. I've also tried all of the above on another Win7 machine. Again, no dice.
What could I be missing here? Are there any special permissions that need to be set for the AWS CLI to access config.txt? Help, before I poke my own eyes out!
The contents of config.txt, in case you're interested, are:
[default]
aws_access_key_id = key id here
aws_secret_access_key = key here
region = us-east-1
There is another way to configure AWS credentials while using the command line tool.
You can supply the credentials via a Windows command instead of editing the file by hand.
Execute the below command from the Windows command prompt:
aws configure
It prompts you to enter the following:
AWS Access Key ID:
AWS Secret Access Key:
Default region name:
Default output format:
See this video tutorial: https://youtu.be/hhXj8lM_jBs
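To verify where the CLI is actually picking each value up from afterwards, aws configure list shows the source of every setting; values can also be set non-interactively, for example:
aws configure list
aws configure set region us-east-1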
Okay, so the config file cannot be a text file (.txt). You should create the file in CMD, and it should be a generic file w/o any extension.
A couple of points on this, as I had similar problems whilst trying to perform an S3 sync.
My findings were as follows.
Remove the spaces between the = and the key value pair (see example below).
The OP specified a [default] section in their example; I got the same error when I removed this section because I did not think it was needed, so it's worth noting that it is needed.
I then reformatted my file as follows and it worked...
[default]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
[deployment-profile]
aws_access_key_id=****
aws_secret_access_key=****
region=eu-west-2
I had to include a blank line at the bottom of my credentials file.
Just posting this really as I struggled for a few hours with vague messages from AWS and these were the solutions that worked for me. Hope that it helps someone.
If, like me, you have a custom IAM user in your credentials file rather than 'default', try setting the AWS_DEFAULT_PROFILE environment variable to the name of your IAM user's profile, and then running your commands.
[user1]
aws_access_key_id=
aws_secret_access_key=
set AWS_DEFAULT_PROFILE=user1
aws <command>
Alternatively you can specify the --profile variable each time you use the cli:
aws <command> --profile user1
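The set syntax above is for the Windows command prompt; on Linux/macOS the equivalent would be an export (note, as a general aside, that newer CLI versions also honour AWS_PROFILE):
export AWS_DEFAULT_PROFILE=user1
aws <command>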
