I am running into problems trying to run a MapReduce job on AWS via the command line. I have to perform a large set of steps (approx. 100) that are all chained to each other. Since I am not looking forward to configuring that by hand in the AWS graphical interface, I am trying to get it done with the CLI.
However, even the simplest command does not work:
$ aws emr list-clusters
hostname 'elasticmapreduce.us-west-1.amazonaws.com' doesn't match u'us-west-1.elasticmapreduce.amazonaws.com'
On S3 my configuration seems to work fine, since this command creates the bucket without any problems:
$ aws s3 mb s3://randombigdatabucket
These are my configurations:
$ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************fooo  shared-credentials-file
secret_key     ****************fooo  shared-credentials-file
    region                us-west-1      config-file    ~/.aws/config
I hope somebody can help me out with this one!
Try installing AWS CLI v1.6.6 or later.
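If the CLI was installed with pip, upgrading is usually enough (a minimal sketch, assuming a pip-based install):
$ pip install --upgrade awscli
$ aws --version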
I have a Jenkins Job to start and stop AWS EC2 instances.
The profile is probably misconfigured, and I'm stuck at this error:
botocore.exceptions.ProfileNotFound: The config profile xxxx could not be found
I am using this command through an Execute Shell build step:
aws ec2 stop-instances --region $AWS_DEFAULT_REGION --profile $AWS_PROFILE --instance-ids $INSTANCE
Any suggestions to modify the job or resolve this error would be appreciated.
Please check the profile name you passed in the command. It should be configured in your credentials file, usually found at ~/.aws/credentials
You can follow this guide (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) to set up a profile.
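As a rough sketch (the profile name xxxx and the key values are only placeholders), the credentials file should contain a section whose name matches the value you pass to --profile:
[xxxx]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>
You can also create that section interactively:
$ aws configure --profile xxxx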
Best of luck
I am trying to use Amazon's ECS CLI to create a cluster. I keep getting the error:
reason="The key pair 'my-key-pair' does not exist" resourceType="AWS::AutoScaling::LaunchConfiguration"
I have also run:
ecs-cli configure profile --profile-name grantspilsbury --access-key foo --secret-key bar
ecs-cli configure --cluster cluster_test --region us-east-1 --config-name myclusterconfig
I have added my-key-pair to ECS and to EC2.
The full log is:
~ $ ecs-cli up --keypair my-key-pair --capability-iam --size 2 --instance-type t2.small --force
INFO[0002] Created cluster cluster=default region=us-east-1
INFO[0003] Waiting for your CloudFormation stack resources to be deleted...
INFO[0003] Cloudformation stack status stackStatus="DELETE_IN_PROGRESS"
INFO[0038] Waiting for your cluster resources to be created...
INFO[0038] Cloudformation stack status stackStatus="CREATE_IN_PROGRESS"
INFO[0101] Cloudformation stack status stackStatus="CREATE_IN_PROGRESS"
INFO[0164] Cloudformation stack status stackStatus="CREATE_IN_PROGRESS"
ERRO[0197] Failure event reason="The key pair 'my-key-pair' does not exist" resourceType="AWS::AutoScaling::LaunchConfiguration"
FATA[0197] Error executing 'up': Cloudformation failure waiting for 'CREATE_COMPLETE'. State is 'ROLLBACK_IN_PROGRESS'
I ran into the same issue. My problem was that I was giving it the full path to the .pem file rather than the name of the key pair as it appears in the EC2 console. Turning
ecs-cli up --keypair /home/me/keyPair.pem --capability-iam --size 2 --instance-type t2.medium --cluster-config ec2-tutorial --force
into
ecs-cli up --keypair keyPair --capability-iam --size 2 --instance-type t2.medium --cluster-config ec2-tutorial --force
works, as long as there is a key pair in EC2 named keyPair.
My problem was that I was passing the filename (keypair.pem) instead of the name of the key pair on AWS. Make sure you pass the key pair name as you see it in AWS, not the name of the file.
It's possible that your key is in a different region from the one where you're attempting to create the cluster. Jeff's answer gave me the clue that my key pair was in the default region (Ohio), but I was creating my instances in my local region.
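To check which region actually holds the key pair, you can list key pairs per region (the region names here are only examples):
$ aws ec2 describe-key-pairs --region us-east-1
$ aws ec2 describe-key-pairs --region us-east-2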
I was following this tutorial when I ran into the same issue:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html
It had nothing to do with the region; the problem was that my key pair hadn't been created in EC2 yet. Maybe it is obvious, but if you are an AWS beginner like me, the tutorial instructions are not clear about this.
Previous step: Create Key-Pair
If you haven't already done it, first you have to create the key pair, which is stored in a ".pem" file. Following these instructions in the AWS Console was quite easy: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
Solution:
The ".pem" file has to be uploaded. Can be done using the AWS console, following the menu:
EC2 > Network & Security > Key Pairs > "Create Key Pair"
Menu URL: https://eu-central-1.console.aws.amazon.com/ec2/v2/home?region=eu-central-1#CreateKeyPair:
(Be careful: the region appears twice in this URL.)
As described at:
https://www.cloudbooklet.com/how-to-add-new-user-with-key-pair-in-aws-ec2/
That means the key pair name does not exist in your EC2 account.
Simply create it as per the AWS guide.
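It can also be created from the CLI; a minimal sketch (the key name and region are placeholders, and the region must match the one where the cluster is brought up):
$ aws ec2 create-key-pair --key-name my-key-pair --region us-east-1 \
    --query 'KeyMaterial' --output text > my-key-pair.pem
$ chmod 400 my-key-pair.pem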
I have a shell script which needs to be installed on over 100 Ubuntu instances/servers. What is the best way to install the same script on all instances without logging into each one?
You can use AWS Systems Manager. According to the AWS documentation:
You can send commands to tens, hundreds, or thousands of instances by using the targets parameter (the Select Targets by Specifying a Tag option in the Amazon EC2 console). The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you execute the command, the system locates and attempts to run the command on all instances that match the specified tags.
You can target instances by tag:
aws ssm send-command --document-name name --targets Key=tag:tag_name,Values=tag_value [...]
or target instance IDs:
aws ssm send-command --document-name name --targets Key=instanceids,Values=ID1,ID2,ID3 [...]
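For the actual script installation, a sketch using the built-in AWS-RunShellScript document could look like this (the tag key/value and script URL are placeholders; the instances also need the SSM agent and an instance profile with SSM permissions):
$ aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets Key=tag:Environment,Values=ubuntu-fleet \
    --parameters 'commands=["curl -sSo /usr/local/bin/myscript.sh https://example.com/myscript.sh","chmod +x /usr/local/bin/myscript.sh"]' \
    --comment "install myscript.sh on all tagged instances"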
Read the AWS documentation for details.
Thanks
You have several different options when trying to accomplish this task.
As Kush mentioned, AWS Systems Manager is great, but it is a tightly coupled AWS service.
Packer - You could use Packer to create an AMI of the servers with the script already installed, or with whatever the script does already baked in at build time.
Configuration management:
Ansible/Puppet/Chef - These tools allow you to manage thousands of servers with only a couple of commands. My preference would be Ansible: it is lightweight, the syntax is plain YAML, it connects over SSH, and it still lets you push plain shell scripts if need be.
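As an illustration of the Ansible route, a single ad-hoc command can copy a local script to every host in an inventory and run it over SSH (the inventory file and script path are made up for the example):
$ ansible all -i hosts.ini -m script -a "./install.sh" --become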
I am trying to launch a Spark cluster on an EC2 instance that I created in a development AWS account. I was able to successfully connect to the EC2 instance using the AWS CLI as ec2-user. I used the existing VPC and AMI to create this EC2 instance. I unzipped the Spark files on the EC2 instance and, using the private key, tried starting the cluster with the commands below:
export AWS_SECRET_ACCESS_KEY=xxx
export AWS_ACCESS_KEY_ID=xxx
/home/ec2-user/spark-1.2.0/ec2$ ./spark-ec2 -k test -i /home/ec2-user/identity_files/test.pem launch test-spark-cluster
I got the error:
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
InvalidKeyPair.NotFound: The key pair 'test' does not exist (xxx)
I thought this might be due to a region issue, so I used the region and zone parameters while launching Spark:
/home/ec2-user/spark-1.2.0/ec2$ ./spark-ec2 -k test -i /home/ec2-user/identity_files/test.pem -r us-west-2 -z us-west-2a launch test-spark-cluster
However, when I run this, I encounter a different error:
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
VPCIdNotSpecified: No default VPC for this user (xxx)
How can I resolve this issue?
I am no expert in this area, but I would recommend setting more parameters on your script call, something like:
./spark-ec2 -k test \
  -i /home/ec2-user/identity_files/test.pem \
  -s 5 \
  --instance-type=m3.medium \
  --region=eu-west-1 \
  --spark-version=1.2.0 \
  launch myCluster
The -s flag refers to the number of instances to be created. Furthermore, you might want to check the following; pay special attention to the last one:
The key pair test exists on your account
The key pair test.pem is present in the EC2 console
The region for both key pair and instances is the same
Searching the web, I have found that most of the errors related to key pairs not being found are caused by region mismatches.
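If the key pair already exists in another region, one option (a sketch; the file names are assumptions, and depending on your CLI version you may need file:// instead of fileb://) is to import its public key into the region where the cluster is launched:
$ ssh-keygen -y -f test.pem > test.pub
$ aws ec2 import-key-pair --key-name test --region us-west-2 \
    --public-key-material fileb://test.pub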
I am an AWS newbie, and I'm trying to run Hadoop on EC2 via Cloudera's AMI. I installed the AMI, downloaded the cloudera-hadoop-for-ec2-tools, and now I'm trying to configure
hadoop-ec2-env.sh
It is asking for the following:
AWS_ACCOUNT_ID
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
EC2_KEYDIR
PRIVATE_KEY_PATH
When running:
./hadoop-ec2 launch-cluster my-cluster 10
I'm getting:
AWS was not able to validate the provided access credentials
Firstly, I have the first 3 attributes for my own account. This is a corporate account, and I received an email with the access key ID and secret access key for my email. Is it possible that my account doesn't have the proper permissions to do what is needed here? Exactly why does this script need my credentials? What does it need to do?
Secondly, where is the EC2 key dir? I've uploaded the key.pem file that Amazon created for me, hard-coded its path into PRIVATE_KEY_PATH, and ran chmod 400 on the .pem file. Is that the correct key that this script needs?
Any help is appreciated!
Sam
The Cloudera EC2 tools rely heavily on the Amazon EC2 API tools. Therefore, you must do the following:
1) Download amazon ec2 api tools from http://aws.amazon.com/developertools/351
2) Download cloudera ec2 tools from http://cloudera-packages.s3.amazonaws.com/cloudera-for-hadoop-on-ec2-0.3.0.tar.gz
3) Set the following environment variables (I am only giving Unix-based examples):
export EC2_HOME=<path-to-tools-from-step-1>
export PATH=$PATH:$EC2_HOME/bin
export PATH=$PATH:<path-to-cloudera-ec2-tools>/bin
export EC2_PRIVATE_KEY=<path-to-private-key.pem>
export EC2_CERT=<path-to-cert.pem>
4) In cloudera-ec2-tools/bin, set the following variables:
AWS_ACCOUNT_ID=<amazon-acct-id>
AWS_ACCESS_KEY_ID=<amazon-access-key>
AWS_SECRET_ACCESS_KEY=<amazon-secret-key>
EC2_KEYDIR=<dir-where-the-ec2-private-key-and-ec2-cert-are>
KEY_NAME=<name-of-ec2-private-key>
And then run
$ hadoop-ec2 launch-cluster my-hadoop-cluster 10
This will create a Hadoop cluster called "my-hadoop-cluster" with 10 nodes spread across multiple EC2 machines.