How does "knife ec2 server create"'s expansion of --run-list work? - amazon-ec2

I'm unable to bootstrap my server because "knife ec2 server create" keeps expanding my run-list to "roles".
knife ec2 server create \
-V \
--run-list 'role[pgs]' \
--environment $1 \
--image $AMI \
--region $REGION \
--flavor $PGS_INSTANCE_TYPE \
--identity-file $SSH_KEY \
--security-group-ids $PGS_SECURITY_GROUP \
--subnet $PRIVATE_SUBNET \
--ssh-user ubuntu \
--server-connect-attribute private_ip_address \
--availability-zone $AZ \
--node-name pgs \
--tags VPC=$VPC
This consistently fails because 'role[pgs]' is expanded to 'roles'. Why is this? Is there some escaping or alternative method I can use?
I'm currently working around this by bootstrapping with an empty run-list and then overriding the run-list by running chef-client once the node is registered.
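A rough sketch of that workaround (hedged: knife node run_list set is standard Chef tooling, and pgs is the node name from the command above):
knife ec2 server create ...                  # same flags as above, but with no --run-list
knife node run_list set pgs 'role[pgs]'      # attach the run-list once the node is registered
ssh ubuntu@<private-ip> 'sudo chef-client'   # converge with the new run-list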

This is a feature of bash: [] is a glob (wildcard) character class, so an unquoted role[pgs] matches any file named rolep, roleg, or roles in the working directory. You can escape the brackets with "\" or quote the whole argument.
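You can see the expansion in isolation with a minimal sketch in any bash shell:
cd "$(mktemp -d)"
touch roles          # a file named "roles" in the working directory
echo role[pgs]       # unquoted: the glob matches and prints "roles"
echo 'role[pgs]'     # quoted: printed literally
echo role\[pgs\]     # escaped: printed literally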


AWS list instances that are pending and running with one CLI call

I have an application which needs to know how many EC2 instances I have in both the pending and running states. I could run the following two commands and sum the results, but that means awscli makes two requests, which is both slow and probably bad practice.
aws ec2 describe-instances \
--filters Name=instance-state-name,Values="running" \
--query 'Reservations[*].Instances[*].[InstanceId]' \
--output text \
| wc -l
aws ec2 describe-instances \
--filters Name=instance-state-name,Values="pending" \
--query 'Reservations[*].Instances[*].[InstanceId]' \
--output text \
| wc -l
Is there a way of combining these two queries into one, or some other way of getting the total count of pending and running instances with a single request?
Cheers!
You can supply several values for one filter using the shorthand comma syntax:
Values=running,pending
The two queries then combine into one:
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running,pending" \
--query 'Reservations[*].Instances[*].[InstanceId]' \
--output text \
| wc -l
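If you would rather not pipe to wc -l at all, the counting can be done inside the --query itself; length() is a standard JMESPath function, so something like this should work (a sketch using the same filter):
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running,pending" \
--query 'length(Reservations[].Instances[])' \
--output text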

How to add Infrastructure Agent to Heroku applications

I am attempting to have the New Relic Infrastructure Agent monitor my Heroku applications.
The documentation says to run the following:
docker run \
-d \
--name newrelic-infra \
--network=host \
--cap-add=SYS_PTRACE \
--privileged \
--pid=host \
-v "/:/host:ro" \
-v "/var/run/docker.sock:/var/run/docker.sock" \
-e NRIA_LICENSE_KEY=[Key] \
newrelic/infrastructure:latest
But where do I actually run or put this so that it monitors my Heroku apps?

How to create an EC2 instance with docker-machine using an existing key pair?

I use this command to create an instance on AWS:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-keypair-name my_key_pair \
--amazonec2-ssh-keypath ~/.ssh/id_rsa \
my_instance
I can't connect to it via SSH.
my_key_pair is the name of a key pair that exists on AWS, and ~/.ssh/id_rsa is my local SSH private key. How do I set the right values?
I have read the documentation but didn't find an example using both --amazonec2-keypair-name and --amazonec2-ssh-keypath.
Download the key file from "Key Pairs" in the AWS Console and place it in ~/.ssh.
Then run:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-ssh-keypath ~/.ssh/keypairfile \
my_instance
In gitlab-runner using Docker+machine, you have to provide both "amazonec2-keypair-name=XXX" and "amazonec2-ssh-keypath=XXX".
The keypath should be something like /home/gitlab-runner/.ssh/id_rsa, and the same directory should also contain the matching id_rsa.pub. These two files should not be your locally generated key; they should be the id_rsa and id_rsa.pub derived from the PEM created in AWS.
The following commands will do the trick:
cat faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa
ssh-keygen -y -f faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa.pub
This will allow the runner manager instance to connect to the provisioned runner using your existing AWS key pair.
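One detail the answer doesn't spell out (worth checking, since SSH enforces it): the private key must not be readable by other users, or the SSH client will refuse to use it:
chmod 600 /home/gitlab-runner/.ssh/id_rsa       # private key: owner read/write only
chmod 644 /home/gitlab-runner/.ssh/id_rsa.pub   # public key may stay world-readable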

Configure EMR Cluster for Fair Scheduling

I am trying to spin up an EMR cluster with fair scheduling so that I can run multiple steps in parallel. I see that this is possible via Data Pipeline (https://aws.amazon.com/about-aws/whats-new/2015/06/run-parallel-hadoop-jobs-on-your-amazon-emr-cluster-using-aws-data-pipeline/), but I already have cluster creation and management automated via an Airflow job calling the awscli, so it would be great to just update my configuration.
aws emr create-cluster \
--applications Name=Spark Name=Ganglia \
--ec2-attributes "${EC2_PROPERTIES}" \
--service-role EMR_DefaultRole \
--release-label emr-5.8.0 \
--log-uri ${S3_LOGS} \
--enable-debugging \
--name ${CLUSTER_NAME} \
--region us-east-1 \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=4,InstanceType=m3.xlarge
I think it may be achieved using the --configurations flag (https://docs.aws.amazon.com/cli/latest/reference/emr/create-cluster.html), but I am not sure of the correct property names.
Yes, you are correct. You can use EMR configurations to achieve your goal. You can create a JSON file with something like the following:
yarn-config.json:
[
  {
    "Classification": "yarn-site",
    "Properties": {
      "yarn.resourcemanager.scheduler.class": "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler"
    }
  }
]
as per the Hadoop Fair Scheduler docs.
Then modify your AWS CLI command as follows:
aws emr create-cluster \
--applications Name=Spark Name=Ganglia \
--ec2-attributes "${EC2_PROPERTIES}" \
--service-role EMR_DefaultRole \
--release-label emr-5.8.0 \
--log-uri ${S3_LOGS} \
--enable-debugging \
--name ${CLUSTER_NAME} \
--region us-east-1 \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=4,InstanceType=m3.xlarge \
--configurations file://yarn-config.json
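To confirm the classification was applied once the cluster is up, something like the following should work (a sketch; j-XXXXXXXX is a placeholder cluster id):
aws emr describe-cluster \
--cluster-id j-XXXXXXXX \
--region us-east-1 \
--query 'Cluster.Configurations'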

Vagrant keeps losing file doing provision

I'm running into odd behavior with the latest version of Vagrant in a Windows 7/msys/VirtualBox environment: after executing vagrant up, I get an rsync error during the provisioning stage: 'file has vanished: "/c/Users/spencerd/workspace/watcher/.LISTEN'.
Since Google, IRC, and the issue trackers have little to no documentation on this issue, I wonder if anyone else has run into it and what the fix would be.
And for the record, I have successfully built a box using the same Vagrantfile and provisioning script. For those who want to look, the project code is up at https://gist.github.com/denzuko/a6b7cce2eae636b0512d, with the debug log at gist.github.com/
After digging further into the directory structure and running into issues pushing code up with git, I found a stray file that needed to be removed after a reboot.
Thus, doing a reboot and a rm -rf -- "./.LISTEN\ \ \ \ \ 0\ \ \ \ \ \ 100\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ " did the trick.
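If retyping all of those escaped spaces is error-prone, a name glob should catch the stray file as well (a sketch, matching anything starting with .LISTEN in the project root):
find . -maxdepth 1 -name '.LISTEN*' -print -delete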
