Amazon EC2 - user-data through the AWS console doesn't work

From the Amazon EC2 console (or even if I do it through the API tools on a box, using a file), I paste the following into the user-data text box:
#!/bin/bash -ex
# tell the world what we've done!
echo 'thisisthetoken' > /home/ec2-user/testuserdata
When the instance boots (the Amazon Linux AMI), the file is not in the directory. Am I missing something basic?

Be sure you are using the newest AMI from Amazon. You can also test an Ubuntu AMI from www.alestic.com.
/var/log/messages (or /var/log/syslog on Ubuntu) logs user-data execution.
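As a quick sanity check, you can also confirm the instance actually received your user-data by querying the metadata service from inside the instance (on newer instances with IMDSv2 enforced you would need a session token first):
curl http://169.254.169.254/latest/user-data
If that prints your script but the file was never created, the problem is in how the script is executed at boot rather than in the console.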

There was a bug in the original AMI; the current Amazon Linux AMIs work with the same user-data script.

Your AMI should support user-data (most alestic.com images do).

Related

Shell - Install one script onto a group of servers

I have a shell script which needs to be installed on over 100 Ubuntu instances/servers. What is the best way to install the same script on all instances without logging into each one?
You can use AWS Systems Manager. According to the AWS documentation:
You can send commands to tens, hundreds, or thousands of instances by using the targets parameter (the Select Targets by Specifying a Tag option in the Amazon EC2 console). The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you execute the command, the system locates and attempts to run the command on all instances that match the specified tags.
You can target instances by tag:
aws ssm send-command --document-name name --targets Key=tag:tag_name,Values=tag_value [...]
or target instance IDs:
aws ssm send-command --document-name name --targets Key=instanceids,Values=ID1,ID2,ID3 [...]
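For example, a minimal sketch of pushing a one-off shell script to every instance carrying a given tag (the tag name, script path, and region here are assumptions, not values from the question):
aws ssm send-command --document-name "AWS-RunShellScript" --targets Key=tag:Environment,Values=Staging --parameters '{"commands":["bash /tmp/install.sh"]}' --comment "install script on all tagged instances" --region us-east-1
The command returns a CommandId that you can pass to aws ssm list-command-invocations to check per-instance status.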
Read the AWS documentation for details.
Thanks
You have several different options for accomplishing this task.
Like Kush mentioned, AWS Systems Manager is great, but it is tightly coupled to AWS.
Packer - you could use Packer to create an AMI of the servers with the script already installed on them, or with whatever the script does already baked in.
Configuration management - Ansible/Puppet/Chef. These tools allow you to manage thousands of servers with only a couple of commands. My preference would be Ansible: it is lightweight, the syntax is just YAML, it connects over SSH, and it still lets you run plain shell scripts if need be (a minimal example follows below).
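As a rough sketch of the Ansible route (the inventory file, group name, and script path are assumptions): the ad-hoc script module copies a local shell script to every host in the inventory, runs it there, and cleans up afterwards, so nothing needs to be pre-installed on the instances beyond SSH access.
ansible all -i inventory.ini -m script -a "./install.sh" --become
For anything more involved, the same task can live in a playbook so the run is repeatable and version-controlled.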

Editing cloud-init files

I've created a cloud-init file using Terraform's aws_instance.user_data parameter.
This is then executed on start-up, on CentOS machines, by the cloud-init systemd service.
I would like to edit this file on the fly for dev/testing purposes and then simply restart that service.
To this end, where can I find the cloud-init file that contains the commands run by the cloud-init service?
On my CentOS machine, this was found in the file:
/var/lib/cloud/instances/<instance-id>/user-data.txt
Where instance-id is found in the file /var/lib/cloud/data/instance-id.
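Note that /var/lib/cloud/instance is a symlink to that current instance directory, and when the user data is a plain shell script cloud-init also writes the rendered script under /var/lib/cloud/instance/scripts/ (typically as part-001). For quick dev/testing iterations it can be easier to edit and re-run that rendered script directly, for example:
sudo bash -x /var/lib/cloud/instance/scripts/part-001
rather than restarting the cloud-init service, since cloud-init caches per-instance state and may not re-execute user scripts on a simple service restart.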

Is it possible to run kubernetes as a docker container?

I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it.
There's a confound though: I am running on a Windows machine.
Their "getting started" documentation on GitHub says you have to run Linux to use Kubernetes.
As Docker runs on Windows, I was wondering if it was possible to create a Kubernetes instance as a container in Windows Docker and use it to manage the rest of the cluster in the same Windows Docker instance.
From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to
Start docker, boot 'default' machine.
Create kubernetes container - configure to communicate with the existing docker 'default' machine
Use kubernetes to manage existing docker.
Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a Vagrant instance. Does that mean Docker, etcd, and Kubernetes run together in a single VM (which in turn creates a cluster of virtual machines inside it)?
I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
With Windows, you need docker-machine and a boot2docker VM to run anything Docker-related.
There is no (not yet) "Docker for Windows".
Note that issue 7428 mentioned "Can't run kubernetes within boot2docker".
So even when you follow instructions (from a default VM created with docker-machine), you might still get errors:
➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet
The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu) instead of a boot2docker-like VM (based on a TinyCore distro).
All k8s components can be brought up with hyperkube, which helps you run a containerized cluster.
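For reference, the docker-based local setup of that era ran three containers on one host: etcd, the kubelet shown in the log output above (which in turn launches the apiserver, scheduler, and controller-manager from /etc/kubernetes/manifests), and the service proxy. A rough sketch, with image tags taken from that v0.14-era guide and therefore likely outdated:
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2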
If you're able to run docker on windows, it would probably work. I haven't tried it on windows personally.

How to run a shell script on all Amazon EC2 instances that are part of an autoscaling group?

Can anyone please tell me how to run a shell script on all the ec2 instances that are part of an auto scaling group?
The scenario is that I have a script that I want to run on many EC2 instances that are started automatically as part of an auto scaling group. The naive approach is to SSH into each instance and run the script. I am looking for a way for it to run automatically on all the instances when I run it on one of the EC2 instances, or any better way of doing this.
Thanks in advance.
You'll want to add that shell script to the userdata in a NEW Launch Config and then update the autoscaling group.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
Updating the Launch Config
If you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
https://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
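A hedged sketch of the CLI steps (the names, AMI ID, instance type, and key name are placeholders): create a new launch configuration whose user data is your script, then point the Auto Scaling group at it.
aws autoscaling create-launch-configuration --launch-configuration-name my-lc-v2 --image-id ami-xxxxxxxx --instance-type t2.micro --key-name my-key --user-data file://bootstrap.sh
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name my-lc-v2
Remember this only affects instances launched after the change; existing instances keep running with the old user data unless you replace them (for example by terminating them and letting the group re-launch).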
You can implement this in many different ways...
Use the AWS CLI to get all instances in the auto scaling group:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name myTestGroup
SSHKit is an interesting tool to run commands on remote servers
or
Invest your time in automating your infrastructure with proper tools like Puppet or Chef. With Puppet MCollective you can do magic, including what you have asked about.
Update:
When you add an instance to an autoscaling group, a new tag (name=aws:autoscaling:groupName, value=name_of_assigned_autoscaling_group) is added, so it is easy to find instances by searching for this tag.
$ asgName=testASG
$ aws ec2 describe-instances --filters Name=tag-key,Values='aws:autoscaling:groupName' Name=tag-value,Values=$asgName --output text --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress]'
The output you will get from the command above is the instance ID and its public IP:
i-4c42aabc 52.1.x.y
You can use these in your script...
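For instance (a rough sketch; the key file, user name, and script path are assumptions), you could loop over that output and push the script to each instance over SSH:
aws ec2 describe-instances --filters Name=tag-key,Values='aws:autoscaling:groupName' Name=tag-value,Values=$asgName --output text --query 'Reservations[*].Instances[*].PublicIpAddress' | tr '\t' '\n' | while read ip; do ssh -i mykey.pem ec2-user@$ip 'bash -s' < ./myscript.sh; done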
I would do it using Chef. OpsWorks (an AWS product) is Chef plus a lot of things that will do exactly what you want and give you even more flexibility.
Run Command perhaps?
You can use it to invoke scripts as well:
*Nice cheat: select Run Command in the left-side menu of the EC2 section, click the "Run Command" button, then set up your AWS-RunShellScript and type in your code. At the bottom there is a dropdown labelled "AWS Command Line Interface command"; select the correct platform and copy/paste the command into a script.
$Command_Id = aws ssm send-command --document-name "AWS-RunPowerShellScript" --targets '{\"Key\":\"tag:Name\",\"Values\":[\"RunningTests\"]}' --parameters '{\"commands\":[\"Start-Process \\\"C:\\\\path\\\\to\\\\scripts\\\\LOCAL_RUN.ps1\\\"\"]}' --comment "Run Tests Locally" --timeout-seconds 3800 --region us-east-1 --query 'Command.CommandId' --output text | Out-String
Per the question: use your Auto Scaling Group name instead of "RunningTests". Or, in the console, in your "Run Command" setup, select the "Specifying a Tag" radio button, then "Name" and your Auto Scaling Group.
*Note: the command above is Windows PowerShell, but you can convert your script to Linux or whatever platform by selecting the correct platform in the Run Command setup.
**Note: ensure the user/role on that instance has the AmazonSSMFullAccess permission set up to run the commands.
***Note: the SSM Agent comes installed on Windows instances by default. If you are running Linux or something else, you might need to install the SSM Agent.
For a simple implementation you can use Python Fabric (http://www.fabfile.org/). It is a tool to run commands from a local or bastion instance against a list of servers.
Here is a repo which has basic scaffolding and examples. A lot of CI/CD tools have features targeting this requirement, but I found Fabric the easiest to implement for a simple setup.
https://github.com/techsemicolon/python-fabric-ec2-aws

Cloudera CDH on EC2

I am an AWS newbie, and I'm trying to run Hadoop on EC2 via Cloudera's AMI. I installed the AMI, downloaded the cloudera-hadoop-for-ec2 tools, and now I'm trying to configure
hadoop-ec2-env.sh
It is asking for the following:
AWS_ACCOUNT_ID
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
EC2_KEYDIR
PRIVATE_KEY_PATH
when running:
./hadoop-ec2 launch-cluster my-cluster 10
I'm getting
AWS was not able to validate the provided access credentials
Firstly, I have the first 3 attributes for my own account. This is a corporate account, and I received an email with the access key ID and secret access key for my email. Is it possible that my account doesn't have the proper permissions to do what is needed here? Exactly why does this script need my credentials? What does it need to do?
Secondly, where is the EC2 key dir? I've uploaded my key.pem file that Amazon created for me, hard-coded that into PRIVATE_KEY_PATH, and ran chmod 400 on the .pem file. Is that the correct key that this script needs?
Any help is appreciated.
Sam
The Cloudera EC2 tools rely heavily on the Amazon EC2 API tools. Therefore, you must do the following:
1) Download the Amazon EC2 API tools from http://aws.amazon.com/developertools/351
2) Download the Cloudera EC2 tools from http://cloudera-packages.s3.amazonaws.com/cloudera-for-hadoop-on-ec2-0.3.0.tar.gz
3) Set the following env variables (I am only giving Unix-based examples):
export EC2_HOME=<path-to-tools-from-step-1>
export PATH=$PATH:$EC2_HOME/bin
export PATH=$PATH:<path-to-cloudera-ec2-tools>/bin
export EC2_PRIVATE_KEY=<path-to-private-key.pem>
export EC2_CERT=<path-to-cert.pem>
4) In cloudera-ec2-tools/bin, set the following variables:
AWS_ACCOUNT_ID=<amazon-acct-id>
AWS_ACCESS_KEY_ID=<amazon-access-key>
AWS_SECRET_ACCESS_KEY=<amazon-secret-key>
EC2_KEYDIR=<dir-where-the-ec2-private-key-and-ec2-cert-are>
KEY_NAME=<name-of-ec2-private-key>
And then run
$ hadoop-ec2 launch-cluster my-hadoop-cluster 10
This will create a Hadoop cluster called "my-hadoop-cluster" with 10 nodes across multiple EC2 machines.
