AWS EC2: run a script at startup - bash

Is there a way to set up an EC2 machine so that it executes a Kafka start script on startup?
I also use the AWS Java SDK, so I would accept either a Java program that runs commands on the EC2 instance or a bash-based solution that runs the Kafka script at startup.

A script can be passed in the User Data property.
If you are using the Amazon Linux AMI, and the first line of the script begins with #!, then the script will be executed the first time that the instance is started.
For details, see: Running Commands on Your Linux Instance at Launch
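For example, a minimal user-data sketch for the Kafka case (the /opt/kafka paths are assumptions; adjust them to your installation):
#!/bin/bash
# Runs once, on first boot, when supplied as EC2 User Data on Amazon Linux
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties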

Adding a script under User Data in CloudFormation only runs it once, when the instance is launched, but not when the instance is restarted, which is what I needed for my use case. I used the rc.local approach, as commented above and here. The following in effect appends my script to the rc.local file and performs as expected:
Resources:
  VM:
    Type: 'AWS::EC2::Instance'
    Properties:
      [...]
      UserData:
        'Fn::Base64': !Sub |
          #!/bin/bash -x
          echo 'INSTANCEID="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"' >> /etc/rc.local
          #echo 'INSTANCEID=$(ls /var/lib/cloud/instances)' >> /etc/rc.local
          echo 'echo "aws ec2 stop-instances --instance-ids $INSTANCEID --region ${AWS::Region}" | at now + ${Lifetime} minutes' >> /etc/rc.local
Additional tip: You can inspect the user data (the current script) and modify it using the AWS console by following these instructions: View and update the instance user data.

What is the OS of the EC2 instance?
You could use a user-data script at instance launch time. Remember this is a one-time activity.
If your requirement is to run the script every time you reboot the EC2 instance, then you can make use of the rc.local file on Linux instances, which is executed at OS boot time, as sketched below.
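A minimal sketch of that approach (assuming a Kafka install under /opt/kafka and an /etc/rc.local that does not already end in exit 0):
# Append the Kafka start command so it runs at every boot
echo '/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties' >> /etc/rc.local
# rc.local must be executable for the boot process to run it
chmod +x /etc/rc.local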

rc.local didn't work for me.
I used crontab, following this guide, which sets up a cron job that runs a script on startup:
https://phoenixnap.com/kb/crontab-reboot
It's essentially:
crontab -e
<select editor>
@reboot <script to run>
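If you need to add the @reboot entry non-interactively (for example from a user-data script), a sketch of the equivalent is (the script path is a placeholder):
# Append an @reboot entry to the current crontab without opening an editor
(crontab -l 2>/dev/null; echo "@reboot /path/to/your-script.sh") | crontab -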

If you are running a Windows EC2 instance you'll want to read: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
Example:
<script>
echo Current date and time >> %SystemRoot%\Temp\test.log
echo %DATE% %TIME% >> %SystemRoot%\Temp\test.log
</script>

Related

How can I automate entering input for a command in a bash script that runs on AWS EC2 launch?

For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from Docker Hub and run it. To log in to Docker Hub I need to enter a username and password, and this is what I would like to automate but haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input, and I would like to automate entering that input.
Thanks in advance!
If entering a password for docker login is your only problem, then I would suggest looking at the docker login manual. 30 seconds on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
which reads the password from the file my_password.txt in the current user's home directory.
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
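As a sketch of that approach (the image name and credential values are placeholders you would bake into the User Data of your launch config):
#!/bin/bash
# Credentials embedded in the User Data itself -- rotate the launch config
# whenever the password changes, as noted above
DOCKER_USER="your-username"
DOCKER_PASS="your-password"
echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin
docker pull your-org/your-private-image:latest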
The better solution here is to store your containers in ECS, and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along the lines of:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
Below is what I could suggest at this moment:
Create an S3 bucket, e.g. mybucket.
Put a text file (doc_pass.txt) with your password into that S3 bucket.
Create an IAM policy that has GET access to just that particular S3 bucket and attach it to the EC2 instance role.
Put the script below in your user data:
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you just need to keep your S3 bucket secure, and no secrets get displayed in the user data.

CloudFormation helper script (bash) is not running as expected

I am trying to set up an Auto Scaling group with rolling updates that kick in on any change to the EC2 launch configuration. I am using a sample template from awslabs. This works fine for the Amazon Linux AMI; I have changed the template to work for Ubuntu. Everything runs fine, and I can see from /var/log/cfn-init.log that my script was initiated properly. Below is the script I am using to check the health of my EC2 instance:
verify_instance_health:
  commands:
    ELBHealthCheck:
      command: !Sub
        'until [[ $state == *InService* ]]; do state=$(aws --region ${AWS::Region} elb describe-instance-health
        --load-balancer-name ${ElasticLoadBalancer}
        --instances $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        --query InstanceStates[0].State);echo $(date): [$state] >> /tmp/health.log;sleep 10;done'
If I run the script directly from the terminal it runs fine and works as expected, but when it runs via CloudFormation, somehow the condition:
until [[ $state == *InService* ]]
is never evaluated to true, which makes the loop keep running. I have verified from /tmp/health.log that the output is "InService". Has anyone faced this issue? What could cause this script to behave differently?

How do I configure the AWS instance IP during user data configuration?

I have a question about passing a shell script to an instance with user data. Since my server is going to run on the instance, the shell script should configure the server.xml information (like the instance IP address, the database IP address, ...) before the server is started.
But since the instance hasn't been created yet, is there any variable I can use to pass the local host information into the shell script? Is there any way to specify some custom variable for the user data before the instance is created? (Before using AWS user data, I used to do this manually through the configure.sh file and the config.properties file after the instance was created.)
#!/bin/bash
# source the properties:
. ./config.properties
echo "Installation"
echo "Updating server.xml"
cd "Server/server/configuration/"
sed -i -s "s/SERVER_IP/"$LOCALHOST_IP"/g" server.xml
sed -i -s "s/DB_IP/"$DATABASE_IP"/g" server.xml
cd "../tomcat/bin"
sh startup.sh
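One possible sketch, building on the instance metadata endpoint already used in other answers here: the user-data script can query the instance's own private IP at boot and substitute it into server.xml (the paths and the database IP are placeholders taken from the question):
#!/bin/bash
# Fetch this instance's private IP from the instance metadata service
LOCALHOST_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
DATABASE_IP="10.0.0.50"   # assumed to be known in advance, e.g. a fixed database address

cd Server/server/configuration/
sed -i "s/SERVER_IP/${LOCALHOST_IP}/g" server.xml
sed -i "s/DB_IP/${DATABASE_IP}/g" server.xml
cd ../tomcat/bin
sh startup.sh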

Passing s3cmd commands as user data to EC2

I have an AWS EC2 instance. From this EC2 instance I am creating slave EC2 instances.
While creating the slave instances I am passing user data to each new slave instance. In that user data I have written code that creates a new directory on the EC2 instance and downloads a file from an S3 bucket.
The problem is that the script creates the new directory on the EC2 instance but fails to download the file from the S3 bucket.
User data script:
#! /bin/bash
cd /home
mkdir pravin
s3cmd get s3://bucket/usr.sh >> download.log
As shown above, mkdir pravin creates the directory, but s3cmd get s3://bucket/usr.sh fails to download the file; download.log also gets created but remains empty.
How can I solve this problem? (The AMI used for this is preconfigured with s3cmd.)
Are you by chance running Ubuntu? Then Shlomo Swidler's question "Python s3cmd only runs from login shell, not during startup sequence" might apply exactly:
The s3cmd Python script (this one: http://s3tools.org/s3cmd ) seems to only work when run via an interactive login session, but not when run via scripts during the boot process.
Mitch Garnaat suggests that one should always beware of environmental differences inflicted by executing code within User-Data Scripts:
It's probably related to some difference in your environment when you are logged in as opposed to when the script is running as part of the startup sequence. I have run into similar problems with cron jobs.
This indeed turned out to be the problem; Shlomo Swidler summarizes the root cause and a solution further down in that thread:
Mitch, your comment helped me realize what's different about the startup sequence: the operative user is root. When I log in, I'm the "ubuntu" user.
s3cmd looks in the current user's ~/.s3cfg - which didn't exist in /root/.s3cfg, only in /home/ubuntu/.s3cfg.
Luckily s3cmd allows you to specify the config file's location with --config /home/ubuntu/.s3cfg.
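Applied to the user-data script from the question, the fix is then a sketch like this (paths taken from the quoted thread):
#!/bin/bash
cd /home
mkdir pravin
# User data runs as root, which has no ~/.s3cfg, so point s3cmd at the
# ubuntu user's config explicitly
s3cmd --config /home/ubuntu/.s3cfg get s3://bucket/usr.sh >> download.log 2>&1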

How can I interact with a Vagrant shell provisioning script?

I have a shell provisioning script that invokes a command that requires user input - but when I run vagrant provision, the process hangs at that point in the script, as the command is waiting for my input, but there is nowhere to give it. Is there any way around this - i.e. to force the script to run in some interactive mode?
The specifics are that I am creating a clean Ubuntu VM and then invoking the Heroku CLI to download a database backup (this is in my provisioning script):
curl -o /tmp/db.backup `heroku pgbackups:url -a myapp`
However, because this is a clean VM, this is the first time I have run a Heroku CLI command, so I am prompted for my login credentials. Because the script is being managed by Vagrant, there is no interactive shell attached, and so the script just hangs there.
If you want to pass temporary input or variables to a Vagrant script, you can supply the credentials as temporary environment variables for that command by placing them first on the same line:
username=x password=x vagrant provision
and access them from within the Vagrantfile as
$u = ENV['username']
$p = ENV['password']
Then you can pass them as an argument to your bash script:
config.vm.provision "shell" do |s|
  s.inline = "echo username: $1, password: $2"
  s.args   = [$u, $p]
end
You can install something like expect in the VM to handle passing those variables to the curl command.
I'm assuming you don't want to hard-code your credentials in plain text, which is why you are trying to force an interactive mode.
The thing is, just like you, I don't see such an option in the vagrant provision docs (http://docs.vagrantup.com/v1/docs/provisioners/shell.html), so one way or another you need to embed the authentication within your script.
Have you thought about getting a token and using the Heroku REST API instead of the CLI?
https://devcenter.heroku.com/articles/authentication
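If you prefer to stay with the CLI, one related sketch (not from the original thread) is to pass a Heroku API token into the provisioner as an argument, since the Heroku CLI reads the HEROKU_API_KEY environment variable and then skips the interactive login:
#!/bin/bash
# Provisioning script sketch: expects the API token as its first argument,
# e.g. passed via s.args as shown in the answer above
export HEROKU_API_KEY="$1"
curl -o /tmp/db.backup "$(heroku pgbackups:url -a myapp)"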
