Start-Up script for a running GCE instance - bash

I just tried to set up a small startup script for a running GCE instance. I added custom metadata with the key startup-script and the value
#! /bin/bash
vncserver -geometry 1920x1080
However, it doesn't seem to be taken into account when I restart the instance. Moreover, running sudo google_metadata_script_runner --script-type startup doesn't produce any output either... What am I doing wrong?
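For reference, this kind of metadata can also be set from the command line with gcloud (a minimal sketch; the instance name is a placeholder and the script is assumed to be saved locally as startup.sh):
gcloud compute instances add-metadata my-instance \
    --metadata-from-file startup-script=startup.sh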

Related

Bash- Running aws ec2 run-instances in background

How can I run the command aws ec2 run-instances in bash (macOS) so that it runs in the background? (Right now, when I run it, it is in an interactive mode in which I need to scroll to the end.)
That command actually executes and completes immediately.
However, the AWS CLI uses a pager to show you the output. You can change this behaviour either by requesting less information with --query, or by removing the pager.
To remove the pager, add this to your ~/.aws/config file:
[default]
cli_pager=
This will cause all output to scroll up the screen without waiting for user input.
For more details, see: Using AWS CLI pagination options - AWS Command Line Interface
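Alternatively, assuming AWS CLI v2, the pager can be disabled for a single invocation or for the whole shell session (the AMI id below is a placeholder):
# disable the pager for one command
aws ec2 run-instances --image-id ami-xxxxxxxx --no-cli-pager
# or disable it for the whole session
export AWS_PAGER=""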

Aws Ec2 run script program at startup

Is there a way to set up an EC2 machine to execute a Kafka startup script on startup?
I also use the Java AWS SDK, so I will accept both a solution for a Java program that runs commands on an EC2 instance and a solution for a bash script that runs the Kafka script at startup.
A script can be passed in the User Data property.
If you are using the Amazon Linux AMI, and the first line of the script begins with #!, then the script will be executed the first time that the instance is started.
For details, see: Running Commands on Your Linux Instance at Launch
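As a minimal sketch, a User Data script for the Kafka case might look like the following (paths assume a stock Apache Kafka tarball unpacked under /opt/kafka; adjust to your installation):
#!/bin/bash
# start ZooKeeper and the Kafka broker as background daemons at first boot
/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties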
Adding a script under User Data in CloudFormation only runs it once, right when the instance is launched, but not when the instance is restarted, which is what I needed for my use case. I use the rc.local approach as commented above and here. The following in effect appends my script to the rc.local file and performs as expected:
Resources:
  VM:
    Type: 'AWS::EC2::Instance'
    Properties:
      [...]
      UserData:
        'Fn::Base64': !Sub |
          #!/bin/bash -x
          echo 'INSTANCEID="$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"' >> /etc/rc.local
          #echo 'INSTANCEID=$(ls /var/lib/cloud/instances)' >> /etc/rc.local
          echo 'echo "aws ec2 stop-instances --instance-ids $INSTANCEID --region ${AWS::Region}" | at now + ${Lifetime} minutes' >> /etc/rc.local
Additional tip: You can inspect the user data (the current script) and modify it using the AWS console by following these instructions: View and update the instance user data.
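The same check can be done from the command line; for example (the instance id is a placeholder, and the attribute comes back base64-encoded):
aws ec2 describe-instance-attribute --attribute userData \
    --instance-id i-0123456789abcdef0 \
    --query 'UserData.Value' --output text | base64 --decode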
What is the OS of the EC2 instance?
You could use a user data script at instance launch time. Remember this is just a one-time activity.
If your requirement is to run the script every time you reboot the EC2 instance, then you can make use of the rc.local file on Linux instances, which is executed at OS boot time.
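As a rough sketch, assuming a hypothetical startup script at /usr/local/bin/start-kafka.sh, wiring it into rc.local could look like this:
echo '/usr/local/bin/start-kafka.sh &' >> /etc/rc.local
# on some distros rc.local must be executable for it to run at boot;
# if your rc.local ends with 'exit 0', add the line above it instead
chmod +x /etc/rc.local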
rc.local didn't work for me.
I used crontab following this guide, which sets up a cron job that can run a script on startup.
https://phoenixnap.com/kb/crontab-reboot
It's essentially:
crontab -e
<select editor>
@reboot <script to run>
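A concrete crontab line might look like this (the script path is hypothetical; redirecting output makes boot-time failures easier to debug):
@reboot /usr/local/bin/start-kafka.sh >> /var/log/kafka-boot.log 2>&1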
If you are running a Windows EC2 instance, you'll want to read: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
Example:
<script>
echo Current date and time >> %SystemRoot%\Temp\test.log
echo %DATE% %TIME% >> %SystemRoot%\Temp\test.log
</script>

Monit requires manual restart in order to receive max open files value for running a process, bug?

I've been trying to figure this out for quite some time and I don't seem to be able to find any information on this issue.
Diving into the issue:
I am running an application on Ubuntu 14.04 using Monit V5.6.
The deployment of the application and Monit is done using Chef scripts with AWS OpsWorks, which works very well.
The problem is that once done, Monit starts the application using the following syntax:
start program = "/bin/sh -c 'ulimit -n 150000; <some more commands here which are not intersting>'" as uid <user> and gid <user_group>
This indeed starts the application as the correct user, but the problem is that max open files for the process shows 4096 instead of the number set in limits.conf.
Just to be clear, I have set the following in /etc/security/limits.conf
root hard nofile 150000
root soft nofile 150000
* hard nofile 150000
* soft nofile 150000
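For reference, the limit a running process actually received can be checked through its /proc entry (replace <PID> with the application's process id):
grep "Max open files" /proc/<PID>/limits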
Furthermore, if I stop the application, then do a service monit restart, and then start the application, the max open files value is applied correctly and I see 150000.
If I then redeploy the application without rebooting the instance, this happens again and I have to manually restart Monit again.
Also if I run the application using the following syntax in order to mimic Monit:
sudo -H -u <user> /bin/sh -c 'ulimit -n 150000; <more commands here>'
Then everything works and the process receives the correct max open files value.
If I try to script this manual service monit restart, with stopping and starting the application via Chef scripts, it also fails and I receive 4096 as the max open files value. Thus my only option is to do this manually each time I deploy, which is not very convenient.
Any help on this or thoughts would be greatly appreciated.
Thanks!
P.S. I also reviewed the following articles:
https://serverfault.com/questions/797650/centos-monit-ulimit-not-working
https://lists.nongnu.org/archive/html/monit-general/2010-04/msg00018.html
but as manually restarting Monit makes this work, I am looking for a solution that does not involve changing init scripts.

AWS Launch Configuration not picking up user data

We are trying to build an auto scaling group (let's say AS) configured with an elastic load balancer (let's say ELB) in AWS. The auto scaling group itself is configured with a launch configuration (let's say LC). As far as I can understand from the AWS documentation, pasting a script, as-is, in the user data section of the launch configuration should run that script for every instance launched into the auto scaling group associated with that launch configuration.
For example, pasting this in user data should leave a file named configure in the home folder of a t2.micro Ubuntu instance:
#!/bin/bash
cd
touch configure
Our end goal is: increase the number of instances in the auto scaling group, have them launch with our startup script, and have each new instance added behind the load balancer tagged with the auto scaling group. But the script was not executed at instance launch. My questions are:
1. Am I missing something here?
2. What should I do to run our startup script at the time of launching any new instance in an auto scaling group?
3. Is there any way to verify whether the user data was really picked up by the launch?
The direction you are following is right. What is wrong is your user data script.
Problem 1:
What you have to remember is that user data will be executed as user root, not ubuntu. So if your script had worked, you would find your file at /root/configure, NOT at /home/ubuntu/configure.
Problem 2:
Your script is actually executing, but it's incorrect and fails at the cd command, so the file is not created.
The cd builtin without a directory argument will try to do cd $HOME; however, $HOME is NOT SET during the cloud-init run, so you have to be explicit here.
Change your script to below and it will work:
#!/bin/bash
cd /root
touch configure
You can also debug issues with your user data script by inspecting the /var/log/cloud-init.log log file, in particular checking for errors in it: grep -i error /var/log/cloud-init.log
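To address question 3 directly, you can also verify what user data the instance actually received by querying the instance metadata service from within the instance:
curl -s http://169.254.169.254/latest/user-data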
Hope it helps!

Passing S3cmd commands As User Data To Ec2

I have one AWS EC2 instance. From this EC2 instance I am creating slave EC2 instances.
While creating the slave instances I pass user data to each new slave instance. In that user data I have written code for creating a new directory on the EC2 instance and downloading a file from an S3 bucket.
The problem is that the script creates the new directory on the EC2 instance, but it fails to download the file from the S3 bucket.
User Data Script:
#! /bin/bash
cd /home
mkdir pravin
s3cmd get s3://bucket/usr.sh >> download.log
As shown above, in this code mkdir pravin creates the directory, but s3cmd get s3://bucket/usr.sh fails to download the file; the download.log file also gets created, but it remains empty.
How can I solve this problem? (The AMI used for this is preconfigured with s3cmd.)
Are you by chance running Ubuntu? Then Shlomo Swidler's question Python s3cmd only runs from login shell, not during startup sequence might apply exactly:
The s3cmd Python script (this one: http://s3tools.org/s3cmd ) seems to only work when run via an interactive login session, but not when run via scripts during the boot process.
Mitch Garnaat suggests that one should always beware of environmental differences inflicted by executing code within User-Data Scripts:
It's probably related to some difference in your environment when you are logged in as opposed to when the script is running as part of the startup sequence. I have run into similar problems with cron jobs.
This turned out to be the problem indeed, Shlomo Swidler summarizes the 'root cause' and a solution further down in this thread:
Mitch, your comment helped me realize what's different about the startup sequence: the operative user is root. When I log in, I'm the "ubuntu" user.
s3cmd looks in the current user's ~/.s3cfg - which didn't exist as /root/.s3cfg, only as /home/ubuntu/.s3cfg.
Luckily, s3cmd allows you to specify the config file's location with --config /home/ubuntu/.s3cfg.
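Putting that together, a corrected version of the user data script from the question might look like this (the config path assumes the default "ubuntu" user):
#! /bin/bash
cd /home
mkdir pravin
# user data runs as root, which has no ~/.s3cfg, so point s3cmd at the
# config file explicitly
s3cmd --config /home/ubuntu/.s3cfg get s3://bucket/usr.sh >> download.log 2>&1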
