Terminate Amazon EC2 instance using shutdown - amazon-ec2

I can terminate Amazon EC2 instances using the API command ec2-terminate-instances, but I'm trying to find out how to do this while logged on to the EC2 instance itself.
I've tried shutdown -h now but this only "stops" the instance, without fully terminating it.
Is there any way to do this?

There is an option that you can set at instance creation that will allow the instance to terminate on shutdown.
If you're using the ec2 command line tools, add the option: --instance-initiated-shutdown-behavior terminate
After creating an instance with that option, issuing the shutdown -h now command from within the instance will terminate it instead of stopping it.
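For example, a quick sketch (the AMI and instance IDs are placeholders; the same behavior can also be set with the modern AWS CLI, either at launch or on an existing instance):

# Legacy ec2-api-tools, as referenced above:
ec2-run-instances ami-xxxxxxxx --instance-initiated-shutdown-behavior terminate

# Modern AWS CLI equivalents:
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-initiated-shutdown-behavior terminate
aws ec2 modify-instance-attribute --instance-id i-xxxxxxxx --instance-initiated-shutdown-behavior terminate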

Related

Initialize and terminate a bot on EC2 remotely

I am trying to run a script to start, stop, or restart a bot from my front end webpage.
I have a bot that runs almost 24/7 on a Linux EC2 instance, and a webpage front end that allows for parameter input and shows the current status of the bot. The front end sends a POST request to a lambda function, which writes the parameters to my S3 bucket. The script to start the bot on the EC2 instance pulls the latest parameters from S3 and initializes the bot. When the bot starts up and shuts down, it writes the status ("running", "stopped") to a file in the S3 bucket, which then shows on the front end.
I have looked into SSM Run Command with Lambda, but given that the bot runs for days at a time, I don't believe that's viable. Additionally, it uses an agent to connect, so a screen session started through it would terminate when the agent terminates.
I have also tried adding the script to my EC2 instance's User Data, but that does not seem to work. Similarly, a cron job on reboot does not work.
I've considered using a trigger file in S3, i.e. having the EC2 instance check at a given time interval for some trigger file in S3 that would indicate a start or stop, but that seems very resource intensive.
What alternatives do I have?
The solution that worked for me was setting up a crontab job that runs on reboot, then starting, stopping, and restarting the EC2 instance with a Lambda function.
Steps to resolve this for anyone in the same boat:
SSH into the EC2 instance
crontab -e
add the following line:
@reboot sleep 60 && cd /home/ec2-user/bot_folder/ && /usr/bin/screen -S bot -dm /usr/bin/python3 run_bot.py
(for vim, press i to enter insert mode, paste the line and make changes, then press Esc, type :wq, and press Enter to save)
Ensure that the script has all of its paths specified absolutely. In my case, using Selenium, the chromedriver path needed to be specified.
Finally, set up a Lambda function to start/stop/reboot your instance, as the comment above referenced.
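For reference, here is a rough sketch of what such a Lambda function boils down to, expressed as the equivalent AWS CLI calls (the instance ID is a placeholder, and the Lambda's role needs the matching ec2:StartInstances/StopInstances/RebootInstances permissions):

# Start, stop, or reboot the instance; on reboot, the @reboot crontab
# entry above relaunches the bot inside a detached screen session.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0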

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (forcing me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank you.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script. The start.sh script starts the ssh service and then runs the nodejs process as a child process. Once the nodejs process dies, the shell script exits as well, and so the container exits. So what you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# The infinite loop below keeps the script (and thus the container) from exiting
while true; do
    sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why:
Running multiple applications in one docker container
If you still want to run multiple processes inside a container, there are better ways to do it, such as using supervisord: https://docs.docker.com/config/containers/multi-service_container/
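For illustration, a minimal supervisord sketch along the lines of the linked Docker docs (the paths and program names are assumptions, not taken from the question):

; supervisord.conf
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:node]
command=/usr/bin/node /myApp/app.js

The Dockerfile's CMD then points at supervisord instead of start.sh, e.g. CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"], and supervisord keeps both processes running and can restart either one independently.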

How to keep logstash running even when I logout from the remote server

I am connecting to a remote host via ssh login and running logstash by the following command
$./logstash -f first-pipeline.conf
However, after I log out from the server, logstash stops running. How can I keep it running even after I log out? Thanks.
Another approach is to use the screen command, which can be very useful for this.
First you open your SSH session, then type screen at the prompt. That opens a new session in which you can run your logstash command.
When it runs, you simply press Ctrl+a then d in order to detach yourself from that screen, and you can safely log out.
Whenever you log back into your SSH session, you enter screen -r and you will get back into your previous session where logstash was started.
You can create as many "screens" as you wish to start many different processes at different times.
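Putting it all together (the session name below is just illustrative):

$ screen -S logstash                  # start a new named session
$ ./logstash -f first-pipeline.conf   # run logstash inside it
# press Ctrl+a then d to detach; you can now log out safely
$ screen -r logstash                  # later, reattach to the running session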
Also see this comparison between using nohup and screen
Just run it as an agent:
$ logstash agent -f ~/logstash/pipeline.conf

Run python flask on EC2 in the background

I have a small app created with Python Flask and deployed on an AWS EC2 machine. When I SSH into the EC2 machine and start Flask, it works, but when I terminate the session, Flask dies. I can run it using nohup. What is the best way to make it independent of the SSH session and keep it running continuously?
There are several options:
nohup python app.py &
use screen
run supervisord (link) on system startup and control everything through it (the pythonic way :))
nohup means: do not terminate this process even when the stty is cut off.
& at the end means: run this command as a background task.
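Putting the two together, a minimal sketch (the log path is an assumption; redirecting output keeps nohup from dropping a nohup.out file in your working directory):

$ nohup python app.py > app.log 2>&1 &
$ disown    # optional: also remove the job from the shell's job table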

Self-Terminating AWS EC2 Instance?

Is there a way that Amazon Web Services EC2 instances can be self-terminating? Does Amazon have anything that allows an instance to terminate itself ("Hara-Kiri") after running for more than, say, an hour? I could change the scripts on the running instance to do this itself, but that might fail, and I don't want to edit the image, so I would like Amazon to kill the instance.
To have an instance terminate itself do both of these steps:
Start the instance with --instance-initiated-shutdown-behavior terminate or the equivalent on the AWS console or API call.
Run shutdown -h now as root. On Ubuntu, you could set this up to happen in 55 minutes using:
echo "sudo halt" | at now + 55 minutes
I wrote an article a while back on other options to accomplish this same "terminate in an hour" goal:
Automatic Termination of Temporary Instances on Amazon EC2
http://alestic.com/2010/09/ec2-instance-termination
The article was originally written before instance-initiated-shutdown-behavior was available, but you'll find updates and other gems in the comments.
You can do this:
ec2-terminate-instances $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
The instance will fetch its own instance ID from the metadata service and terminate itself.
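The same idea with the modern AWS CLI would look like this (a sketch; the region must be configured or passed with --region, and the instance needs an IAM role allowing ec2:TerminateInstances):

aws ec2 terminate-instances --instance-ids "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"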
Hopefully this will work:
instanceId=$(curl http://169.254.169.254/latest/meta-data/instance-id/)
region=$(curl http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk '{print $3}' | sed 's/"//g'|sed 's/,//g')
/usr/bin/aws ec2 terminate-instances --instance-ids $instanceId --region $region
Hope this helps!
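As an aside, a simpler region lookup is to read the availability zone from the metadata service and strip the trailing letter (a common trick, not part of the original answer):

region=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')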
Here is my script for self-terminating:
$ EC2_INSTANCE_ID="`wget -q -O - http://instance-data/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
$ echo "ec2-terminate-instances $EC2_INSTANCE_ID" | at now + 55 min || die 'cannot obtain instance-id'
If you want self-stopping instead of self-terminating, you only need to set it up once.
In your EC2 console, go to Instance Settings and change Shutdown Behavior to Stop.
Configure /etc/cloud/cloud.cfg; you may refer to how to run a boot script using cloud-init.
Following the answer from Eric Hammond, put the command in a file and place it in the scripts-per-boot path:
$ echo '#!/bin/sh' > per-boot.sh
$ echo 'echo "halt" | at now + 55 min' >> per-boot.sh
$ echo 'echo per-boot: `date` >> /tmp/per-boot.txt' >> per-boot.sh
$ chmod +x per-boot.sh
$ sudo chown -R root per-boot.sh
$ sudo mv -viu per-boot.sh /var/lib/cloud/scripts/per-boot
Reboot your instance, check if the script is executed:
$ cat /tmp/per-boot.txt
per-boot: Mon Jul 4 15:35:42 UTC 2016
If so, then in case you forget to stop your instance, this assures you that the instance will shut itself down (stopping rather than terminating, per the shutdown behavior) after it has run for 55 minutes, or whatever time you set in the script. When the halt fires, you will see something like:
Broadcast message from root@ip-10-0-0-32
(unknown) at 16:30 ...
The system is going down for halt NOW!
PS: For everyone who wants to use self-stopping, one thing you should note is that not all EC2 instance types recover by themselves on restarting. I recommend using EC2-VPC/EBS with an On/Off schedule.
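If you want to double-check which shutdown behavior is currently set on an instance, you can query it (the instance ID is a placeholder):

aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute instanceInitiatedShutdownBehavior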
I had a similar need, where I had web applications firing up EC2 instances. I could not trust the web application to stop/terminate the instances, so I created a script to run in a separate process, called the "feeder". The feeder owns the responsibility of stopping/terminating the instance. The web application must periodically request that the feeder "feed" the instance. If an instance "starves" (is not fed within a timeout period), the feeder will stop/terminate it. Two feeders can be run simultaneously on different machines to protect against issues with one feeder process. In other words, the instance runs on a pressure switch. When pressure is released, the instance is stopped/terminated. This also allows me to share an instance among multiple users.
To the original question, the feeder, which could be running in the EC2 instance itself, eliminates the need to know a priori how long the task will be running, but it places a burden on the application to provide periodic feedings. If the laptop is closed, the instance will go down.
The feeder lives here: https://github.com/alessandrocomodi/fpga-webserver and has a permissive open-source license.
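To make the idea concrete, here is a minimal watchdog sketch of the feeder concept (hypothetical file path and timeout; this is not the actual fpga-webserver implementation):

#!/bin/bash
# Stop the machine if nobody has "fed" it (touched $FEED_FILE) recently.
FEED_FILE=/var/run/feed.timestamp   # the web app touches this file to feed the instance
TIMEOUT=3600                        # starve after one hour without a feeding
while true; do
    last=$(stat -c %Y "$FEED_FILE" 2>/dev/null || echo 0)
    now=$(date +%s)
    if [ $((now - last)) -gt "$TIMEOUT" ]; then
        shutdown -h now             # stops or terminates, per the instance's shutdown behavior
    fi
    sleep 60
done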
