Initialize and terminate a bot on EC2 remotely

I am trying to run a script to start, stop, or restart a bot from my front end webpage.
I have a bot that runs almost 24/7 on a Linux EC2 instance, and a webpage front end that allows for parameter input and shows the current status of the bot. The front end sends a POST request to a lambda function, which writes the parameters to my S3 bucket. The script to start the bot on the EC2 instance pulls the latest parameters from S3 and initializes the bot. When the bot starts up and shuts down, it writes the status ("running", "stopped") to a file in the S3 bucket, which then shows on the front end.
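For context, the S3 hand-off amounts to a few object operations. A sketch with the AWS CLI (the Lambda itself uses an SDK; the bucket and key names here are illustrative, not my real ones):

# Lambda side: publish the latest parameters
aws s3 cp params.json s3://my-bot-bucket/params.json
# EC2 start script: pull the parameters before launching the bot
aws s3 cp s3://my-bot-bucket/params.json /home/ec2-user/bot_folder/params.json
# bot side: report status for the front end
echo "running" | aws s3 cp - s3://my-bot-bucket/status.txt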
I have looked into SSM Run Command triggered from Lambda, but since the bot runs for days at a time, I don't believe that's viable. Additionally, SSM connects through an agent, so a screen session started that way would be killed when the agent session terminates.
I have also tried adding the script to my EC2 instance's User Data, but that does not seem to work. Similarly, a cron job run at reboot does not work.
I've considered using a trigger file in S3, i.e. having the EC2 instance check at a fixed interval for a file that would indicate a start or stop, but that seems very resource intensive.
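For concreteness, the polling I have in mind would be a crontab entry such as:

* * * * * /home/ec2-user/check_trigger.sh

with a check script along these lines (bucket and key names are placeholders):

#!/bin/bash
# consume a trigger object from S3 if one exists
if aws s3api head-object --bucket my-bot-bucket --key start.trigger >/dev/null 2>&1; then
    aws s3 rm s3://my-bot-bucket/start.trigger
    # ...start the bot here...
fi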
What alternatives do I have?

The solution that worked for me was setting up a crontab job that runs on reboot, then starting, stopping, and restarting the EC2 instance with a lambda function.
Steps to resolve this for anyone in the same boat:
SSH into the EC2 instance
crontab -e
add the following line:
@reboot sleep 60 && cd /home/ec2-user/bot_folder/ && /usr/bin/screen -S bot -dm /usr/bin/python3 run_bot.py
(for vim, press i to enter insert mode, paste the line and make any changes, then press Esc and type :wq followed by Enter to save)
Ensure that the script has all of its paths specified absolutely. In my case, using Selenium, the chromedriver path needed to be specified.
Finally, set up a Lambda function to start, stop, or reboot your instance.
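For reference, the start/stop/reboot operations are single API calls; their AWS CLI equivalents are below (the instance ID is a placeholder — the Lambda makes the same calls through an AWS SDK):

aws ec2 start-instances --instance-ids i-1234567890abcdef0     # start: the @reboot crontab launches the bot
aws ec2 stop-instances --instance-ids i-1234567890abcdef0      # stop: the bot goes down with the instance
aws ec2 reboot-instances --instance-ids i-1234567890abcdef0    # restart: the bot relaunches via @reboot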

Related

Bash script to send commands to remote ssh session

Is it possible to write a bash script that opens a remote node (i.e. through ssh and/or slurm) and starts an interactive session there after running some commands? I'm trying to automate the process of starting a jupyter session on a remote computing cluster, which currently looks like this:
ssh into a login node of the remote cluster, using a specific port
use slurm to request an interactive session on one of the compute nodes, including x11 forwarding through that port
change directory to the working directory
activate conda environment for my project
open jupyter from the command line, specifying the port I used previously
It's a lengthy process, and if I get something wrong at any step I usually have to go back and start from the beginning because the port I'm using is still tied up. So I'm looking for a way I can run a single script (possibly with arguments) from my local machine that jumps through all the hoops to get me a working jupyter session with a link I can paste to my browser.
As @Diego Torres Milano said, you would need to write a script locally that could do the interactive part, then invoke that via a remote script.
But since your process is interactive, this gets tricky. Luckily, Linux has a tool called expect, easily installed via a package manager, which lets you script logic around multi-step interactive sessions.
So you would write an expect script which would "expect" certain prompts; it can then read those prompts and use conditional logic to respond to them appropriately.
Once you have this written and it works locally, it's just a matter of executing it on the remote server via ssh:
ssh user@12.34.56.78 /path/to/script.ex
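As a rough sketch of such a script (the hostname, prompts, and slurm/jupyter commands are placeholders to adapt to your cluster; key-based SSH auth is assumed, and if you run it remotely as above you may need ssh -t to get a tty for the interactive part):

#!/usr/bin/expect -f
# sketch only: drive the interactive steps, then hand the session to the user
set timeout 120
spawn ssh -L 8888:localhost:8888 user@login.cluster.example
expect "$ "
send "srun --pty --x11 bash\r"
expect "$ "
send "cd ~/project && conda activate myenv\r"
expect "$ "
send "jupyter notebook --no-browser --port=8888\r"
interact

The final interact hands control back to you, so you can copy the notebook link that jupyter prints.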

Script executed in EC2 User Data with nohup doesn't run until manually SSHing in

I have the following command in my EC2 User Data:
setsid nohup /root/go/src/prometheus-to-cloudwatch-master/dist/bin/prometheus-to-cloudwatch --cloudwatch_namespace TestBox/Prometheus --cloudwatch_region eu-west-1 --cloudwatch_publish_timeout 5 --prometheus_scrape_interval 30 --prometheus_scrape_url MyScrapeUrl &
This script will take metrics published at the prometheus_scrape_url and push them to CloudWatch. By default this script runs in the foreground and every 30 seconds will output the number of metrics pushed to CloudWatch. I have added setsid nohup to run the script in the background in a new session.
The issue here is that the script doesn't seem to run until I SSH into the box following initialisation and su to the root user (it's like it's queued to be run when I next SSH as the root user).
My expected behaviour is that the script runs as part of the user data and I should never need to SSH into the box.
The script in question is: https://github.com/cloudposse/prometheus-to-cloudwatch

How can I run a shell script when booting up?

I am configuring an app at work which is hosted on an Amazon Web Services server.
To get the app running you have to run a shell script called "Start.sh".
I want this to be done automatically after booting up the server.
I have already tried the following bash in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use-case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use the cloud-init once-per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
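For example, from inside the instance (paths assume a standard cloud-init installation; start-app.sh is an arbitrary name):

sudo tee /var/lib/cloud/scripts/per-boot/start-app.sh > /dev/null <<'EOF'
#!/bin/bash
# re-run the app's startup script on every boot
cd /home/ec2-user/app_name/ || exit 1
sh Start.sh
EOF
sudo chmod +x /var/lib/cloud/scripts/per-boot/start-app.sh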

How to keep logstash running even when I logout from the remote server

I am connecting to a remote host via SSH and running Logstash with the following command:
$ ./logstash -f first-pipeline.conf
However, after I log out from the server, Logstash stops running. How can I keep it running even after I log out? Thanks.
Another approach is to use the screen command, which can be very useful for this.
First you open your SSH session, then type screen at the prompt. That opens a new session in which you can run your logstash command.
When it runs, you simply press Ctrl+a then d to detach yourself from that screen, and you can safely log out.
Whenever you log back in over SSH, you enter screen -r and you will get back into your previous session where logstash was started.
You can create as many "screens" as you wish to start many different processes at different times.
Also see this comparison between using nohup and screen
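Put together, the workflow looks like this (the session name is arbitrary):

screen -S logstash                   # open a named screen session
./logstash -f first-pipeline.conf    # run logstash inside it
# press Ctrl+a then d to detach; logstash keeps running after you log out
screen -r logstash                   # reattach on your next login
screen -ls                           # list sessions if you forget the name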
Just run it as an agent
$ logstash agent -f ~/logstash/pipeline.conf

Terminate Amazon EC2 instance using shutdown

I can terminate Amazon EC2 instances using the API command ec2-terminate-instances, but I'm trying to find out how to do this while logged onto the EC2 instances themselves.
I've tried shutdown -h now but this only "stops" the instance, without fully terminating it.
Is there any way to do this?
There is an option that you can set at instance creation that will allow the instance to terminate on shutdown.
If you're using the ec2 command line tools, add the option: --instance-initiated-shutdown-behavior terminate
After creating an instance with that option, issuing the shutdown -h now command from within the instance will terminate it instead of stopping it.
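The modern AWS CLI uses the same flag name, and you can also change the behaviour on an existing instance (the IDs below are placeholders):

aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --instance-initiated-shutdown-behavior terminate
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --instance-initiated-shutdown-behavior terminate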
