How can I run a shell script when booting up? - shell

I am configuring an app at work which runs on an Amazon Web Services server.
To get the app running you have to run a shell script called Start.sh.
I want this to happen automatically after the server boots.
I have already tried the following bash script in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help

Scripts provided through User Data are only executed the first time the instance is started. (Officially, they are executed once per instance ID.) This is done because the normal use case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use the cloud-init once-per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
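For example, on Amazon Linux the per-boot directory is typically /var/lib/cloud/scripts/per-boot/ (check the exact path on your AMI). A minimal sketch of a wrapper placed there, reusing the commands from the question, might look like this:
#!/bin/bash
# /var/lib/cloud/scripts/per-boot/start-app.sh  (path assumed; adjust for your AMI)
# cloud-init runs the executable files in this directory on every boot.
cd /home/ec2-user/app_name/ || exit 1
sh Start.sh
Remember to make the file executable (chmod +x start-app.sh) so cloud-init will pick it up.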

Related

Python Script Not Starting Bash Script when running as service

I have a Python script that gets started automatically as a service (activated with systemd). In this Python script, I call a bash script using subprocess.call(script_file, shell=True).
When I call the Python script manually ($ python my_python_script.py), everything works perfectly. However, the automatically started program does not execute the bash script (the Python script itself does run; I checked this by making it edit a text file, which it indeed does).
I (think I) gave everyone read-write permissions to the bash scripts. Does anyone have ideas as to what I'm doing wrong?
Addendum: I want to write a small script that sends me my public IP address via Telegram. The service file looks like this:
[Unit]
Description=IPsender
After=networking.service
[Service]
Type=simple
User=root
WorkingDirectory=/home/pi/projects/tg_bot
ExecStart=/home/pi/miniconda3/bin/python /home/pi/projects/tg_bot/ip_sender_tg.py
Restart=always
[Install]
WantedBy=multi-user.target
Protawn, welcome to Unix & Linux Stack Exchange.
Why scripts work differently under systemd is a common question. Check out this answer to the general question elsewhere on the site.
Without the source code for your Python and Bash scripts it's hard to guess which difference you have encountered.
My personal guess is that your bash script calls some other binaries without full paths, and those binaries are found in your shell's $PATH but not in systemd's default PATH.
Add set -x to the top of your bash script so that all actions are logged to standard output, which is captured in the systemd journal. Then, after it fails, use journalctl -u your-service-name to view the logs for your service and find the last command that bash executed successfully. Also consider using set -e in the bash script to have it stop at the first error.
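As a sketch of that approach (the script contents below are illustrative assumptions, not taken from the question):
#!/bin/bash
# top of the bash script that the Python service calls
set -x   # trace every command; systemd captures this output in the journal
set -e   # stop at the first command that fails
# use absolute paths to binaries, since systemd's default PATH is much shorter
# than the one in your interactive shell
/usr/bin/curl -s https://ifconfig.me > /tmp/public_ip.txt
After a failed run, journalctl -u your-service-name shows the trace up to the last command that executed successfully.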
Despite the two "off-topic" close votes on this question, why things work differently under systemd is on topic for this Stack Exchange site.

Running a shell script with Windows Task Scheduler

I currently have a simple shell script that I created for a Linux machine to be run using cron, but now I want to be able to run the file using Windows Task Scheduler. I have tried to get it to work using cron for Cygwin, but even after running cron-config successfully and ensuring that the shell script can be executed successfully, for some reason the cron task simply wasn't executing. So I decided to give in and use the Windows Task Scheduler. In order to do this, I looked at the following posts about the issue:
Cygwin .sh file run as Windows Task Scheduler
http://www.davidjnice.com/cygwin_scheduled_tasks.html
In my case, the entry in the "Actions" tab of the new task looks like this:
program/script: c:\cygwin64\bin\bash.exe
arguments: -l -c "/cygdrive/c/users/paul/bitcoinbot/download_all_data.sh >> /cygdrive/c/users/paul/bitcoinbot/logfile.log 2>&1"
start in: c:\cygwin64\bin
Notice that I redirected the output of the shell script to a log file, so that I should be able to see there whether the program ran. Other than that, I simply edited the "Triggers" tab to run the task daily, and set the time to a couple of minutes in the future to see whether it ran successfully.
Alas, when I look at the detailed event history for the task, nothing changes when the trigger time passes. And when I manually "run" the task, the event history seems to add a few different events, but the task is completed within seconds, whereas this task should take over an hour (and it does when the shell script is executed directly from the terminal). And when I look for the log file that should have been created, there is nothing.
Does anyone have any idea what might be the issue here? How can I get my task to run properly at the trigger time, and how can I make sure it does so?
Best,
Paul
EDIT:
here are the pictures showing event history, as per Ken White's request.
Please ignore the fact that it says there are 24 events. These are from multiple separate runs of the task. The events shown here are a complete list of the events triggered by a single run.
EDIT 2:
Regarding my attempts to get cron to work, I have run into the following problem when I try to start the cron service using cygrunsrv. First of all, I installed cron as a service by typing
cygrunsrv -I cron -p /usr/sbin/cron.exe -a -D
Now when I type
$ cygrunsrv -Q cron
I get:
Service: cron
Current State: stopped
Command: /usr/bin/cron.exe
Now, I tried to start the cron service by typing
cygrunsrv -S cron
Cygrunsrv: Error starting a service: QueryServiceStatus: Win32 error 1062:
The service has not been started.
Does anyone have any idea what this error means? I tried googling it, but couldn't find any answers.

How to write a shell script to restart tomcat7 automatically using cron?

I am very new to shell scripting. In my company I need to restart all application servers in production at or before 12:30 pm every day. Can I automate this process with a shell script? Also, will it affect any other application running on the server?
Thanks in advance
At last I found the answer: restart Tomcat automatically by writing a simple script and setting up that script in cron with whatever schedule I need to restart the application server.
Script
#! /bin/sh
SERVICE=/etc/init.d/tomcat7
$SERVICE restart
Cron job
30 12 * * * /root/restarttomacat.sh
It won't affect any other applications.
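For completeness, the script has to be executable and the schedule has to be installed in root's crontab; a minimal sketch, assuming the paths from the answer above:
# make the restart script executable
chmod +x /root/restarttomacat.sh
# append the 12:30 pm daily schedule to root's crontab
(crontab -l 2>/dev/null; echo "30 12 * * * /root/restarttomacat.sh") | crontab -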

How can I run a test on 20+ sites from a single console?

I'd like to run a test script on dozens of embedded Linux units; in manufacturing the authentication credentials are all the same.
The tests are about an hour, but I'd like each unit to loop continuously (over the weekend say) and report the current iteration of the test (on a per unit basis).
I'm thinking expect might be the way to go (it would certainly help with ssh login), but the online documentation is ... uh ... a bit too distributed for what seems to be a simple exercise.
I'm stuck trying to determine how to spawn my embedded tests in parallel. In bash I'd use the & operator to put the process in the background, but then entering the authentication credentials is a challenge.
Should I use expect or stick with bash scripting?
What I did:
Using an expect script, I placed an SSH authentication file on the DUTs. The DUTs have only a RAM file system to play with, so this has to be done before the rest of the bash script runs. Then a simple bash for loop issues an ssh command to run the tests and puts each session in the background. Comme ça:
for i in <IP devices to test>; do
    ssh user@$i "echo \"IP Address: $i :\"; test-script" &
done
Voila!
Set up SSH public-key authentication (example: http://www.petefreitag.com/item/532.cfm) with a blank key passphrase; then you can use ssh to run these scripts without entering any credentials, and you can write bash scripts to execute them without user intervention.
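A minimal sketch of that setup (the user name and device address are placeholders):
# generate a key pair with an empty passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# install the public key on a device under test
ssh-copy-id user@192.168.1.10
# from now on, commands run without a password prompt
ssh user@192.168.1.10 "test-script"
Note that these DUTs have a RAM-backed file system (see above), so the key may need to be re-installed after each reboot.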

How to automate startup of a web app with several independent processes?

I run the wesabe web app locally.
Each time, I start it by opening separate shells to start the MySQL server, the Java backend, and the Rails frontend.
My question is, how could you automate this with a shell script or rake task?
I tried just listing the commands sequentially in a shell script (see below) but the later commands never run because each app server creates its own process that never 'returns' (until you quit the server).
I've looked into sub-shells and parallel rake tasks, but that's where I got stuck.
echo 'starting mysql'
mysqld_safe
echo 'starting pfc'
cd ~/wesabe/pfc
rails server -p 3001
echo 'starting brcm'
cd ~/wesabe/brcm-accounts-api
script/server
echo 'ok, go!'
open http://localhost:3001
If you don't mind the output being mixed together, put a "&" at the end of each line where you start an application to make it run in the background.
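Applied to the script above, that could look roughly like this (the subshells and the short sleep are assumptions; adjust to taste):
echo 'starting mysql'
mysqld_safe &
echo 'starting pfc'
(cd ~/wesabe/pfc && rails server -p 3001) &
echo 'starting brcm'
(cd ~/wesabe/brcm-accounts-api && script/server) &
sleep 5   # give the servers a moment to come up before opening the browser
echo 'ok, go!'
open http://localhost:3001
The subshells keep each cd local to its own server, so the directory changes don't affect the rest of the script.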
