Python Script Not Starting Bash Script when running as service - bash

I have a Python script that gets started automatically as a service (activated with systemd). In this Python script, I call a bash script using subprocess.call(script_file, shell=True).
When I call the Python script manually ($ python my_python_script.py), everything works perfectly. However, the automatically started program does not execute the bash script (the Python script itself does run; I checked this by making it edit a text file, which it indeed does).
I (think I) gave everyone read-write permissions to the bash scripts. Does anyone have ideas as to what I'm doing wrong?
Addendum: I want to write a small script that sends me my public IP address via telegram. The service file looks like this:
[Unit]
Description=IPsender
After=networking.service
[Service]
Type=simple
User=root
WorkingDirectory=/home/pi/projects/tg_bot
ExecStart=/home/pi/miniconda3/bin/python /home/pi/projects/tg_bot/ip_sender_tg.py
Restart=always
[Install]
WantedBy=multi-user.target

Protawn, welcome to the Unix and Linux Stack Exchange.
Why scripts work differently under systemd is a common question. Check out this answer to the general question elsewhere on the site.
Without the source code for your Python and bash scripts it's hard to guess which difference you have encountered.
My personal guess is that your bash script calls some other binaries without full paths, and those paths are found in your shell's $PATH but not in the default systemd path.
Add set -x to the top of your bash script so that all actions are logged to standard output, which is captured in the systemd journal. Then, after it fails, use journalctl -u your-service-name to view your service's logs and find the last command that bash executed successfully. Also consider using set -e in the bash script to have it stop at the first error.
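As an illustration, here is a minimal sketch of what the top of such a bash script could look like (the curl call and the exact PATH value are assumptions, since the original script isn't shown):
#!/bin/bash
set -ex                                   # -x traces every command to stdout; -e stops at the first error
export PATH=/usr/local/bin:/usr/bin:/bin  # spell out PATH instead of relying on the login shell's value
/usr/bin/curl -s https://api.ipify.org    # call external binaries by full path so systemd's environment finds them
With the trace in place, journalctl -u your-service-name shows exactly where the script diverges from an interactive run.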
Despite the two "off-topic" "close" votes on this topic, why things work differently under systemd is on topic for this Stack Exchange site.

Related

Starting Django server from bash script doesn't persist

So this seems to be a relatively common occurrence (lots of similar questions have been asked, like here, here and here) with some variations. I have execute permissions for the script and have tried all the solutions mentioned in those questions, but it's still not working. When I try to run the following command to start my Django server, the bash script opens and closes. How do I get it to persist after starting the server?
startserver.sh:
#!/bin/bash
python3 manage.py runserver 7888
So the command runs when I type it directly into cmd (in that directory), but when I try to run the bash file it just flashes and disappears. I have tried running it as
.\startserver.sh
. startserver.sh
sh startserver.sh
start startserver.sh
but each does the same. Ideally I would like to have it such that you can double-click on the startserver.sh file and it runs persistently. Anyone have any ideas why this is the case? I feel like it's something very small I'm missing.
You need to add nohup to your script.
Change
#!/bin/bash
python3 manage.py runserver 7888
to
#!/bin/bash
START=`pwd`
nohup python3 manage.py runserver 7888 >"${START}/startserver.LOG" 2>"${START}/startserver.ERR"
You don't want to rely on the longevity or continued existence of nohup.out, because that default destination can easily be clobbered by other programs.
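If the goal is also for the terminal (or the double-clicked window) to close while the server keeps running, one possible variant backgrounds the command as well; this is a sketch, not part of the original answer:
#!/bin/bash
START=$(pwd)
# nohup ignores the terminal's hangup signal; the trailing & puts the server in the background
nohup python3 manage.py runserver 7888 >"${START}/startserver.LOG" 2>"${START}/startserver.ERR" &
echo "$!" >"${START}/startserver.PID"   # record the PID so the server can be stopped later with kill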

How do I make a Bash script run continously, also end it when I want to?

I have a Bash script that creates a private Geth node named "startnode.sh".
I want to be able to run this script on a server and exit that server without any problem.
You are looking for nohup(1).
It is a utility which lets you detach a process from your current terminal session.
Here's a link to the FreeBSD nohup(1) manual page.
Alternatively, set up a systemd .service file and have it run as a daemon:
https://wiki.archlinux.org/index.php/Systemd
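A minimal sketch of such a unit (the file name startnode.service and the script path are assumptions):
[Unit]
Description=Private Geth node
After=network.target
[Service]
ExecStart=/home/user/startnode.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Saved as /etc/systemd/system/startnode.service, it can be started with systemctl start startnode and made to survive reboots with systemctl enable startnode.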

Fixing a systemd service 203/EXEC failure (no such file or directory)

I'm trying to set up a simple systemd timer to run a bash script every day at midnight.
systemctl --user status backup.service fails and logs the following:
backup.service: Failed at step EXEC spawning /home/user/.scripts/backup.sh: No such file or directory.
backup.service: Main process exited, code=exited, status=203/EXEC
Failed to start backup.
backup.service: Unit entered failed state.
backup.service: Failed with result 'exit-code'.
I'm lost, since the files and directories exist. The script is executable and, just to check, I've even set permissions to 777.
Some background:
The backup.timer and backup.service unit files are located in /home/user/.config/systemd/user.
backup.timer is loaded and active, and currently waiting for midnight.
Here's what it looks like:
[Unit]
Description=Runs backup at 0000
[Timer]
OnCalendar=daily
Unit=backup.service
[Install]
WantedBy=multi-user.target
Here's backup.service:
[Unit]
Description=backup
[Service]
Type=oneshot
ExecStart=/home/user/.scripts/backup.sh
[Install]
WantedBy=multi-user.target
And lastly, this is a paraphrase of backup.sh:
#!/usr/env/bin bash
rsync -a --delete --quiet /home/user/directory/ /mnt/drive/directory-backup/
The script runs fine if I execute it myself.
Not sure if it matters, but I use fish as my shell (started from .bashrc).
I'm happy to post the full script if that's helpful.
I think I found the answer:
In the .service file, I needed to add /bin/bash before the path to the script.
For example, for backup.service:
ExecStart=/bin/bash /home/user/.scripts/backup.sh
As opposed to:
ExecStart=/home/user/.scripts/backup.sh
I'm not sure why. Perhaps fish. On the other hand, I have another script running for my email, and that service file seems to run fine without /bin/bash. It does use default.target instead of multi-user.target, though.
Most of the tutorials I came across don't prepend /bin/bash, but I then saw this SO answer which had it, and figured it was worth a try.
The service file executes the script, and the timer is listed in systemctl --user list-timers, so hopefully this will work.
Update: I can confirm that everything is working now.
To simplify, make sure to add a hashbang (shebang) line to the top of your ExecStart script, e.g.:
#!/bin/bash
python -u alwayson.py
When this happened to me it was because my script had DOS line endings, which always messes up the shebang line at the top of the script. I changed it to Unix line endings and it worked.
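A quick way to check for and strip the carriage returns (the file name is taken from the question above; dos2unix may not be installed, while the GNU sed form needs nothing extra):
file backup.sh              # reports "... with CRLF line terminators" if the endings are DOS-style
sed -i 's/\r$//' backup.sh  # strip the trailing CR from every line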
I ran across a Main process exited, code=exited, status=203/EXEC today as well and my bug was that I forgot to add the executable bit to the file.
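In that case the fix is simply to add the bit to the script from the question, e.g.:
chmod +x /home/user/.scripts/backup.sh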
If that is a copy/paste from your script, you've permuted this line:
#!/usr/env/bin bash
There's no #!/usr/env/bin, you meant #!/usr/bin/env.
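So the corrected first line of backup.sh would be:
#!/usr/bin/env bash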
Try running:
systemctl daemon-reload
and then run again:
service <yourservice> status
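Since the question is about a user unit, the --user equivalents would be:
systemctl --user daemon-reload
systemctl --user status backup.service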
I faced a similar issue and fixed it by changing the permissions and adding the executable bit:
chmod +x /etc/systemd/system/<service-filename>
This worked for me.
I actually used the answer from How do I run a node.js app as a background service? combined with what dwrz said above. In my case, I was creating a Discord bot that needed to be able to run when I was not around.
With this service in place, I initially got the same error that the initial poster did, which brought me here. I was missing the #!/usr/bin/env node at the top of my executed node.js script.
Since then, no problems, although I intend to see what else can be extended to the service itself.

How to run shell script on VM indefinitely?

I have a script on a VM that I want to keep running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing so? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script to execute, and even though, according to my host's support, the cron job runs, the script doesn't seem to do anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh is:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
When I execute it via SSH:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. From the cron job it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node, which cannot be found. How could I fix this?
EDIT2:
Accepted answer at superuser.com. Thank you all for help
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For command lines in a crontab it's not required to specify the kind of shell (or, e.g., the path to perl).
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause your script to behave differently when started interactively and when started by the cron daemon is the environment, first of all the PATH variable. Therefore check whether your script can run in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by creating a temporary cron job that executes the env command and writes its output to a file.
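For example, a temporary crontab entry along these lines (the output path is just an illustration):
* * * * * env > /tmp/cron-env.log 2>&1
After a minute, /tmp/cron-env.log shows the PATH and other variables your real cron job will see; the entry can then be removed.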
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its content to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this line is the first of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
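Either of these cron command lines would follow that advice (shown without the schedule fields, as in the question):
/bin/bash /home1/myusername/public_html/somefolder/cronjob2.sh
or, relying on the script's own #!/bin/bash shebang (the script must be executable):
/home1/myusername/public_html/somefolder/cronjob2.sh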
