Start jobs in background from sh file in gitbash - bash

I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in Git Bash; it contains the following commands:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands directly in Git Bash, the jobs run in the background and can be seen (and killed) with the jobs and kill commands. However, when I run them through abc.sh, the processes run in the background but the Git Bash instance disowns them, so I can no longer see them with jobs.
How can I run them through abc.sh and still see them in the jobs list?
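One way, sketched below: have abc.sh only start the jobs, then source it instead of executing it. Sourcing runs the commands in the current shell, so the jobs land in that shell's job table. The `sleep` commands and /tmp paths are stand-ins for the real commands and directories.

```shell
#!/bin/bash
# Sketch: sourcing a script keeps its background jobs in the current
# shell's job table; executing it as a child process does not.

cat > /tmp/abc.sh <<'EOF'
cd /tmp
sleep 30 &>> output.log &
sleep 40 &>> output.log &
EOF

# Executing `./abc.sh` would start the jobs in a child shell that then
# exits, which is why `jobs` shows nothing. Sourcing keeps them here:
source /tmp/abc.sh
jobs          # lists both background jobs
kill %1 %2    # job specs work as usual
```

The same applies with the dot builtin: `. ./abc.sh` is equivalent to `source ./abc.sh`.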

Related

Could Git Bash run daemon process periodically?

I have a script, myscript.sh, that acts as a performance monitor on a Windows Server machine. I'm using Git Bash to run it, but the script executes only once per invocation. Is there a command I can use to run it as a daemon, or to run it periodically at a given time interval?
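One common approach, sketched here with an assumed 60-second interval: wrap the one-shot script in an infinite loop, then start that loop detached with nohup so it keeps running after the Git Bash window is closed. The file names and interval are placeholders.

```shell
#!/bin/bash
# Wrap the one-shot monitor in a loop; names and interval are placeholders.
cat > /tmp/monitor-loop.sh <<'EOF'
#!/bin/bash
while true; do
    ./myscript.sh    # run the monitor once
    sleep 60         # then wait for the next interval
done
EOF
chmod +x /tmp/monitor-loop.sh

# Start it detached so it survives closing the window:
# nohup /tmp/monitor-loop.sh >/tmp/monitor.log 2>&1 &
```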

jobs command result is empty when process is run through script

I need to run rsync in the background from a shell script, but once it has started I need to monitor the status of those jobs from the shell.
The jobs command returns nothing when run in the shell after the script exits, yet ps -ef | grep rsync shows that rsync is still running.
I could check the status from within the script, but I need to run the script multiple times, each with a different ip.txt file to push, so I can't keep the script running just to check job status.
Here is the script:
for i in `cat $ip.txt`; do
    rsync -avzh $directory/ user@"$i":/cygdrive/c/test/$directory > /dev/null 2>&1 &
done
jobs    # shows the job status while still inside the script
exit 1
Output of jobs command is empty after the shell script exits:
root@host001:~# jobs
root@host001:~#
What could be the reason, and how can I get the status of the jobs while rsync is running in the background? I can't find anything online about this.
Since your shell (the one from which you run jobs) did not start rsync, it knows nothing about it. There are different ways to fix that, but they all boil down to starting the background processes from your shell. For example, you can run your script with the source BASH builtin instead of executing it as a separate process. You would of course have to remove the exit 1 at the end, because otherwise it would exit your shell.
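If sourcing is awkward here (the script is invoked several times with different ip.txt files), another option is to monitor the transfers by process name rather than via the job table: jobs only knows about jobs started by the current shell, but pgrep searches all processes. A sketch, with `sleep` standing in for a detached rsync:

```shell
#!/bin/bash
# `jobs` only sees jobs this shell started; `pgrep` finds processes by
# name from any shell. `sleep` stands in for rsync here.

sleep 2 &          # pretend this was started earlier by the push script

pgrep -a sleep     # list matching processes (use `pgrep -a rsync` for real)
pgrep -c sleep     # just the count, convenient for polling

wait               # or poll `pgrep -c rsync` until it reaches 0
```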

how to keep jobs with nohup in lxd

I made a Python script that needs to run in the background.
The script lives in an LXD image which is running (checked with 'lxc list').
I entered the image and tried to keep the script running in the background:
local> lxc exec image-name -- bash
image-root> nohup python test.py &
and it worked at that point.
image-root> jobs
--printed test.py jobs
BUT when I exited the image and re-entered it, all the jobs were gone.
image-root> exit (or ctrl+d)
root> lxc exec image-name -- bash
image-root> jobs
--printed nothing, and the script is not running in the background. WHY?
Are you sure the script is not running? Since you backgrounded it and ran it under nohup, and then returned in a new bash instance, there is no reason you should see it with the jobs command: jobs only lists jobs started by the current shell. In a quick test I had no problem leaving the job running, returning, and seeing the result still being produced.
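The point can be reproduced outside LXD: a nohupped background process survives its parent shell, but a freshly started shell has an empty job table, so ps or pgrep is the way to find it. A sketch, with `sleep` standing in for test.py:

```shell
#!/bin/bash
# The parent shell below exits immediately, orphaning its nohupped job.
bash -c 'nohup sleep 3 >/dev/null 2>&1 &'

jobs               # prints nothing: this shell did not start the job
pgrep -a sleep     # but the process is still alive
                   # (in the container: pgrep -af test.py)
```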

How to run shell script on VM indefinitely?

I have a VM with a script that I want running indefinitely. The server is always up, but I want the script to keep running after I log out. How would I go about doing that? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.

shell script adjustment - not working fine with cron

I used this bash script to check whether a service is running: if it is, the script exits; otherwise it runs another script that executes some commands and then exits.
My issue is that when I run the script manually it works fine, but when I run it from cron it does not execute correctly. Here is my script:
#!/bin/sh
SERVICE='loop2.sh'
if ps ax | grep -v grep | grep "$SERVICE" > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    /home//www/loop2.sh
fi
What adjustment would make my script work correctly under cron?
You're not being very specific. What error are you seeing?
Note that processes run under cron with a cut-down environment. In particular environment variables such as PATH will be much reduced from your interactive shell.
log your script's stdout/stderr, e.g. myscript >/tmp/script.log 2>&1 (note the order: redirect stdout to the file first, then duplicate stderr onto it)
check your environment is as expected via the env command in your script
does this script really do what you want, and interact with cron how you wish? If your service isn't running you spawn a new one, but I'd expect you to put it in the background, making it a daemon rather than a (grand)child of the cron process
is your script executable by whichever user it runs as under cron?
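Put together, the checklist above might look like this in the script itself. The /tmp paths and the dummy service are illustrative stand-ins, not the asker's real layout:

```shell
#!/bin/sh
# Cron-hardened sketch: explicit PATH, output logged, and the respawned
# service put in the background so cron is not blocked.

# Dummy stand-in for the real loop2.sh (demonstration only):
cat > /tmp/loop2.sh <<'EOF'
#!/bin/sh
sleep 2    # stand-in for the real service loop
EOF
chmod +x /tmp/loop2.sh

PATH=/usr/bin:/bin    # cron's environment is minimal; set PATH explicitly
SERVICE='loop2.sh'

if ps ax | grep -v grep | grep "$SERVICE" > /dev/null
then
    echo "$SERVICE service running, everything is fine"
else
    /tmp/loop2.sh >/tmp/loop2.log 2>&1 &    # daemonize, don't block cron
fi
```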
