Can you suggest how to kill running processes that were started with nohup ./filename.sh?
There are a large number of these running, and I want to stop them all.
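A minimal sketch of one common approach, assuming the processes were started as nohup ./filename.sh & and that the script name is distinctive enough to match on:

# list the matching processes first to confirm what would be killed
pgrep -af 'filename.sh'
# then terminate every process whose command line contains the script name
pkill -f 'filename.sh'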
I have this myscript.sh acting as a performance monitor on a Windows Server. I'm using Git Bash to run the script, but the problem is that the script executes only once after I enter the command to run it. Is there a command I can use to run it as a daemon, or to let the script run periodically at a time interval of my choosing?
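A hedged sketch of one way to do this, assuming a 60-second interval and a hypothetical monitor.log for output; the loop is backgrounded with nohup so it keeps running after the Git Bash window is closed:

# re-run myscript.sh every 60 seconds until the loop is killed
nohup bash -c 'while true; do ./myscript.sh; sleep 60; done' > monitor.log 2>&1 &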
I need to run rsync in the background from a shell script, but once it has started, I need to monitor the status of those jobs from the shell.
The jobs command returns nothing when run in the shell after the script exits, while ps -ef | grep rsync shows that rsync is still running.
I could check the status from within the script, but I need to run the script multiple times so that each run pushes using a different ip.txt file, so I can't keep the script running just to check job status.
Here is the script:
# start one background rsync per host listed in the IP file
for i in $(cat "$ip.txt"); do
  rsync -avzh "$directory"/ user@"$i":/cygdrive/c/test/"$directory" > /dev/null 2>&1 &
done
jobs   # shows the job status while still inside the shell script
exit 1
Output of jobs command is empty after the shell script exits:
root@host001:~# jobs
root@host001:~#
What could be the reason, and how can I get the status of the jobs while rsync is running in the background? I couldn't find anything online about this.
Since your shell (the one from which you run jobs) did not start rsync, it doesn't know anything about it. There are different ways to fix that, but it boils down to starting the background processes from your shell. For example, you can run the script with the Bash source builtin instead of executing it in a separate process. Of course, you then have to remove the exit 1 at the end, because otherwise it exits your shell.
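A minimal sketch of that approach, assuming the script above is saved under a hypothetical name push.sh:

# run the script in the current shell so the rsync jobs land in this shell's job table
source ./push.sh    # or equivalently: . ./push.sh
jobs                # the backgrounded rsync processes are now visible here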
I need to start a couple of processes locally in multiple command-prompt windows. To keep it simple, I have written a shell script, say abc.sh, to run in git-bash, which contains the commands below:
cd "<target_dir1>"
<my_command1> &>> output.log &
cd "<target_dir2>"
<my_command2> &>> output.log &
When I run these commands directly in git-bash, the jobs run in the background and can be seen with jobs and managed with kill; however, when I run them through abc.sh, the processes still run in the background, but the git-bash instance disowns them and I can no longer see them with jobs.
How can I run them through the abc.sh file and still see them in the jobs list?
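A hedged sketch, assuming abc.sh is in the current directory: running it with source keeps the background processes in the current shell's job table.

# dot-source abc.sh so its background processes become jobs of this git-bash shell
source ./abc.sh
jobs    # the two backgrounded commands should now be listed here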
I'm having a bit of difficulty developing bash-based deployment scripts for a pipeline I want to run on an OpenStack VM. There are 4 scripts in total:
head_node.sh - launches the VM and attaches the appropriate disk storage to it. Once that's completed, it runs scripts 2 and 3 sequentially by passing a command over ssh to the VM.
install.sh - VM-side, installs all of the appropriate software needed by the pipeline.
run.sh - VM-side, mounts storage on the VM and downloads raw data from object storage. It then runs the final script, but does so by detaching the process from the shell created by ssh using nohup ./pipeline.sh &. The reason I want to detach from the shell is that the next portion is largely just compute and may take days to finish. Therefore, the user shouldn't have to keep the shell open that long and it should just run in the background.
pipeline.sh - VM-side, essentially a for loop that iterates over a list of files and sequentially runs commands on them and on intermediate files. The results are analysed and then staged back to object storage. The VM then essentially tells the head node to kill it.
Now I'm running into a problem with nohup. If I launch the pipeline.sh script normally (i.e. without nohup) and keep it attached to that shell, everything runs smoothly. However, if I detach the script, it errors out after the first command in the first iteration of the for loop. Am I thinking about this the wrong way? What's the correct way to do this?
So this is how it looks:
$./head_node.sh
head_node.sh
#!/bin/bash
# ... launch VM, attach storage, etc.
ssh $vm_ip './install.sh'
ssh $vm_ip './run.sh'
exit 0
install.sh - omitted - not important for the problem
run.sh
#!/bin/bash
# ... mount storage, download the appropriate files
nohup ./pipeline.sh > log &
exit 0
pipeline.sh
#!/bin/bash
for f in $(find . -name '*ext')
do
process1 $f
process2 $f
...
done
# ... stage files to object storage, unmount disks, additional cleanups
ssh $head_node 'nova delete $vm_hash'
exit 0
Since I'm invoking the run.sh script from an ssh session, subprocesses launched from that script (namely pipeline.sh) do not properly detach from the shell and error out when the ssh session that invoked run.sh terminates. The pipeline.sh script can be properly detached by calling it from the head node instead, e.g., nohup ssh $vm_ip './pipeline.sh' &; this keeps the session alive until the end of the pipeline.
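A minimal sketch of how head_node.sh could look with that change, assuming run.sh no longer launches pipeline.sh itself and logging to a hypothetical pipeline.log:

#!/bin/bash
# ... launch VM, attach storage, etc.
ssh $vm_ip './install.sh'
# run.sh now only mounts storage and downloads the raw data (assumed change)
ssh $vm_ip './run.sh'
# launch the long-running pipeline detached from the head node's shell
nohup ssh $vm_ip './pipeline.sh' > pipeline.log 2>&1 &
exit 0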
I started rar archiving a huge folder, forgetting to split it into multiple rar archives.
How can I stop the process?
Log in again, use ps -a to find the relevant process IDs, then kill them with kill.
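For example, a hedged sketch assuming the archiver shows up as rar in the process list (1234 is a placeholder PID):

# find the rar process and its PID
ps -a | grep rar
# terminate it (replace 1234 with the PID shown above)
kill 1234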