I'm an intern porting a Linux Bash script to Python 2.6. This script basically powers an intranet dashboard that displays data about servers. It updates every minute, and is constantly running, pretty much 24/7.
I would like some help converting the below Bash line to Python:
(while sleep 30; do custom_cmd > tmp.txt; cp tmp.txt index.html; rm tmp.txt; done) &
I'm not sure how to convert the '&', which I know turns the while loop into a background process. The while sleep 30 loop runs indefinitely (for as long as the session is active), doing its work every 30 seconds and sleeping in between. I have already ported custom_cmd (which generates the HTML for the dashboard) to Python 2.6.
The Bash script is launched with "nohup", which I believe means it keeps running even after the user logs out of the Linux machine.
That being said, how could I convert the above Bash line to Python, so that it runs as a background process, forever? Thank you very much.
The bash loop can be written in Python like this:
from subprocess import call
from tempfile import SpooledTemporaryFile
from time import sleep

def myfunc():
    while True:
        sleep(30)
        # write to a temporary file first, so index.html is only
        # replaced once custom_cmd has succeeded
        handle = SpooledTemporaryFile()
        if call(['custom_cmd'], stdout=handle) == 0:
            handle.seek(0)
            open('index.html', 'w').write(handle.read())
        handle.close()
You should then daemonize the function, for example with the daemonize package or the classic Unix double-fork recipe.
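If you would rather avoid third-party modules, here is a minimal sketch of the classic double-fork recipe (standard library only, works on Python 2.6; error handling, umask, and pidfile housekeeping are omitted):

import os
import sys

def daemonize():
    if os.fork() > 0:      # first fork: let the parent return to the shell
        sys.exit(0)
    os.setsid()            # become session leader, detach from the terminal
    if os.fork() > 0:      # second fork: can never reacquire a terminal
        sys.exit(0)
    os.chdir('/')          # don't keep the launch directory busy
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # silence stdin, stdout, stderr
        os.dup2(devnull, fd)

daemonize()
myfunc()   # the loop now survives logout, like the original nohup setup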
I have a script, say parallelise.sh, whose contents are 10 different Python calls, shown below:
python3.8 script1.py
python3.8 script2.py
.
.
.
python3.8 script10.py
Now, I use GNU parallel
nohup parallel -j 5 < parallelise.sh &
It starts as expected; 5 different processors are being used, and the first 5 scripts, script1.py ... script5.py, are running. Now I notice that some of them (say script1.py and script2.py) complete very quickly, whereas the others need more time.
Now there are unused resources (2 processors) while waiting for the remaining 3 scripts (script3.py, script4.py, and script5.py) to complete so that the next 5 can be loaded. Is there a way to use these resources by loading new scripts as existing ones complete?
For information: My OS is CentOS
As @RenaudPacalet says, there is nothing else to do: GNU parallel already starts a new job as soon as one of the running jobs finishes.
So there is something in your scripts which causes this not to happen.
To help debug you can use:
parallel --lb --tag < parallelise.sh
and maybe add a "Starting X" line at the beginning of scriptX.py and a "Finishing X" line at the end, so you can see that the scripts are indeed finishing.
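For example, a minimal sketch of that instrumentation (hypothetical scriptX.py; the sleep stands in for the real work):

import sys
import time

print("Starting X")
sys.stdout.flush()   # make the marker show up promptly under parallel

time.sleep(1)        # stand-in for the script's real work

print("Finishing X")
sys.stdout.flush()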
Without knowing anything about scriptX.py it is impossible to say what is causing this.
(Instead of nohup, consider using tmux or screen: the jobs still run in the background, but you can always check in on them and see their output. nohup is not ideal for debugging.)
I am trying to run several roslaunch files, one after the other, from a bash script. However, when the nodes complete execution, they hang with the message:
[grem_node-1] process has finished cleanly
log file: /home/user/.ros/log/956b5e54-75f5-11e9-94f8-a08cfdc04927/grem_node-1*.log
Then I need to press Ctrl-C to trigger "killing on exit" for all of the nodes launched from the launch file. Is there some way of causing the nodes to kill themselves automatically on exit? At the moment I need to press Ctrl-C every time a node terminates.
My bash script looks like this, by the way:
python /home/user/git/segmentation_plots/scripts/generate_grem_launch.py /home/user/Data2/Coco 0 /home/user/git/Async_CNN/config.txt
source ~/setupgremsim.sh
roslaunch grem_ros grem.launch config:=/home/user/git/Async_CNN/config.txt
source /home/user/catkin_ws/devel/setup.bash
roslaunch rpg_async_cnn_generator conf_coco.launch
The script setupgremsim.sh sources another catkin workspace.
Many thanks!
Thanks all for your advice. What I ended up doing was this: I launched my ROS nodes from separate Python scripts, which I then called from the bash script. In Python you can terminate the child processes with launch.shutdown(). So, to provide an example for anyone else with this issue:
bash script:
#!/bin/bash
for i in {0..100}
do
echo "========================================================\n"
echo "This is the $i th run\n"
echo "========================================================\n"
source /home/timo/catkin_ws/devel/setup.bash
python planar_launch_generator.py
done
and then inside planar_launch_generator.py:
import roslaunch
import rospy

process_generate_running = True

class ProcessListener(roslaunch.pmon.ProcessListener):
    # flips the flag when any launched process dies
    def process_died(self, name, exit_code):
        global process_generate_running
        process_generate_running = False
        rospy.logwarn("%s died with code %s", name, exit_code)

def init_launch(launchfile, process_listener):
    uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
    roslaunch.configure_logging(uuid)
    launch = roslaunch.parent.ROSLaunchParent(
        uuid,
        [launchfile],
        process_listeners=[process_listener],
    )
    return launch

rospy.init_node("async_cnn_generator")
launch_file = "/home/user/catkin_ws/src/async_cnn_generator/launch/conf_coco.launch"
launch = init_launch(launch_file, ProcessListener())
launch.start()

while process_generate_running:
    rospy.sleep(0.05)

launch.shutdown()
Using this method you could source any number of different catkin workspaces and launch any number of launch files.
Try this:
(1) Put each launch in a separate shell script, so you have N scripts. In each script, call the launch file in xterm: xterm -e "roslaunch yourfancylauncher"
(2) Prepare a master script which calls all N child scripts in the sequence you want and with the delays you want.
Once a launch is done, its xterm should kill itself.
Edit: you can also manually kill one if you know it is going to hang. For example:
#!/bin/bash
source /opt/ros/kinetic/setup.bash
source ~/catkin_ws/devel/setup.bash

# Start roscore via systemd or rc.local, in lxterminal or another terminal,
# to avoid killing it by accident. Then run the part you think is going to
# hang or cause a problem, echoing before each action if necessary.
xterm -geometry 80x36+0+0 -e "echo 'uav' | sudo -S dnsmasq -C /dev/null -kd -F 10.5.5.50,10.5.5.100 -i enp59s0 --bind-dynamic" & sleep 15

# The Ouster lidar cannot auto-configure the way a Velodyne can and will hang
# here, blocking the rest of the script, so kill its xterm and carry on with
# the other launches.
killall xterm & sleep 1
xterm -e "roslaunch '/home/uav/catkin_ws/src/ouster_driver_1.12.0/ouster_ros/os1.launch' os1_hostname:=os1-991907000715.local os1_udp_dest:=10.5.5.1"
I am testing a bash script I hope to run as a cron job to scan a download log and perform labor-intensive conversions on image files. In order to run several conversions at once, the first script loops through the download log and sends the filename to the second script, which I set to run as a background process using &.
The script pair works well, but when the process is complete, I must press the enter key to return to a command prompt. This is a non-issue when I am running a test, but I am not sure if this behavior has ramifications when run as a cron job.
Will this be an issue? If so, is there a way to close the "terminal" running the first script from the crontab?
Here's a truncated form of my code:
Script 1 (to be launched by crontab):
for i in file1 file2 file3 etc
do
bash /path/to/convert.sh "$i" &
done
exit 0
Script 2 (convert.sh)
fileName=${1?no file given}
jpegName=$(echo "$fileName" | sed 's/tif/jpg/g')
convert "$fileName" "$jpegName"
exit 0
Thanks for any help/assurances you can give!
You don't need script 2; you can convert it to a function and put it inside script 1.
Another problem is that you are running convert.sh in an uncontrolled way: you cannot foresee how many background processes will be created, and this may lead to severe performance overhead.
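If you want to cap the number of simultaneous conversions, one option (a sketch, not part of the original answer; the file list is a placeholder and convert is ImageMagick's command-line tool) is to drive them from a fixed-size Python worker pool:

import multiprocessing
import subprocess

def convert_one(file_name):
    # mirrors convert.sh: derive the .jpg name and run ImageMagick
    jpeg_name = file_name.replace('tif', 'jpg')
    subprocess.call(['convert', file_name, jpeg_name])

if __name__ == '__main__':
    files = ['file1.tif', 'file2.tif', 'file3.tif']  # placeholder list
    pool = multiprocessing.Pool(processes=4)  # at most 4 conversions at once
    pool.map(convert_one, files)
    pool.close()
    pool.join()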
Finally, if you cannot end the process in the normal way, you can terminate it from cron again by issuing pkill -f script1.sh.
I'm trying to write a script that runs a Java program "Loop" and then, after 5 seconds, terminates it. However, the program is not terminated when I use "pkill". I'm sorry for asking such a basic question; I have looked around the internet but can't find a solution. Here is my code:
#!/bin/bash
javac Loop.java
java Loop
sleep 5
pkill -n java
When I run the command pkill -n java from the terminal, as opposed to in a script, it does what I expect. Why is this?
Your bash script is waiting for java to complete, so you'll need to run it as a background process, which will start your Loop code and then return immediately, allowing the rest of your script to run:
java Loop &
More information: http://tldp.org/LDP/abs/html/x9644.html
Since you run java Loop in the foreground, the next line, sleep 5, doesn't get executed until your JVM exits (which is probably never, if Loop is actually an infinite loop).
So you need to start that in background:
java Loop &
Also, for killing that specific background job (rather than killing the newest JVM), you can do:
kill $!
Is there a way to create a bash script that will only run for X hours? I'm currently setting up a cron job to initiate a script every night. This script essentially runs until a certain condition is met, exporting its status to a holding variable to keep track of where it is after each iteration. The intention is to start up the process every night, run for a few hours, and then stop, holding the status until the process starts up the next night.
Short of somehow collecting the start time and checking it against the current time in each iteration of the loop, is there an easier way to do this? Bash scripting is not my forte (I know enough to get things done and be dangerous), and I have not done something like this before. Any help would be appreciated. Thanks.
Use GNU Coreutils
GNU coreutils contains an actual timeout binary, usually invoked like this:
# timeout after 5 seconds when sleeping for 30
/usr/bin/timeout 5s /bin/sleep 30
In your case, you'd want to specify hours instead of seconds, so to timeout in 2 hours use something like 2h instead of 5s. See timeout(1) or info coreutils 'timeout invocation' for additional options.
Hacks and Workarounds
Native timeouts or the GNU timeout command are really the best options. However, see the following for some ideas if you decide to roll your own (a sketch follows the list):
How do I run a command, and have it abort (timeout) after N seconds?
The TMOUT variable, combined with read and process or command substitution.
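If you do roll your own, here is a Python sketch of the wrap-and-kill idea (the command name is illustrative):

import subprocess
import threading

def run_with_timeout(cmd, seconds):
    # start the command, arm a timer that kills it, and wait
    proc = subprocess.Popen(cmd)
    timer = threading.Timer(seconds, proc.kill)
    timer.start()
    rc = proc.wait()
    timer.cancel()   # the command finished first: disarm the timer
    return rc

if __name__ == '__main__':
    # e.g. stop the nightly job after 2 hours (command name is illustrative)
    run_with_timeout(['./long_runner'], 2 * 60 * 60)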
Doing it as you described is the cleanest way. But if for some strange reason you want to kill the process after a set time, you can use the following:
./long_runner &
(sleep 5; kill $!; wait; exit 0) &
This will kill long_runner after 5 seconds.
By using the SIGALRM facility you can arrange for a signal to be sent after a certain time, but traditionally this was not easily accessible from shell scripts (people would write small custom C or Perl programs for it). These days, GNU coreutils ships with a timeout command which does this by wrapping your command:
timeout 4h yourprogram
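For completeness, here is a small Python stand-in for those custom C or Perl helpers (a sketch; do_nightly_work is a placeholder for your actual job):

import signal
import sys
import time

def on_alarm(signum, frame):
    # called when the kernel delivers SIGALRM
    sys.exit("time limit reached")

def do_nightly_work():
    time.sleep(10)   # stand-in for the real job

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(4 * 3600)   # ask for SIGALRM in 4 hours
do_nightly_work()
signal.alarm(0)          # finished early: disarm the alarm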