I'm trying to create some threads that ping different servers via the system() function on a Mac. The code looks like this:
sprintf(str,"#!/bin/sh\n ping -c 3 -t 3 -o %d.%d.%d.%d \n",dataIP1[0],dataIP1[1],dataIP1[2],dataIP1[3]);
int ret =system(str);
But I found that if one of the servers was unavailable, the later threads took more than 3 s to ping their servers, even when those servers were available. So I guess the system() function does not support multithreading. It looks like there is a lock inside it, so it can only do the jobs one by one, even if you invoke it from different threads at the same time.
Is that correct?
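As a point of comparison, the same fan-out can be tested directly from the shell; a minimal sketch, assuming the three addresses are placeholders for the servers in question (the -c/-t/-o flags are the macOS ping options used above):

for host in 10.0.0.1 10.0.0.2 10.0.0.3; do   # placeholder addresses
    ping -c 3 -t 3 -o "$host" &              # each ping runs in the background
done
wait                                          # block until all pings finish

If the three pings finish in roughly 3 s total rather than one after another, the serialization is happening in the C program, not in ping itself.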
I'm working on a bash script (I just started learning bash) that involves creating virtual guests on a remote server. I do this by SSH'ing from server A to B and executing two different commands:
# create the images
$(ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}")
and
# create the virtual machine
$(ssh -n john@serverB.net virt-install --bunch of options)
These sets of commands may have to be executed twice in a loop (if two virtual guests need to be created). When the second command is run for the second time, I sometimes get this error:
Domain installation still in progress.
This means I have to wait until the previous virtual guest's installation has completed. How can I do these operations in one loop? Can I run them asynchronously? Can I use threads? Or is there another way?
I have heard about the 'wait' command, but is that safe to use?
Check the man page for virt-install. You can use --wait=0 or --noautoconsole.
--wait=WAIT
Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely; a value of 0 triggers the same results as --noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
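With --noautoconsole, the loop from the question no longer blocks on each install; a minimal sketch, with the image size and guest names as placeholders and the remaining virt-install options elided as in the question:

imgsize=10G                                   # placeholder size
for imgname in guest1 guest2; do              # placeholder guest names
    ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}"
    ssh -n john@serverB.net "virt-install --noautoconsole ..."   # other options as in the question
done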
I have searched SO and the GNU parallel tutorial and gone through the examples there, but I still don't quite see how to solve my problem. Any tips are appreciated on how I could accomplish the following:
I need to invoke the same script on several remote servers, with a different argument passed to each one (the argument is a string), then wait until all those jobs are done. Then, run that same script some more times on those same remote servers, but this time try to keep the remote servers as busy as possible (i.e. when a server finishes its job, send it another job). Ideally the strings could be read from a file on the "master" machine that is sending the jobs to the remote servers.
To diagram this, I'm trying to run myscript like this:
server A: myscript fee
server B: myscript fi
When both jobs are done I then want to do something like:
server A: myscript fo
server B: myscript fum
... and supposing server A finishes its work before server B, immediately send it the next job, like:
server A: myscript englishmun
... etc
Again, I hugely appreciate any ideas people might have about whether this is easy or hard with GNU parallel (or whether something else like pdsh or cluster ssh might be better suited).
TIA!
It seems we can split the problem into two parts: an initialization part that needs to be run on every server, and a job-processing part that does not care which server it runs on.
The last part is GNU Parallel's specialty:
cat argfile | parallel -S serverA,serverB myscript
The first part is a bit trickier: you want the first k arguments to go to k different servers.
head -n 2 argfile | parallel -j1 -S serverA,serverB myscript
The problem here is that if there are loads of servers, serverA may finish its job before the last server has even been given one. It is much easier to run the same job on all servers:
head -n 1 argfile | parallel --onall -S serverA,serverB myscript
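Putting the two parts together, a sketch assuming the first line of argfile holds the initialization argument and the remaining lines are ordinary jobs:

head -n 1 argfile | parallel --onall -S serverA,serverB myscript   # init runs on every server
tail -n +2 argfile | parallel -S serverA,serverB myscript          # then keep the servers busy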
I have a Sky wireless sensor node and a script which prints the output from the node.
sudo ./serialdump-linux -b115200 /dev/tmotesky1
If I start this script before my pc detects the node, I get the following error:
/dev/tmotesky1: No such file or directory
But if I wait for example 20 seconds, I miss the initial prints (which are important).
Is there a way to detect when /dev/tmotesky1 exists?
Something like
while [ ! -e /dev/tmotesky1 ]; do sleep 1; echo 'Waiting...'; done
Thanks in advance!
Your code indicates that you are using Linux, where you can use the hotplugging mechanism.
On generic systems, you can write a udev rule (see with udevadm monitor -e what happens when you attach the device) which starts e.g. a program or writes something into a pipe. When systemd is used, you can start a service (see man systemd.device).
On small/embedded systems it is possible to write a custom /sbin/hotplug program (set in /proc/sys/kernel/hotplug) instead of using udev.
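A minimal sketch of such a udev rule, e.g. in /etc/udev/rules.d/99-tmotesky.rules; the vendor/product IDs are placeholders (look up the real ones with udevadm info -a -n /dev/tmotesky1), and the RUN script is hypothetical:

# when the serial adapter appears, start a handler script
ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", RUN+="/usr/local/bin/on-mote-attach.sh"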
I'm having issues using Oprofile to profile a parallel program that I call via mpirun. The command I'd like to use is:
$ operf mpirun -n 4 [program and arguments]
Unfortunately, when I do this, operf starts logging, but something funny happens when the MPI program finishes: operf does not seem to recognize that it has returned (the MPI-spawned processes no longer appear in htop, but operf still does), and everything just hangs until I interrupt it.
Is there an option I can pass to operf or mpirun which will make the two play nicely together? Failing that, is there a bash trick I can use to automatically kill operf when my MPI program is finished?
Edit: I previously thought that Oprofile wasn't always generating results, but it turns out I was just confused and looking in the wrong location. The only remaining problem is that operf doesn't recognize that the MPI program has terminated.
Try using this line; it works like a charm:
sudo operf mpirun --allow-run-as-root -x LD_LIBRARY_PATH="build/" -np 2 (path_to_the_file)
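If operf still hangs after mpirun exits, one shell-level workaround is to profile system-wide in the background and stop operf explicitly once mpirun returns; a sketch, run as root, with the program path as a placeholder (operf stops and saves its data on SIGINT):

operf --system-wide &                           # profile everything while mpirun runs
OPERF_PID=$!
mpirun --allow-run-as-root -np 2 ./my_program   # placeholder program and arguments
kill -INT "$OPERF_PID"                          # SIGINT tells operf to stop and flush its data
wait "$OPERF_PID"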
I have a program that runs as a daemon, using the C fork() function. It creates a new instance that runs in the background. The main instance exits after that.
What would be the best option to check if the service is running? I'm considering:
Create a file with the process ID of the program and use a script to check whether that PID is still running.
Use ps | grep to find the program in the list of running processes.
Thanks.
I think it would be better to manage your process with supervisord or another process-control system.
Create a cron job that runs every few minutes (or whatever you're comfortable with) and does something like this:
/path/to/is_script_stopped.sh && /path/to/script.sh
Write is_script_stopped.sh using any of the methods you suggest. If your script is stopped, cron will start it again; if not, it won't.
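A minimal sketch of is_script_stopped.sh using the second method from the question, with pgrep standing in for the ps | grep combination and yourProgram as a placeholder name:

#!/bin/bash
# exit 0 (success) only when the daemon is NOT running,
# so that `is_script_stopped.sh && script.sh` restarts it
! pgrep -x yourProgram > /dev/null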
To the question you asked in the headline:
This simple endless loop will restart yourProgram as soon as it fails:
#!/bin/bash
for ((;;))
do
    yourProgram
done
If your program depends on a resource which might fail, it would be wise to insert a short pause, to avoid it grabbing all system resources by failing millions of times per second:
#!/bin/bash
for ((;;))
do
    yourProgram
    sleep 1
done
To the question from the body of your post:
What would be the best option to check if the service is running?
If your ps has a -C option (like the Linux ps), you should prefer that over a ps ax | grep combination.
ps -C yourProgram
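For example, a sketch of a check-and-restart wrapper built on that test (yourProgram as above):

# ps -C exits with status 0 when a matching process exists;
# start yourProgram only if no instance is currently running
if ! ps -C yourProgram > /dev/null; then
    yourProgram &
fi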