Ruby shell script realtime output - ruby

script.sh:
echo First!
sleep 5
echo Second!
sleep 5
echo Third!

another_script.rb:
%x[./script.sh]
I want another_script.rb to print the output of script.sh as it happens. That means printing "First!", waiting five seconds, printing "Second!", waiting five seconds, and so on.
I've read through the different ways to run an external script in Ruby, but none seem to do this. How can I fulfill my requirements?

You can always execute this in Ruby:
system("sh", "script.sh")
Note that it's important to specify the interpreter explicitly, as above, unless script.sh has a proper #!/bin/sh shebang and the execute bit set. This also gives you the real-time behavior you want: unlike %x[...], which collects the whole output and returns it only after the script exits, system lets the script write directly to your stdout, so each line appears as it is produced.

Related

Bash Script slows down executed program

I have a program where I test different data sets and configurations, and I have a script to execute all of those.
Imagine my code as:
start = omp_get_wtime()
function()
end = omp_get_wtime()
print(end-start)
and the bash script as
for a in "${first_option[#]}"
do
for b in "${second_option[#]}"
do
for c in "${third_option[#]}"
do
printf("$a $b $c \n")
./exe $a $b $c >> logs.out
done
done
done
Now, when I execute the exact same configurations by hand, I get varying results from 10 seconds down to 0.05 seconds, but when I execute the script I can't get any timing below about 1 second. All the configurations that take less than a second when run manually get written to the file as roughly one second: 1.001; 1.102; 0.999; etc.
Any ideas of what is going wrong?
Thanks
My suggestion would be to remove the ">> logs.out" redirection to see what happens with the speed.
From there you can try several options (a sketch combining them follows this list):
Replace ">> logs.out" with "| tee -a logs.out"
Investigate stdbuf, and if your code is Python, look at the "PYTHONUNBUFFERED=1" shell variable. See also: How to disable stdout buffer when running shell
Redirect the bash printf with ">&2" (write to stderr) and move ">> logs.out" or "| tee -a logs.out" behind the last "done"
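For example, the loop from the question could be reworked along these lines (a sketch only; it assumes GNU coreutils' stdbuf is available and keeps the ./exe and logs.out names from above):

for a in "${first_option[@]}"
do
    for b in "${second_option[@]}"
    do
        for c in "${third_option[@]}"
        do
            # progress goes to stderr so it stays on the terminal, not in the log
            printf '%s %s %s\n' "$a" "$b" "$c" >&2
            # line-buffer the program's stdout so each line is flushed promptly
            stdbuf -oL ./exe "$a" "$b" "$c"
        done
    done
done >> logs.out    # single redirection behind the last "done"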
You can probably see what is causing the delay by using:
strace -f -t bash -c "<your bash script>" | tee /tmp/strace.log
With a little luck you will see which system call is causing the delay at the bottom of the screen. But it is a lot of information to process. Alternatively, look for the name of your "./exe" in "/tmp/strace.log" after tracing is done, and then look for the system calls after the invocation (process start of ./exe) that eat the most time. It could just be many calls... Don't spend too much time on this if you don't have the stomach for it.
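If you do go digging, one way to jump straight to the interesting part of the trace (same /tmp/strace.log path as above):

# print the line numbers where ./exe is launched; then inspect the
# timestamped system calls that follow each launch for large gaps
grep -n 'execve("./exe"' /tmp/strace.log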

Stop command after a given time and return its result in Bash

I need to execute several calls to a C++ program that records frames from a videogame. I have about 1800 test games, and some of them work and some of them don't.
When they don't work, the console returns a Segmentation fault error, but when they do work, the program opens a window and plays the game, and at the same time it records every frame.
The problem is that when it does work, this process does not end until you close the game window.
I need to make a Bash script that will test every game I have and write the names of the ones that work in a text file and the names of the ones that don't work in another file.
For the moment I have tried with this, using the timeout command:
count=0
# Run for every file in the ROMs folder
for filename in ../ROMs/*.bin; do
    # Increase the counter
    (( count++ ))
    # Run the command with a timeout to prevent it from being infinite
    timeout 5 ./doc/examples/videoRecordingExample "$filename"
    # Check if execution succeeds/fails and print in a text file
    if [ $? == 0 ]; then
        echo "Game $count named $filename" >> successGames.txt
    else
        echo "Game $count named $filename" >> failedGames.txt
    fi
done
But it doesn't seem to be working, because it writes all the names in the same file. I believe this is because the condition inside the if refers to the timeout and not to the execution of the C++ program itself.
Then I tried without the timeout, closing the window manually every time a game worked, and the result was as expected. I tried this with only 10 games, but to test all 1800 I need it to be completely automatic.
So, is there any way of making this process automatic? Like some command to stop the execution and at the same time know if it was successful or not?
instead of
timeout 5 ./doc/examples/videoRecordingExample "$filename"
you could try this (the game must be started in the background so the sleep begins immediately, and pkill needs -f because it otherwise matches only the first 15 characters of a process name):
./doc/examples/videoRecordingExample "$filename" & sleep 5; pkill -f videoRecordingExample
Look more closely at what the exit status of timeout means. The syntax for timeout is:
timeout [OPTION] DURATION COMMAND [ARG]...
so the COMMAND comes right after the DURATION, which your code already has right. The catch is that timeout exits with status 124 whenever it has to kill a command that is still running, so a working game that gets cut off after 5 seconds fails the [ $? == 0 ] test just like a crashing one, and every name ends up in failedGames.txt.
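A sketch of the loop body with the exit codes spelled out (124 is timeout's documented "command timed out" status; a segmentation fault exits with 139, i.e. 128+SIGSEGV):

timeout 5 ./doc/examples/videoRecordingExample "$filename"
status=$?
# 124 means timeout killed the game after 5s, i.e. it was running fine
if [ "$status" -eq 0 ] || [ "$status" -eq 124 ]; then
    echo "Game $count named $filename" >> successGames.txt
else
    # anything else (e.g. 139 for a segfault) counts as a failure
    echo "Game $count named $filename" >> failedGames.txt
fi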

Storing execution time of a command in a variable

I am trying to write a task-runner for the command line. No rationale. Just wanted to do it. Basically it runs a command, stores the output in a file (instead of stdout), meanwhile prints a progress indicator of sorts on stdout, and when it's all done, prints Completed ($TIME_HERE).
Here's the code:
#!/bin/bash
task() {
    TIMEFORMAT="%E"
    COMMAND=$1
    printf "\033[0;33m${2:-$COMMAND}\033[0m\n"
    while true
    do
        for i in 1 2 3 4 5
        do
            printf '.'
            sleep 0.5
        done
        printf "\b\b\b\b\b     \b\b\b\b\b"
        sleep 0.5
    done &
    WHILE=$!
    EXECTIME=$({ TIMEFORMAT='%E'; time $COMMAND >log; } 2>&1)
    kill -9 $WHILE
    echo $EXECTIME
    #printf "\rCompleted (${EXECTIME}s)\n"
}
There are some unnecessarily fancy bits in there I admit. But I went through tons of StackOverflow questions to do different kinds of fancy stuff just to try it out. If it were to be applied anywhere, a lot of fat could be cut off. But it's not.
It is to be called like:
task "ping google.com -c 4" "Pinging google.com 4 times"
What it'll do is print Pinging google.com 4 times in yellow, then on the next line print a period, then another period every .5 seconds. After five periods it starts over from the beginning of the same line and repeats this until the command is complete. Then it's supposed to print Completed ($TIME_HERE) with (obviously) the time it took to execute the command in place of $TIME_HERE. (I've commented that part out; the current version just prints the time.)
The Issue
The issue is that instead of the execution time, something very weird gets printed. It's probably something stupid I'm doing, but I don't know where the problem originates. Here's the output:
$ sh taskrunner.sh
Pinging google.com 4 times
..0.00user 0.00system 0:03.51elapsed 0%CPU (0avgtext+0avgdata 996maxresident)k 0inputs+16outputs (0major+338minor)pagefaults 0swaps
Running COMMAND='ping google.com -c 4';EXECTIME=$({ TIMEFORMAT='%E';time $COMMAND >log; } 2>&1);echo $EXECTIME in a terminal works as expected, i.e. prints out the time (3.559s in my case.)
I have checked and /bin/sh is a symlink to dash. (However that shouldn't be a problem because my script runs in /bin/bash as per the shebang on the top.)
I'm looking to learn while solving this issue so a solution with explanation will be cool. T. Hanks. :)
When you invoke a script with:
sh scriptname
the script is passed to sh (dash in your case), which will ignore the shebang line. (In a shell script, a shebang is a comment, since it starts with a #. That's not a coincidence.)
Shebang lines are only interpreted when the script is started as a command (e.g. ./scriptname), because they are read by the system's program launcher, not by the shell.
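You can see the difference with a two-line throwaway script (demo.sh is just a scratch name for this illustration):

printf '#!/bin/bash\necho "${BASH_VERSION:-not bash}"\n' > demo.sh
chmod +x demo.sh
./demo.sh    # launcher honors the shebang: prints a bash version string
sh demo.sh   # shebang is a mere comment: dash prints "not bash"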
By the way, your invocation of time does not correctly separate the output of the time builtin from any output the timed command might send to stderr. I think you'd be better off with:
EXECTIME=$({ TIMEFORMAT=%E; time $COMMAND >log.out 2>log.err; } 2>&1)
but that isn't sufficient. You will keep running into the standard problem with putting commands into string variables: it only works with very simple commands. See the Bash FAQ, or look at some of these answers (a sketch of the usual array-based workaround follows the list):
How to escape a variable in bash when passing to command line argument
bash quotes in variable treated different when expanded to command
Preserve argument splitting when storing command with whitespaces in variable
find command fusses on -exec arg
Using an environment variable to pass arguments to a command
(Or probably hundreds of other similar answers.)
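A minimal sketch of that array-based workaround, applied to the task runner (same ping example and TIMEFORMAT as above):

# store the command as an array, not a string, so its arguments survive intact
cmd=(ping google.com -c 4)
EXECTIME=$({ TIMEFORMAT=%E; time "${cmd[@]}" >log.out 2>log.err; } 2>&1)
printf 'Completed (%ss)\n' "$EXECTIME"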

Use shell output for error handling for condor

I need to submit multiple simulations to condor (a multi-client execution grid) using the shell, and since this may take a while, I decided to write a shell script to do it for me. I am very new to shell scripting and this is the result of one day's work:
for H in {0..50}
do
    for S in {0..10}
    do
        ./p32 -data ../data.txt -out ../result -position $S -group $H
        echo "> Ready to submit"
        condor_submit profile.sub
        echo "> Waiting 15 minutes for group $H Pos $S"
        for W in {1..15}
        do
            echo "Starting minute $W"
            sleep 60
        done
    done
    echo "Deleting data_3 to free up space"
    mkdir /tmp/data_3
    if [ "$H" -lt 10 ]
    then
        tar cfvz /tmp/data_3/group_000$H.tar.gz ../result/data_3/group_000$H
        rm -r ../result/data_3/group_000$H
    else
        tar cfvz /tmp/data_3/group_00$H.tar.gz ../result/data_3/group_00$H
        rm -r ../result/data_3/group_00$H
    fi
done
This script runs through simulations 0..50, submitting 0..10 different parameters to a program that generates a condor submission profile. Then I submit this profile and let it execute for 15 minutes (with a call made every minute to ensure the SSH pipe doesn't break). Once the 15 minutes are up, I compress the output to a volume with more space and erase the original files.
The reason for implementing this is that our condor system can only handle up to 10,000 submissions at once, and a single submission (condor_submit profile.sub) executes 7000+ simulations.
Now my problem is with that condor_submit line. When I checked this morning I (luckily) spotted that calling condor_submit profile.sub may fail if the network is too busy. The error is:
ERROR: Failed to connect to local queue manager
CEDAR:6001:Failed to connect to <IP_NUMBER:PORT_NUMBER>
This means that from time to time a whole iteration gets lost! How can I work around this? The only way I see is to use the shell to read the last line(s) of terminal output and check whether they match the expected response, i.e.:
7392 job(s) submitted to cluster CLUSTER_NUMBER.
But how would I read in the last line and go about checking for errors?
Any help is very needed and very much appreciated
Does condor_submit give a non-zero exit code when it fails? If so, you can try calling it like this:
while ! condor_submit profile.sub; do
    sleep 5
done
which will cause the current profile to be submitted every 5 seconds until it succeeds.
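If retrying forever feels risky (say, when the queue manager is down for good), a bounded variant might look like this (the limit of 5 attempts is an arbitrary choice):

submitted=false
for attempt in {1..5}; do
    if condor_submit profile.sub; then
        submitted=true
        break
    fi
    echo "condor_submit failed (attempt $attempt), retrying in 5s" >&2
    sleep 5
done
"$submitted" || echo "giving up on group $H pos $S" >&2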

I want to make a conditional cronjob

I have a cron job that runs every hour. It accesses an XML feed. If the XML feed is unavailable (which seems to happen about once a day) it creates a "failure" file. This "failure" file has some metadata in it and is erased at the next hour when the script runs again and the XML feed works again.
What I want is to make a 2nd cron job that runs a few minutes after the first one, looks into the directory for a "failure" file and, if it's there, retries the 1st cron job.
I know how to set up cron jobs, I just don't know how to make scripting conditional like that. Am I going about this in entirely the wrong way?
Possibly. Maybe what you'd be better off doing is having the original script sleep and retry a (limited) number of times.
sleep is a command you can call from the shell, and shells support looping, so it could look something like:
for ((retry=0; retry<12; retry++)); do
    try_the_thing            # placeholder for the actual fetch
    if [[ -e my_xml_file ]]; then break; fi
    sleep 300
    # five minutes later...
done
As the command to run, try:
/bin/bash -c 'test -e failurefile && retrycommand -someflag -etc'
It runs retrycommand only if failurefile exists.
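Wired into the crontab, that could look like the following (both paths are placeholders, and fetchfeed.sh stands in for whatever the hourly job runs):

# m h dom mon dow  command
0 * * * *   /path/to/fetchfeed.sh
10 * * * *  /bin/bash -c 'test -e /path/to/failurefile && /path/to/fetchfeed.sh'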
Why not have your script touch a status file when it has successfully completed? Then have it run every 5 minutes, and make its first check be whether the status file is less than 60 minutes old: if the file is young, quit; if it is old, fetch.
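A sketch of that freshness check (the status-file path and fetch_feed are placeholders):

status_file=/var/tmp/feed_ok
# "find ... -mmin -60" prints the file only if it was modified <60 minutes ago
if [ -n "$(find "$status_file" -mmin -60 2>/dev/null)" ]; then
    exit 0                          # recent success, nothing to do
fi
fetch_feed && touch "$status_file"  # fetch_feed is the hypothetical fetcher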
I agree with MarkusQ that you should retry in the original job instead of creating another job to watch the first job.
Take a look at this tool to make retrying easier: https://github.com/kadwanev/retry
You can wrap the original cron command in a retry very easily, and the eventual existence of the failure file would tell you whether it failed even after the retries.
If somebody needs a bash script that pings an endpoint (for example, to run scheduled API tasks via cron) and retries when the response status is bad:
#!/bin/bash
echo "Start pinch.sh script."
# run at most 5 times
for ((i=1; i<=5; i++))
do
    # request https://www.google.com with curl, silently discard the body,
    # and capture the HTTP response status code in a variable
    http_response=$(curl -o /dev/null -s -w "%{response_code}" https://www.google.com)
    # check for the expected code
    if [ "$http_response" != "200" ]
    then
        # failure: wait 300 seconds, then start another iteration
        echo "The pinch failed. Sleeping for 5 minutes."
        sleep 300
    else
        # success: exit the loop
        echo "The pinch is OK. Finishing."
        break
    fi
done
exit 0
