Single line bash script for wget with retries

I currently run two commands:
sleep 180
wget https://somedomain.com/api/up
So it waits for 3 minutes and then calls the API's up endpoint.
I would like to change that so it checks every minute, for up to ten minutes, until wget returns 200.
It should be the equivalent of this PHP function, but in bash. It is important that it is a one-liner (it can be multiple statements separated by ;):
foreach (range(1, 10) as $i) {
    sleep(60);
    try {
        Http::get('https://somedomain.com/api/up');
        break;
    } catch (Exception $e) {
        if ($i >= 10) throw $e;
    }
}
The two things where my bash knowledge fails me:
how to do the try/catch, or check for a 200 response code;
how to get all of that into one line/statement.

Use a for loop with all the statements separated by ;:
for i in {1..10}; do sleep 60; wget 'https://somedomain.com/api/up' && break; done
wget exits with a non-zero status when the request fails (8 for a server error response such as a 404), so && break only ends the loop after a successful request.

Another variant, which takes the URL as the script's first argument and sleeps only after a failed attempt:
for i in {1..5}; do wget -- "$1" && break || sleep 15; done
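If you want to test specifically for an HTTP 200 status code rather than relying on wget's exit status, curl can print the status code directly; a sketch, assuming curl is available:
for i in {1..10}; do sleep 60; [ "$(curl -s -o /dev/null -w '%{http_code}' 'https://somedomain.com/api/up')" = 200 ] && break; done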

Related

Ampersand to run process in background causes invalid parameters in Bash 5

I have a bash script that runs perfectly well on Bash 3.2. The script contains an ampersand to run a process in the background. However, when I run it in Bash 5.x, it doesn't pass the variables correctly (I get a "SyntaxError: Unexpected end of JSON input"). When I take the ampersand off the end of the mgeneratejs line, it executes normally in Bash 5.
#!/bin/bash
# Works on Bash 3.2 on macOS
# Doesn't work in bash-5.0/5.1
##!/usr/bin/env bash
NUM_ROWS_PER_RUN=5
NUM_RUNS=2
TEMPLATE_STRING='{
name: "$name"
}'
for i in $(seq 1 "$NUM_RUNS")
do
    echo "Starting run ${i}"
    # If you don't have it, run "npm install -g mgeneratejs"
    mgeneratejs -n "$NUM_ROWS_PER_RUN" "${TEMPLATE_STRING//[$'\r\n ']}" &
done
echo "Waiting"
wait
echo "Finished"
How can I get the process (mgeneratejs) to run in the background when using Bash 5.x?
Bash may or may not be at fault here, but rest assured the problem is in mgeneratejs.
Taking a look at mgeneratejs's source code, I found this:
if (process.stdin.isTTY) {
  var str = argv._[0];
  template = _.startsWith(str, '{') ? parseTemplate(str) : parseTemplate(read(str, 'utf8'));
  generate();
} else {
  template = '';
  process.stdin.setEncoding('utf-8');
  process.stdin.on('readable', function() {
    var chunk = process.stdin.read();
    if (chunk !== null) {
      template += chunk;
    }
  });
  process.stdin.on('end', function() {
    template = JSON.parse(template);
    generate();
  });
}
If stdin is not a TTY, then mgeneratejs assumes stdin is a pipe and tries to read the template from it, ignoring the command-line argument. This is wrong; it should at least check whether the template was given in the command-line args. That is what bites you here: when a script runs a command in the background, the command's stdin is no longer a terminal (in a non-interactive shell it is redirected from /dev/null), so mgeneratejs reads an empty template and JSON.parse("") throws "Unexpected end of JSON input".
I wouldn't suggest that you patch mgeneratejs, but I can recommend this workaround: feed the template through stdin yourself.
function do_run() {
    echo "${TEMPLATE_STRING//[$'\r\n ']}" | mgeneratejs -n "$NUM_ROWS_PER_RUN"
}
for i in $(seq 1 "$NUM_RUNS")
do
    echo "Starting run ${i}"
    # If you don't have it, run "npm install -g mgeneratejs"
    do_run &
done
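This deliberately takes the stdin-reading branch of mgeneratejs, so it behaves the same whether or not the job is backgrounded. Keep the tail of your original script so it still waits for the background runs to finish:
echo "Waiting"
wait
echo "Finished"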

Variables comparison

I want to write a script with several commands and get the combined result of all of them:
#!/bin/bash
command1; RET_CMD1=$(echo $?)
command2; RET_CMD2=$(echo $?)
command3; RET_CMD3=$(echo $?)
# result is error if any of them fails
# could I do something like:
RET=RET_CMD1 && RET_CMD2 && RET_CMD3   # <- this is the part that I can't remember how I did in the past..
echo $RET
Thanks for your help!
I think you're just looking for this:
if ! { command1 && command2 && command3; }; then
    echo "one of the commands failed"
fi
The result of the block { command1 && command2 && command3; } will be 0 (success) only if all of the commands exited successfully. The semicolon is needed if the block is all written on one line.
There is no need to save the return codes to variables, or even to refer to $?, since if works based on the return code of a command (or list of commands).
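A quick demonstration with true and false standing in for real commands:
if ! { true && false && true; }; then
    echo "one of the commands failed"
fi
# prints: one of the commands failed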
So, to think about this...
We want to return 0 on success, or some other positive integer if an error occurred in one of the commands.
If no error occurred in any of the 3, they would all return 0, which means you would also return 0 in your script. Some simple addition can resolve this.
RET=$(( RET_CMD1 + RET_CMD2 + RET_CMD3 )) # !
echo $RET
You can also replace the first line (!) with the bitwise-or operator, which is closest to what you wrote:
RET=$(( RET_CMD1 | RET_CMD2 | RET_CMD3 ))
Note that addition and bitwise or are different in nature, but either one is non-zero exactly when at least one return code is non-zero, and an or-like combination seems to be what you wanted.
Disadvantage of this setup: you cannot trace where the error occurred from the return value alone. Tracing errors from any of the 3 commands will need to rely on other error output generated. (This is just a forewarning.)
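If you do need to know which command failed, a small extension (a sketch; command1..command3 are placeholders) keeps the individual codes around for reporting:
#!/bin/bash
command1; RET_CMD1=$?
command2; RET_CMD2=$?
command3; RET_CMD3=$?
RET=$(( RET_CMD1 | RET_CMD2 | RET_CMD3 ))
if [ "$RET" -ne 0 ]; then
    echo "failed: cmd1=$RET_CMD1 cmd2=$RET_CMD2 cmd3=$RET_CMD3" >&2
fi
echo $RET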

wget script to connect, wait if successful, continue if no connection

I have to create a script that sends a wget contact message to all devices on our network. wget connects to a URL and this triggers the endpoint to contact a server.
The problem I have is that I need to send the command to each IP address on the network and, if the connection is successful, do nothing for 30 seconds and then move on to the next URL in the list. However, if the connection isn't successful, I want the script to move on to the next URL with no pause.
Currently I'm using a bash script that sends the command with a pause of 30 seconds between URLs, connection attempts set to 1 and time-out set to 1. This works OK for the connections that are successful, but it also hangs on the addresses that are not.
Any advice on how I can pause on success and move on after a timeout on dead addresses?
This is the command I'm currently running,
wget --connect-timeout=1 --tries=1 http://xx.xx.xx.xx:8085/contact_dls.html/ContactDLS
sleep 30
wget --connect-timeout=1 --tries=1 http://xx.xx.xx.xx:8085/contact_dls.html/ContactDLS
etc etc etc
Thanks
You don't need wget for this task - everything can be done in Perl.
Simply use code like this:
use LWP::UserAgent;
use HTTP::Request;

my $ua = LWP::UserAgent->new;
$ua->timeout(1);

# $url holds the address of the device to contact
my $req = HTTP::Request->new(GET => $url);
my $res = $ua->request($req);
if ($res->is_success()) {
    print("Connection to '$url' was OK");
    sleep(30);
} else {
    print("Cannot access '$url'");
    sleep(1);
}
This will hit your URL but will time out in just 1 second.
I would probably load the URLs into an array and iterate through it. Something like this:
#!/usr/bin/perl
use warnings;
use strict;

my @urls = qw(test.com url.com 123abc.com fail.aa);

foreach my $url (@urls) {
    my $check = `wget --server-response $url 2>&1 | grep HTTP/ | awk '{print \$2}'`;
    # if there is a successful response you can set the sleep(3) to sleep(30) for your instance.
    if ($check) {
        print "Found $url -- sleeping 3 seconds\n";
        sleep(3);
    }
    else {
        print "$url does not exist\n";
        # If the url does not exist or respond it will move on to the next item in the array.
        # You could add a sleep(1) before the next for a 1 second pause
        next;
    }
}
Of course this assumes that you are using Linux. The URLs could be loaded another way as well; I don't know how your current script is getting them. The above is an example and would of course need to be adjusted to fit your environment.
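For completeness, since the original script was bash, the same pause-on-success/skip-on-failure logic can stay in plain bash; a minimal sketch, assuming the URLs are listed one per line in a file called urls.txt (a hypothetical name):
#!/bin/bash
while read -r url; do
    # --timeout covers connect and read; -q -O /dev/null discards the output
    if wget --timeout=1 --tries=1 -q -O /dev/null "$url"; then
        echo "contacted $url -- pausing 30 seconds"
        sleep 30
    else
        echo "$url did not respond -- moving on"
    fi
done < urls.txt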

which loop in bash script

I am quite new to bash, but I need to create a simple script which will do the steps below:
Wait 1 minute
A) bash script will use CM to generate result file
B) check row 8 in result file (to know if Administrator is running any jobs or not)
if NO jobs:
C) bash script will use CM to start cube refresh
D) wait 1 minute
D1) Remove result file
E) generate result file
E1) Read row 8
no jobs:
F) remove result file G) EXIT
yes:
I) Go to D)
YES:
E) Wait 1 minute
F) Remove result file
Go to A)
As bash doesn't have goto (or it should not be used), I tried a few loops, but I am not sure which I should choose.
I know how to:
- start the cube refresh (step C)
- generate the result file (steps A & E)
- check line 8:
sed '8!d' /abc_uat/cmlogs/adm_jobs_u1.log
The condition for the loops will probably be similar to this: != 'Owner = Administrator'
But how do I avoid goto?
I tried a while-do loop, but I am not sure what I should add in case of a false condition. I added an else, but I am not sure about it:
sleep 60
GENERATE RESULT FILE with admin jobs (which admin runs inside a 3rd party tool)
while [ sed '8!d' admin_jobs_result_file.log != "Owner = Administrator" ];
do
    -- NO admin jobs
    START CUBE REFRESH (it will start an admin job)
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    while [ sed '8!d' admin_jobs_result_file.log = "Owner = Administrator" ];
    -- Admin is still running cube refresh
    do
        sleep 60
        REMOVE RESULT FILE (OLD)
        GENERATE RESULT FILE
        -- it should continue checking every 1 minute whether admin is still running the cube refresh job, so I hope it will go back to the while condition
    else
    done
else
    -- Admin is running something
    sleep 60
    REMOVE RESULT FILE (OLD)
    GENERATE RESULT FILE
    -- it should check the result file again, but I think it will finish the loop
done
You can replace goto with a loop - a while loop, for example.
Syntax:
while <condition>
do
    action
done
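For example, re-checking your sed condition once a minute until line 8 no longer matches could look like this (a sketch; grep -q just turns the match into an exit status):
while sed '8!d' admin_jobs_result_file.log | grep -q 'Owner = Administrator'
do
    sleep 60
done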
Check out cron jobs. If possible, delegate the "wait a minute" task to cron; cron should worry about running your script in a timely fashion.
You may consider writing two scripts instead of one.
Do you really need to create a result file? Do you know about piping? (No offense, just mentioning it because you said you were fairly new to bash.)
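If you do delegate the timing to cron, the crontab entry could look like this (a sketch; the script path is hypothetical):
# run the check script every minute, appending output to a log
* * * * * /home/user/check_admin_jobs.sh >> /tmp/check_admin_jobs.log 2>&1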
Hopefully this is self-explanatory.
result_file=admin_jobs_result_file.log

function generate {
    logmsg sleeping
    sleep 60
    rm -f "$result_file"
    logmsg generating
    # use CM to generate result file
}

function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    sed -n '8 {/Owner = Administrator/ q 0; q 1}' "$result_file"
}

function logmsg { date "+%Y-%m-%d %T -- $*"; }

##############

generate

while owner_is_administrator; do
    generate
done

# at this point, line 8 does NOT contain "Owner = Administrator"
logmsg start cube refresh
# use CM to start cube refresh

generate

while owner_is_administrator; do
    generate
done

logmsg Done
Looks like AIX's sed can't exit with a specified status. Try this instead:
function owner_is_administrator {
    # if line 8 contains "Owner = Administrator", exit success
    # else exit failure
    awk 'NR == 8 {if (/Owner = Administrator/) {exit 0} else {exit 1}}' "$result_file"
}
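To sanity-check the predicate before wiring it into the loops, you can feed it a fabricated result file (dummy 8-line content):
result_file=admin_jobs_result_file.log
printf 'line%s\n' 1 2 3 4 5 6 7 > "$result_file"
echo 'Owner = Administrator' >> "$result_file"
if owner_is_administrator; then
    echo "admin job running"   # this branch fires
else
    echo "no admin jobs"
fi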

Catch PHP Exits in CLI via sh

Alright, I am trying to figure this problem out. I have a class that loops indefinitely until I either restart it manually or it runs out of available RAM. I've written the code to be compliant with both CLI and normal web-based execution. The only difference is that with web-based execution the script lasts about 12 hours or so until it crashes due to memory issues; when I run it in CLI it runs far longer (on average 4-5 days before a crash due to memory).
The script is an IRC bot that is heavily customized for what I need it to do. I don't know enough C++, Ruby, Python or other languages to make something that is cross-platform compliant. My dev machine is Windows and my production server is Ubuntu. Right now I have the script successfully forking off and detaching from the terminal window so I can close that without ending the script.
What I am trying to figure out is how to catch errors and restart the script automatically, since it tends to fail at random times and not always when I am in the IRC channel to catch the failure. One last bonus would be a way to catch a restart request from the channel and have the bot restart, as I am constantly adding new code functions or just general bug fixes.
Here is my CLI start PHP script:
#!/usr/bin/php
<?php
include_once("./config/base_conf.php");
include_once("./libs/irc_base.php");
if ($config['database'] == true) {
    include_once("./config/db_conf.php");
}

$server = getopt('s', array("server::"));
if (!$server) {
    $SER = 'default_server';
} elseif ($server['server'] == 'raelgun') {
    $SER = 'server_a';
} else {
    $SER = 'default_server';
}

declare(ticks = 1);

$pid = pcntl_fork();
if ($pid == -1) {
    die("could not fork");
} else if ($pid) {
    exit(); // we are the parent
} else {
    // we are the child
}

// detach from the controlling terminal
if (posix_setsid() == -1) {
    die("could not detach from terminal");
}

$posid = posix_getpid();
$PID_FILE = "/var/run/bot_process_".$SER.".pid";
$fp = fopen($PID_FILE, "w") or die("File Exists Process Running");
fwrite($fp, $posid);
fclose($fp);

// setup signal handlers
pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");

// loop forever performing tasks
$bot = new IRC_BOT($config, $SER);

function sig_handler($signo) {
    // without this, $bot and $PID_FILE would be undefined inside the handler
    global $bot, $PID_FILE;
    switch ($signo) {
        case SIGTERM:
            $bot->machineKill();
            unlink($PID_FILE);
            exit();
        case SIGHUP:
            $bot->machineKill();
            unlink($PID_FILE);
            break;
        default:
            // handle all other signals
    }
}
It connects to a maximum of 2 servers; depending on the server I want it to connect to, I run the following in the terminal to get the script running:
php bot_start_shell.php --server="servernamehere" > /dev/null
So what I am trying to do is get a shell file coded correctly to monitor that script, and restart it if it exits due to an error or a requested restart.
I've used this technique for a while, where a shell script runs a PHP script, monitors the exit value, and restarts it.
Here's a test script that uses exit() to return a value to the shell script - 95, 96 & 100 are treated as the other "unplanned restarts", handled at the bottom of the script.
#!/usr/bin/php
<?php
// cli-script.php
// for testing of the BASH script
exit (rand(95, 100));
/* normally we would return one of
# 97 - planned pause/restart
# 98 - planned restart
# 99 - planned stop, exit.
# anything else is an unplanned restart
*/
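You can watch the exit value from the shell to confirm what the wrapper will see:
php -q -f ./cli-script.php; echo $?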
I prefer to wait a few seconds before I restart the script, to avoid wasting CPU if the script being called instantly fails, and so would be immediately restarted.
#!/bin/bash
# runPHP-Worker.sh
# a shell script that keeps looping until an exit code is given
# if the script does an exit(0) or exits with an undeclared error, restart after a second
# if we've restarted in a planned fashion, we don't bother with any pause
# and for one particular code, we can exit the script entirely.
# The numbers 97, 98, 99 must match what is returned from the PHP script

nice php -q -f ./cli-script.php -- "$@"
ERR=$?

## Possibilities
# 97 - planned pause/restart
# 98 - planned restart
# 99 - planned stop, exit.
# 0  - unplanned restart (as returned by "exit;")
# - Anything else is also an unplanned pause/restart

if [ $ERR -eq 97 ]
then
    # a planned pause, then restart
    echo "97: PLANNED_PAUSE - wait 1";
    sleep 1;
    exec "$0" "$@";
fi
if [ $ERR -eq 98 ]
then
    # a planned restart - instantly
    echo "98: PLANNED_RESTART, no pause";
    exec "$0" "$@";
fi
if [ $ERR -eq 99 ]
then
    # planned complete exit
    echo "99: PLANNED_SHUTDOWN";
    exit 0;
fi

# unplanned exit, pause, and then restart
echo "unplanned restart: err:" $ERR;
echo "sleeping for 1 sec"
sleep 1
exec "$0" "$@"
If you don't want to do different things for each value, it really just comes down to:
#!/bin/bash
php -q -f ./cli-script.php -- "$@"
exec "$0" "$@"
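Usage is then the same as before, but you launch the wrapper instead of the PHP script directly (a sketch, assuming the wrapper's php line points at your bot_start_shell.php rather than the test script; nohup keeps it alive after the terminal closes):
chmod +x runPHP-Worker.sh
nohup ./runPHP-Worker.sh --server="servernamehere" > /dev/null 2>&1 &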
