Time Limit for a command to run in cmd - windows

I'm writing a batch file that runs several commands and prints the output of each one. Some of the commands aren't responding, which makes the batch file take much longer to finish. I want each command to be given 10 seconds to run; if no output comes back in that time, the command should be aborted and the next command in the batch file should run. For example:
curl URL1
curl URL2
curl URL3
curl URL4
If URL2 is not responding, that call takes a long time to finish. I want each curl command to be limited to 10 seconds, then aborted so the next curl command can run.

Since you say you're writing a batch file I'm going to assume that you're using the Windows port of the cURL commandline utility, not the alias curl for the PowerShell cmdlet Invoke-WebRequest.
The cURL utility has 2 parameters that control timeouts:
--connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the server to take. This only limits the connection phase, once curl has connected this option is of no more use. See also the -m/--max-time option.
[...]
-m/--max-time <seconds>
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. See also the --connect-timeout option.
So you should be able to run your statements like this:
curl --connect-timeout 10 --max-time 10 URL
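Applied to the placeholder URLs from the question, the batch file would look something like this:
curl --connect-timeout 10 --max-time 10 URL1
curl --connect-timeout 10 --max-time 10 URL2
curl --connect-timeout 10 --max-time 10 URL3
curl --connect-timeout 10 --max-time 10 URL4
--max-time caps the whole request at 10 seconds, so each line moves on to the next one after at most 10 seconds, whether or not a response came back.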

Related

Continuing script after long command executed over SSH

My local computer runs a bash script that executes another script on a remote machine over SSH, like so:
#!/bin/bash
# do stuff
ssh remote@remote "/home/remote/path/to/script.sh"
echo "Done"
# do other stuff
script.sh takes around 15 minutes to run. The connection is never lost and script.sh runs to completion (down to its very last line), yet Done is never echoed (nor is the other stuff executed).
Notes:
I've experimented with screen and nohup, but as I said, the connection is stable and script.sh runs all the way through (it doesn't seem to be dropped).
I need script.sh to be finished before I can move on to the other stuff, so I can't simply run the script and detach (or I would need some way of knowing when the script is over before starting the other stuff).
Everything works fine if I use a dummy script that lasts only 5 minutes (instead of 15).
Edit:
script.sh used for testing:
#!/bin/bash
touch /tmp/start
echo "Start..." & sleep 900; touch /tmp/endofscript
Adding -o ServerAliveInterval=60 fixes the issue.
The ServerAliveInterval option prevents your router from thinking the SSH connection is idle by sending packets over the network between your device and the destination server every 60 seconds.
For a script that takes several minutes to run and produces no output, this keeps the connection alive and prevents it from timing out and being left hanging.
Two options:
ssh -o ServerAliveInterval=60 remote@remote "/home/remote/path/to/script.sh"
Or add the following lines to ~/.ssh/config on the local computer (replace remote with the name of your remote, or use * to enable it for any remote):
Host remote
    ServerAliveInterval 60
For additional information:
What do options ServerAliveInterval and ClientAliveInterval in sshd_config do exactly?
Have you tried setting set -xv in the scripts, or executing both scripts with bash -xv script.sh to get the details of the script execution?

Ansible ad-hoc command background not working

It is my understanding that running ansible with -B will put the process in the background and I will get the console back. I don't know if I am using it wrong, or it is not working as expected. What I expect is to have the sleep command initiate on all three computers and then the prompt will be available to me to run another command. What happens is that I do not get access to the console until the command completes (in this case 2 minutes).
Is something wrong, am I misunderstanding what the -B does or am I doing it wrong?
There are two parameters to configure async tasks in Ansible: async and poll.
async in playbooks (-B in ad-hoc) – total number of seconds you allow the task to run.
poll in playbooks (-P in ad-hoc) – period in seconds for how often you want to check for the result.
So if you just need a fire-and-forget ad-hoc command, use -B 3600 -P 0: allow up to an hour of execution and don't poll for the result.
By default -P is 15, so Ansible doesn't exit but checks on your job every 15 seconds.
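As a concrete illustration (the inventory pattern all and the sleep command are stand-ins based on the question, not your exact invocation), a fire-and-forget ad-hoc call might look like:
ansible all -B 3600 -P 0 -a "sleep 120"
and the same task with the default polling, which is what keeps the prompt busy until the command finishes:
ansible all -B 3600 -P 15 -a "sleep 120"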

curl - not working properly inside shell script program

I wrote a shell script with the following set of commands.
sudo service apache2 reload
sudo service apache2 restart
curl -v http://api.myapi.com/API/firstApi #1
curl -v http://api.myapi.com/API/secondApi #2
echo "Success"
The second curl call (#2) takes almost a minute to finish its processing. When I run these commands from the command line they work fine: the second one takes almost a minute and then prints the response. But when I execute them from the shell script, it exits very quickly and the expected processing does not happen in the backend, even though a response is printed. I don't have any clue why this is happening. I tried &&, but that didn't work.
Any clue on this?

Wait between wget downloads

I'm trying to download a directory (including subdirectories) from a website.
I'm using:
wget -r -e robots=off --no-parent --reject "index.html*" http://example.com/directory1/
The problem is that the server refuses connections after a bit; I think there are too many connections within a short amount of time. So what I'd like to do is insert a wait time (5 seconds) between each download/lookup. Is that possible? If so, how?
You can use --wait. From wget(1):
-w seconds
--wait=seconds
Wait the specified number of seconds between the retrievals. Use
of this option is recommended, as it lightens the server load by
making the requests less frequent. Instead of in seconds, the time
can be specified in minutes using the "m" suffix, in hours using
"h" suffix, or in days using "d" suffix.
Specifying a large value for this option is useful if the network
or the destination host is down, so that Wget can wait long enough
to reasonably expect the network error to be fixed before the
retry. The waiting interval specified by this function is
influenced by "--random-wait", which see.
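Applied to the command from the question, it would look something like this (--random-wait is optional; it just varies the delay around the 5 seconds so the requests look less mechanical):
wget -r -e robots=off --no-parent --reject "index.html*" --wait=5 --random-wait http://example.com/directory1/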
I didn't know this either, but I found this answer in 15 seconds by using the wget manpage:
Type man wget.
You can use / to search, so I used /wait.
It's the first hit!
Press q to quit.

Need Help with Unix bash "timer" for mp3 URL Download Script

I've been using the following Unix bash script:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
done < ~/Desktop/URLs.txt
to download and rename a bunch of mp3 files from URLs listed in "URLs.txt". It works well (thanks to StackOverflow users), but due to a suspected server quantity/time download limit, it only lets me access a range of 40–50 files from my URL list.
Is there a way to work around this by adding a "timer" inside the while loop so it downloads 1 file per "X" seconds?
I found another related question, here:
How to include a timer in Bash Scripting?
but I'm not sure where to add the "sleep [number of seconds]"... or even if "sleep" is really what I need for my script...?
Any help enormously appreciated — as always.
Dave
curl has some pretty awesome command-line options (documentation), for example, --limit-rate will limit the amount of bandwidth that curl uses, which might completely solve your problem.
For example, replace the curl line with:
curl --limit-rate 200K "$mp3" > ~/Desktop/URLs/$n.mp3
This would limit transfers to an average of 200K per second, which would download a typical 5MB MP3 file in 25 seconds; you could experiment with different values until you find the maximum speed that works.
You could also try a combination of --retry and --retry-delay so that when and if a download fails, curl waits and then tries again after a certain amount of time.
For example, replace the curl line with:
curl --retry 30 "$mp3" > ~/Desktop/URLs/$n.mp3
This will transfer the file. If the transfer fails, it will wait a second and try again. If it fails again, it will wait two seconds. If it fails again, it will wait four seconds, and so on, doubling the waiting time until it succeeds. The "30" means it will retry up to 30 times, and it will never wait more than 10 minutes. You can learn this all at the documentation link I gave you.
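If you would rather have a fixed pause between attempts than the doubling back-off, --retry-delay overrides it; a combined line (the 200K rate and 10-second delay are only starting points to experiment with) might look like:
curl --limit-rate 200K --retry 30 --retry-delay 10 "$mp3" > ~/Desktop/URLs/$n.mp3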
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3 &   # start this download in the background
    ((n++))
    if ! ((n % 4)); then
        wait      # let the background downloads finish
        sleep 5   # then pause before starting the next batch
    fi
done < ~/Desktop/URLs.txt
This will spawn at most 4 instances of curl and then wait for them to complete before it spawns 4 more.
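If you just want the plain one-download-then-wait behaviour you asked about, rather than batches of four, the sleep simply goes inside the loop, right after the curl call; a minimal sketch of your script with an arbitrary 10-second pause:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
    sleep 10   # pause between downloads; tune this value
done < ~/Desktop/URLs.txt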
A timer?
Like your crontab?
man cron
You know roughly how much they let you download; just count the disk usage of the files you did get.
That tells you the transfer you are allowed. You need that figure, and you will need the PID of your script.
ps aux | grep "$progname" | awk '{print $2}'
or something like that. The secret sauce here is that you can suspend with
kill -SIGSTOP PID
and resume with
kill -SIGCONT PID
So the general method would be:
Put the URLs in an array or queue or whatever bash lets you have.
Process a URL.
Increment the transfer counter.
When the transfer counter gets close to the limit, kill -SIGSTOP MYPID. You are suspended.
In your crontab, send SIGCONT to your script after a minute/hour/day, whatever.
Continue processing.
Repeat until done.
Just don't log out or you'll need to do the whole thing over again, although if you used perl it would be trivial.
Disclaimer: I am not sure if this is an exercise in bash or whatnot; I confess freely that I see the answer in perl, which is always my choice outside of a REPL. Code in Bash long enough, or heaven forbid, Zsh (my shell), and you will see why Perl was so popular. Ah, memories...
Disclaimer 2: Untested, drive-by, garbage methodology here, only made possible because you have an idea what that transfer limit might be. Obviously, if you have ssh, use ssh -D PORT you@host and pull the mp3s out of the proxy half the time.
In my own defense, if you slow-pull the URLs with sleep you'll be connected for a while. Perhaps "they" might notice that. Suspend and resume and you should only be connected while grabbing tracks, and gone otherwise.
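A very rough sketch of that suspend/resume idea, untested and assuming you know roughly how many bytes the server tolerates per session (the 200 MB quota below is purely a placeholder), with a separate cron job later sending SIGCONT to the stopped PID:
#!/bin/bash
mkdir -p ~/Desktop/URLs
quota=$((200 * 1024 * 1024))   # placeholder: bytes you think "they" allow per session
transferred=0
n=1
while read -r mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    transferred=$((transferred + $(wc -c < ~/Desktop/URLs/$n.mp3)))
    ((n++))
    if ((transferred >= quota)); then
        transferred=0
        kill -SIGSTOP $$   # suspend ourselves; cron later runs: kill -SIGCONT <this PID>
    fi
done < ~/Desktop/URLs.txt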
Not so much an answer as an optimization. If you can consistently get the first few URLs but it times out on the later ones, perhaps you could trim your URL file as the mp3s are successfully received?
That is, as 1.mp3 is successfully downloaded, strip it from the list:
tail -n +2 url.txt > url2.txt && mv -f url2.txt url.txt
Then the next time the script runs, it'll begin from 2.mp3
If that works, you might just set up a cron job to periodically execute the script over and over, taking bites at a time.
It just occurred to me that you're programmatically numbering the mp3s, and curl might clobber some of them on restart, since every time the script runs it starts counting at 1.mp3 again. Something to be aware of if you go this route.
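A hedged sketch combining both ideas, assuming the output folder only ever holds these numbered mp3s: count what is already on disk so the numbering resumes instead of clobbering, and trim URLs.txt as you go so an interrupted run picks up where it left off:
#!/bin/bash
mkdir -p ~/Desktop/URLs
# resume numbering after whatever a previous run already saved
n=$(( $(ls ~/Desktop/URLs | wc -l) + 1 ))
while read -r mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    # drop the line we just fetched so the next run starts with the first unfetched URL
    tail -n +2 ~/Desktop/URLs.txt > ~/Desktop/URLs.tmp && mv -f ~/Desktop/URLs.tmp ~/Desktop/URLs.txt
    ((n++))
done < ~/Desktop/URLs.txt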
