This question already has answers here:
Timeout command on Mac OS X?
(6 answers)
Closed 1 year ago.
UPDATE:
This question has been closed, so I asked a follow-up question:
Bash how to run multiple wget with output by sequence (NOT Parallel) and delete the wget file for speedtest purpose
I used the timeout solution:
#!/bin/bash
function speedtest() {
    local key=$1
    local url=$2
    # stop the download after 10 seconds
    timeout 10 wget "$url"
    echo -e "\033[40;32;1m$key is completed.\033[0m"
}
speedtest "Lisbon" "https://lg-lis.fdcservers.net/100MBtest.zip"
speedtest "London" "https://lg-lon.fdcservers.net/100MBtest.zip"
speedtest "Madrid" "https://lg-mad.fdcservers.net/100MBtest.zip"
speedtest "Paris" "https://lg-par2.fdcservers.net/100MBtest.zip"
When I run the bash script, this is the output.
» ./wget_speedtest.sh [2021/05/8 |15:42:49]
Redirecting output to ‘wget-log’.
Lisbon is completed.
Redirecting output to ‘wget-log.1’.
London is completed.
Redirecting output to ‘wget-log.2’.
Madrid is completed.
Redirecting output to ‘wget-log.3’.
Paris is completed.
I expected to see the KB/s reached after running each wget for 10 seconds.
I have a list of files to download with wget and want to see the download speed. I just want to run each download for about 10 seconds, then print out the resulting download speed.
I have 20 different server files to test. My goal is to see how many KB/s were downloaded during those 10 seconds.
e.g.
> wget https://lg-lis.fdcservers.net/100MBtest.zip
--2021-05-08 13:37:37-- https://lg-lis.fdcservers.net/100MBtest.zip
Resolving lg-lis.fdcservers.net (lg-lis.fdcservers.net)... 50.7.43.4
Connecting to lg-lis.fdcservers.net (lg-lis.fdcservers.net)|50.7.43.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/zip]
Saving to: ‘100MBtest.zip’
100MBtest.zip 0%[ ] 679.66K 174KB/s eta 9m 26s ^C
This is my bash script:
#!/bin/bash
function speedtest() {
    local key=$1
    local url=$2
    ( cmdpid=$$
      (sleep 10; kill $cmdpid; rm -f 100M) &
      while ! wget "$url"
      do
          echo -e "\033[40;32;1m$key for 10 seconds done.\033[0m"
      done )
}
speedtest "Lisbon" "https://lg-lis.fdcservers.net/100MBtest.zip"
speedtest "London" "https://lg-lon.fdcservers.net/100MBtest.zip"
speedtest "Madrid" "https://lg-mad.fdcservers.net/100MBtest.zip"
speedtest "Paris" "https://lg-par2.fdcservers.net/100MBtest.zip"
However, the above code does not work: the download continues in the background and the output is redirected to wget-log.
> ./wget_speedtest.sh
--2021-05-08 13:41:56-- https://lg-lis.fdcservers.net/100MBtest.zip
Resolving lg-lis.fdcservers.net (lg-lis.fdcservers.net)... 50.7.43.4
Connecting to lg-lis.fdcservers.net (lg-lis.fdcservers.net)|50.7.43.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: ‘100M’
100M 5%[==> ] 5.84M 1.02MB/s eta 79s [1] 21251 terminated ./wget_speedtest.sh
Redirecting output to ‘wget-log’.
Use the timeout command (part of GNU coreutils):
$ timeout 10 wget https://lg-lis.fdcservers.net/100MBtest.zip
--2021-05-07 22:53:20-- https://lg-lis.fdcservers.net/100MBtest.zip
Resolving lg-lis.fdcservers.net (lg-lis.fdcservers.net)... 50.7.43.4
Connecting to lg-lis.fdcservers.net (lg-lis.fdcservers.net)|50.7.43.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/zip]
Saving to: ‘100MBtest.zip’
100MBtest.zip 0%[ ] 23.66K 10.8KB/s
$
If the specified time is exceeded, timeout exits with a status of 124.
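If you want to act on that inside the speedtest function from the question, a minimal sketch (assuming $key and $url are the function's parameters, as above) would be:
timeout 10 wget "$url"
if [ $? -eq 124 ]; then
    # the 10-second limit was reached before the download finished
    echo "$key: stopped after 10 seconds"
fi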
See timeout --help or info coreutils timeout for more information.
If you're on MacOS, see Timeout command on Mac OS X? for some suggested alternatives.
Related
tac FILE | sed -n -e 's/^.*URL: //p' | SEND TO WGET HERE
This one-liner above gives a list of URLs from a file, one per line. I am trying to stream/pipe these into wget directly. Each URL is a thumbnail picture that I need to download in bulk. I am trying to write this one-liner to facilitate that process.
This one-liner above gives a list of URLs from a file, one per line. I am trying to (...) pipe these into wget directly.
To do so you can harness the -i file option; if you give - as the file, wget will read URLs from standard input. From the wget man page:
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (...) If this function is used, no URLs need be present on the command line. (...)
So in your case
command | wget -i -
where command is any command whose output is one URL per line.
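For the pipeline from the question that would be, for example:
tac FILE | sed -n -e 's/^.*URL: //p' | wget -i -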
Use xargs to set the argument of a command from standard input:
tac FILE | sed -n -e 's/^.*URL: //p' | xargs wget
Here each word read from standard input by xargs is passed as a positional argument to wget.
Demo:
$ cat FILE
URL: https://google.com https://netflix.com
asdfdas URL: https://stackoverflow.com
$ tac FILE | sed -n -e 's/^.*URL: //p' | xargs wget
--2021-12-30 12:53:17-- https://stackoverflow.com/
Resolving stackoverflow.com (stackoverflow.com)... 151.101.65.69, 151.101.193.69, 151.101.129.69, ...
Connecting to stackoverflow.com (stackoverflow.com)|151.101.65.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.7’
index.html.7 [ <=> ] 175,76K 427KB/s in 0,4s
2021-12-30 12:53:18 (427 KB/s) - ‘index.html.7’ saved [179983]
--2021-12-30 12:53:18-- https://google.com/
Resolving google.com (google.com)... 142.250.184.142, 2a00:1450:4017:80c::200e
Connecting to google.com (google.com)|142.250.184.142|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.google.com/ [following]
--2021-12-30 12:53:18-- https://www.google.com/
Resolving www.google.com (www.google.com)... 142.250.187.100, 2a00:1450:4017:807::2004
Connecting to www.google.com (www.google.com)|142.250.187.100|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://consent.google.com/ml?continue=https://www.google.com/&gl=GR&m=0&pc=shp&hl=el&src=1 [following]
--2021-12-30 12:53:19-- https://consent.google.com/ml?continue=https://www.google.com/&gl=GR&m=0&pc=shp&hl=el&src=1
Resolving consent.google.com (consent.google.com)... 216.58.206.206, 2a00:1450:4017:80c::200e
Connecting to consent.google.com (consent.google.com)|216.58.206.206|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.8’
index.html.8 [ <=> ] 12,16K --.-KB/s in 0,01s
2021-12-30 12:53:19 (1,25 MB/s) - ‘index.html.8’ saved [12450]
--2021-12-30 12:53:19-- https://netflix.com/
Resolving netflix.com (netflix.com)... 54.155.246.232, 18.200.8.190, 54.73.148.110, ...
Connecting to netflix.com (netflix.com)|54.155.246.232|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.netflix.com/ [following]
--2021-12-30 12:53:19-- https://www.netflix.com/
Resolving www.netflix.com (www.netflix.com)... 54.155.178.5, 3.251.50.149, 54.74.73.31, ...
Connecting to www.netflix.com (www.netflix.com)|54.155.178.5|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://www.netflix.com/gr-en/ [following]
--2021-12-30 12:53:20-- https://www.netflix.com/gr-en/
Reusing existing connection to www.netflix.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html.9’
index.html.9 [ <=> ] 424,83K 1003KB/s in 0,4s
2021-12-30 12:53:21 (1003 KB/s) - ‘index.html.9’ saved [435027]
FINISHED --2021-12-30 12:53:21--
Total wall clock time: 4,1s
Downloaded: 3 files, 613K in 0,8s (725 KB/s)
I have the following command
sudo wget --output-document=/dev/null http://speedtest.pixelwolf.ch
which outputs:
--2016-03-27 17:15:47-- http://speedtest.pixelwolf.ch/
Resolving speedtest.pixelwolf.ch (speedtest.pixelwolf.ch)... 178.63.18.88, 2a02:418:3102::6
Connecting to speedtest.pixelwolf.ch (speedtest.pixelwolf.ch)|178.63.18.88|:80... connected.
HTTP Request sent, awaiting response... 200 OK
Length: 85 [text/html]
Saving to: `/dev/null`
100%[======================>]85 --.-K/s in 0s
2016-03-27 17:15:47 (8.79 MB/s) - `/dev/null` saved [85/85]
I'd like to be able to parse the (8.79 MB/s) from the last line and store it in a file (or any other way I can get it into a local PHP file easily). I tried to store the full output by changing my command to --output-document=/dev/speedtest, however this just saved "Could not reach website" in the file and not the terminal output of the command.
Not quite sure where to start with this, so any help would be awesome.
Not sure if it helps, but my intention is for this stored value (8.79 in this instance) to be read and handled by a PHP file every 30 seconds, which I'll achieve with: while true; do (run speed test and save speed variable to a file cmd); php handleSpeedTest.php; sleep 5; done, where handleSpeedTest.php will be able to read that stored value and handle it accordingly.
I changed the URL to one that works. Redirected stderr onto stdout. Used grep --only-matching (-o) and a regex.
sudo wget -O /dev/null http://www.google.com 2>&1 | grep -o '\([0-9.]\+ [KM]B/s\)'
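To feed that value to the PHP script mentioned in the question, a minimal sketch of the polling loop might look like this (speed.txt is just an illustrative file name, and the interval is up to you):
while true; do
    # keep only the rate from wget's summary line and store it for PHP to read
    wget -O /dev/null http://www.google.com 2>&1 \
        | grep -o '[0-9.]\+ [KM]B/s' > speed.txt
    php handleSpeedTest.php
    sleep 30
done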
I am using the following command in my bash script to trigger a Jenkins build:
wget --no-check-certificate "http://<jenkins_url>/view/some_view/job/some_prj/buildWithParameters?token=xxx"
Output:
HTTP request sent, awaiting response... 201 Created
Length: 0
Saving to: “buildWithParameters?token=xxx”
[ <=> ] 0 --.-K/s in 0s
2015-02-20 10:10:46 (0.00 B/s) - “buildWithParameters?token=xxx” saved [0/0]
And then it creates an empty file: “buildWithParameters?token=xxx”
My question is: why wget creates this file and how to turn that functionality off?
Most simply:
wget --no-check-certificate -O /dev/null http://foo
this will make wget save the file to /dev/null, effectively discarding it.
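Applied to the command from the question, that would be:
wget --no-check-certificate -O /dev/null "http://<jenkins_url>/view/some_view/job/some_prj/buildWithParameters?token=xxx"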
I am running a web app on a Tomcat server. There is a hard-to-detect problem in the server code that causes it to crash once or twice every day. I will dig in and correct it when I have time, but until then, restarting Tomcat (/etc/init.d/tomcat7 restart) or simply rebooting the machine whenever the problem occurs seems like a good enough solution. I want to detect the liveness of the server with wget rather than with grep or something else, because even though Tomcat is running, my service may be down.
wget localhost:8080/MyService/
outputs
--2012-12-04 14:10:20-- http://localhost:8080/MyService/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2777 (2.7K) [text/html]
Saving to: “index.html.3”
100%[======================================>] 2,777 --.-K/s in 0s
2012-12-04 14:10:20 (223 MB/s) - “index.html.3” saved [2777/2777]
when my service is up. And outputs
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:8080... failed: Connection refused.
or it just gets stuck after printing
--2012-12-04 14:07:34-- http://localhost:8080/MyService/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response...
otherwise. Can you offer me a shell script with a cron job, or something else, to do that? I would prefer not to use cron if there is an alternative.
Why not use cron for that? Anyway, I googled for tomcat + watchdog and found the following blog post.
It should give you an idea of how to solve your problem.
hth
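As a rough sketch of such a watchdog, using only the URL and restart command from the question (how you schedule it, via cron or a loop, is up to you; the 10-second threshold is an assumption):
#!/bin/bash
# restart Tomcat if the service does not answer within 10 seconds
if ! wget -q --spider --tries=1 --timeout=10 http://localhost:8080/MyService/; then
    /etc/init.d/tomcat7 restart
fi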
OK, I found the solution: add a script under /etc/rc5.d/, as described here: http://www.linuxforums.org/forum/mandriva-linux/27687-how-make-command-run-startup.html
#!/bin/bash
echo "Restarted: " `date` >> /home/ec2-user/reboot.log
# wait 10 minutes before the first check
sleep 600
while [ 1 ]
do
    var=`php -f /home/ec2-user/lastEntry.php`
    # reboot if the value reported by lastEntry.php is more than 3 characters long,
    # i.e. the last entry is too old
    if [ ${#var} -gt 3 ]
    then
        echo "Restarting: " `date` >> /home/ec2-user/reboot.log
        reboot
    fi
    sleep 60
done
where the query in lastEntry.php checks a table to see whether there is any entry in the last 10 minutes.
<?php
// prints how long ago the most recent row was inserted
mysql_connect('ip', 'uname', 'pass') or die(mysql_error());
mysql_select_db("db") or die(mysql_error());
$query = "SELECT min(now()-time) as last FROM table;";
$result = mysql_query($query) or die(mysql_error());
$row = mysql_fetch_array($result);
echo $row['last'];
?>
There are more straightforward ways to check whether Tomcat is running, but this checks the service's most recent output, so it is a more accurate check.
I have a shell script; its content is below.
I want the screen output to be redirected to templog. By screen output I mean not the HTML content, but output like:
--2012-10-30 15:53:14-- http://www.youtube.com/results?search_query=pig
Resolving www.youtube.com... 173.194.34.5, 173.194.34.6, 173.194.34.7, ...
Connecting to www.youtube.com|173.194.34.5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: “search_result”
[ <=> ] 108,503 --.-K/s in 0.07s
2012-10-30 15:53:15 (1.40 MB/s) - “search_result” saved [108503]
but it doesn't work.
I tried 2>&1 | cat > templog
and it is still not OK.
You can copy the content into a wget.sh file and run it; you will notice that the output is not redirected to templog.
How do I deal with this to achieve my goal?
Thanks.
keyword=pig
page_nr=3
wget -O search_result http://www.youtube.com/results?search_query=${keyword}&page=${page_nr} > templog
You just need to put quotes around your URL. wget prints its progress to stderr, so you also have to redirect stderr instead of stdout (using 2> instead of >):
keyword=pig
page_nr=3
wget -O search_result "http://www.youtube.com/results?search_query=${keyword}&page=${page_nr}" 2> templog
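With that, the page itself is saved to search_result and the progress output ends up in templog, which you can check afterwards, for example with:
cat templog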