How to tell curl to check file existence before download? - bash

I use this command to download a series of images:
curl -O --max-time 10 --retry 3 --retry-delay 1 http://site.com/image[0-100].jpg
Some images are corrupted, so I delete them.
for i in *.jpg; do jpeginfo -c $i || rm $i; done
How to tell curl to check file existence before download?
I can use this command to prevent curl from overwriting existing images:
chmod 000 *.jpg
But I don't want to re-download them.

If the target resource is static, curl has the -z option to download the target only if it is newer than a local copy.
Usage example:
curl -z image0.jpg http://site.com/image0.jpg
An example for your case:
for i in $(seq 0 100); do curl -z image$i.jpg -O --max-time 10 --retry 3 --retry-delay 1 http://site.com/image$i.jpg; done
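Note that -z takes its timestamp from the local file's modification time, so it can help to also pass -R/--remote-time, which stamps saved files with the server's time instead of the download time. A sketch, assuming the server sends Last-Modified headers:
for i in $(seq 0 100); do
    # -R makes later -z checks compare against the remote file's age
    # rather than the time of the download
    curl -z "image$i.jpg" -R -O --max-time 10 --retry 3 --retry-delay 1 \
        "http://site.com/image$i.jpg"
done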

No idea about doing it with curl, but you could check it with Bash before you run the curl command.
for FILE in FILE1 FILE2 …
do
    if [[ ! -e $FILE ]]; then
        # Curl command to download the image.
    fi
done
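Applied to the image series from the question, a minimal sketch could look like this (URL and range taken from the question):
for i in $(seq 0 100); do
    if [[ ! -e image$i.jpg ]]; then
        # only download images that are not already on disk
        curl -O --max-time 10 --retry 3 --retry-delay 1 "http://site.com/image$i.jpg"
    fi
done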

Related

return filename after downloading file with curl

I would like to capture, in a variable, the filename of a file that is downloaded using curl. I am using the --remote-name flag to preserve the original filename, as below.
My code:
file1=$(curl -O --remote-name 'https://url.com/download_file.tgz')
echo $file1
You can use the -w|--write-out switch of curl:
file1="$(curl -O --remote-name -s \
-w "%{filename_effective}" "https://url.com/download_file.tgz")"
echo "$file1"
file1=download_file.tgz
url="https://url.com/$file1" #encoding this might be necessary
curl -O --remote-name "$url"
echo "$file1"
If you need to know the filename in order to construct the URL in the first place, then you don't need curl to tell you which file it downloaded, unless there is not a 1:1 relationship between the basename of the URL and the file that was downloaded.
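For example, when the URL is built from a known filename, the name can be derived locally with bash parameter expansion instead of asking curl for it (a sketch; the URL is the placeholder from the question):
url="https://url.com/download_file.tgz"
file1=${url##*/}      # strips everything up to the last '/': download_file.tgz
curl -O -s "$url"
echo "$file1"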

Check if a remote file exists in bash

I am downloading files with this script:
parallel --progress -j16 -a ./temp/img-url.txt 'wget -nc -q -P ./images/ {}; wget -nc -q -P ./images/ {.}_{001..005}.jpg'
Would it be possible not to download the files, but just to check that they exist on the remote side and, if they do, create a dummy file instead of downloading?
Something like:
if wget --spider $url 2>/dev/null; then
#touch img.file
fi
should work, but I don't know how to combine this code with GNU Parallel.
Edit:
Based on Ole's answer I wrote this piece of code:
#!/bin/bash
do_url() {
url="$1"
wget -q -nc --method HEAD "$url" && touch ./images/${url##*/}
#get filename from $url
url2=${url##*/}
wget -q -nc --method HEAD ${url%.jpg}_{001..005}.jpg && touch ./images/${url2%.jpg}_{001..005}.jpg
}
export -f do_url
parallel --progress -a urls.txt do_url {}
It works, but it fails for some files, and I cannot find any consistency in why it works for some files and fails for others. Maybe it has something to do with the last filename. The second wget tries to access the correct URL, but the touch command after it simply does not create the desired file. The first wget always (correctly) downloads the main image without the _001.jpg, _002.jpg suffix.
Example urls.txt:
http://host.com/092401.jpg (works correctly, _001.jpg.._005.jpg are downloaded)
http://host.com/HT11019.jpg (does not work, only the main image is downloaded)
It is pretty hard to understand what it is you really want to accomplish. Let me try to rephrase your question.
I have urls.txt containing:
http://example.com/dira/foo.jpg
http://example.com/dira/bar.jpg
http://example.com/dirb/foo.jpg
http://example.com/dirb/baz.jpg
http://example.org/dira/foo.jpg
On example.com these URLs exist:
http://example.com/dira/foo.jpg
http://example.com/dira/foo_001.jpg
http://example.com/dira/foo_003.jpg
http://example.com/dira/foo_005.jpg
http://example.com/dira/bar_000.jpg
http://example.com/dira/bar_002.jpg
http://example.com/dira/bar_004.jpg
http://example.com/dira/fubar.jpg
http://example.com/dirb/foo.jpg
http://example.com/dirb/baz.jpg
http://example.com/dirb/baz_001.jpg
http://example.com/dirb/baz_005.jpg
On example.org these URLs exist:
http://example.org/dira/foo_001.jpg
Given urls.txt I want to generate the combinations with _001.jpg .. _005.jpg in addition to the original URL. E.g.:
http://example.com/dira/foo.jpg
becomes:
http://example.com/dira/foo.jpg
http://example.com/dira/foo_001.jpg
http://example.com/dira/foo_002.jpg
http://example.com/dira/foo_003.jpg
http://example.com/dira/foo_004.jpg
http://example.com/dira/foo_005.jpg
Then I want to test if these URLs exist without downloading the file. As there are many URLs I want to do this in parallel.
If the URL exists I want an empty file created.
(Version 1): I want the empty file created in a similar directory structure under the dir images. This is needed because some of the images have the same name, but in different dirs.
So the files created should be:
images/http:/example.com/dira/foo.jpg
images/http:/example.com/dira/foo_001.jpg
images/http:/example.com/dira/foo_003.jpg
images/http:/example.com/dira/foo_005.jpg
images/http:/example.com/dira/bar_000.jpg
images/http:/example.com/dira/bar_002.jpg
images/http:/example.com/dira/bar_004.jpg
images/http:/example.com/dirb/foo.jpg
images/http:/example.com/dirb/baz.jpg
images/http:/example.com/dirb/baz_001.jpg
images/http:/example.com/dirb/baz_005.jpg
images/http:/example.org/dira/foo_001.jpg
(Version 2): I want the empty file created in the dir images. This can be done because all the images have unique names.
So the files created should be:
images/foo.jpg
images/foo_001.jpg
images/foo_003.jpg
images/foo_005.jpg
images/bar_000.jpg
images/bar_002.jpg
images/bar_004.jpg
images/baz.jpg
images/baz_001.jpg
images/baz_005.jpg
(Version 3): I want the empty file created in the dir images, with the name from urls.txt. This can be done because only one of _001.jpg .. _005.jpg exists.
images/foo.jpg
images/bar.jpg
images/baz.jpg
#!/bin/bash
do_url() {
url="$1"
# Version 1:
# If you want to keep the folder structure from the server (similar to wget -m):
wget -q --method HEAD "$url" && mkdir -p images/"$2" && touch images/"$url"
# Version 2:
# If all the images have unique names and you want all images in a single dir
wget -q --method HEAD "$url" && touch images/"$3"
# Version 3:
# If all the images have unique names when _###.jpg is removed and you want all images in a single dir
wget -q --method HEAD "$url" && touch images/"$4"
}
export -f do_url
parallel do_url {1.}{2} {1//} {1/.}{2} {1/} :::: urls.txt ::: .jpg _{001..005}.jpg
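For reference, assuming GNU Parallel's standard replacement strings, the four arguments expand roughly like this for the input line http://example.com/dira/foo.jpg combined with the suffix _001.jpg:
# {1.}{2}  -> http://example.com/dira/foo_001.jpg   (URL to test, $1 in do_url)
# {1//}    -> http://example.com/dira               (directory part, $2)
# {1/.}{2} -> foo_001.jpg                           (flat filename, $3)
# {1/}     -> foo.jpg                               (name from urls.txt, $4)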
GNU Parallel takes a few ms per job. When your jobs are this short, the overhead will affect the timing. If none of your CPU cores are running at 100% you can run more jobs in parallel:
parallel -j0 do_url {1.}{2} {1//} {1/.}{2} {1/} :::: urls.txt ::: .jpg _{001..005}.jpg
You can also "unroll" the loop. This will save 5 overheads per URL:
do_url() {
url="$1"
# Version 2:
# If all the images have unique names and you want all images in a single dir
wget -q --method HEAD "$url".jpg && touch images/"$url".jpg
wget -q --method HEAD "$url"_001.jpg && touch images/"$url"_001.jpg
wget -q --method HEAD "$url"_002.jpg && touch images/"$url"_002.jpg
wget -q --method HEAD "$url"_003.jpg && touch images/"$url"_003.jpg
wget -q --method HEAD "$url"_004.jpg && touch images/"$url"_004.jpg
wget -q --method HEAD "$url"_005.jpg && touch images/"$url"_005.jpg
}
export -f do_url
parallel -j0 do_url {.} :::: urls.txt
Finally you can run more than 250 jobs: https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Running-more-than-250-jobs-workaround
You can use curl instead to check whether the URLs you are processing exist, without downloading any file:
if curl --head --fail --silent "$url" >/dev/null; then
    touch ./images/"${url##*/}"
fi
Explanation:
--fail will make the exit status nonzero on a failed request.
--head will avoid downloading the file contents.
--silent will prevent status messages and errors from being emitted by the check itself.
To solve the "looping" issue, you can do:
urls=( "${url%.jpg}"_{001..005}.jpg )
for url in "${urls[#]}"; do
if curl --head --silent --fail "$url" > /dev/null; then
touch .images/${url##*/}
fi
done
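To run this check in parallel over urls.txt, the same snippet can be wrapped in an exported function, following the pattern from the question (a sketch; the function name is illustrative):
do_check() {
    url="$1"
    # test the original URL and its _001.jpg .. _005.jpg variants
    for u in "$url" "${url%.jpg}"_{001..005}.jpg; do
        if curl --head --fail --silent "$u" >/dev/null; then
            touch ./images/"${u##*/}"
        fi
    done
}
export -f do_check
parallel --progress -a urls.txt do_check {}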
From what I can see, your question isn't really about how to use wget to test for the existence of a file, but rather on how to perform correct looping in a shell script.
Here is a simple solution for that:
urls=( "${url%.jpg}"_{001..005}.jpg )
for url in "${urls[#]}"; do
if wget -q --method=HEAD "$url"; then
touch .images/${url##*/}
fi
done
This invokes Wget with the --method=HEAD option. With a HEAD request, the server simply reports back whether the file exists or not, without returning any data.
Of course, with a large data set this is pretty inefficient. You're creating a new connection to the server for every file you're trying. Instead, as suggested in the other answer, you could use GNU Wget2. With wget2, you can test all of these in parallel, and use the new --stats-server option to find a list of all the files and the specific return code that the server provided. For example:
$ wget2 --spider --progress=none -q --stats-site example.com/{,1,2,3}
Site Statistics:

  http://example.com:
    Status    No. of docs
       404          3
         http://example.com/3  0 bytes (identity) : 0 bytes (decompressed), 238ms (transfer) : 238ms (response)
         http://example.com/1  0 bytes (gzip) : 0 bytes (decompressed), 241ms (transfer) : 241ms (response)
         http://example.com/2  0 bytes (identity) : 0 bytes (decompressed), 238ms (transfer) : 238ms (response)
       200          1
         http://example.com/  0 bytes (identity) : 0 bytes (decompressed), 231ms (transfer) : 231ms (response)
You can even get this data printed as CSV or JSON for easier parsing.
Just loop over the names?
for uname in "${url%.jpg}"_{001..005}.jpg
do
    if wget --spider "$uname" 2>/dev/null; then
        touch ./images/"${uname##*/}"
    fi
done
You could send a command via ssh to see if the remote file exists and cat it if it does:
ssh your_host 'test -e "somefile" && cat "somefile"' > somefile
You could also try scp, which supports glob expressions and recursion.
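For example (a sketch; host and paths are placeholders), quoting the glob so that the remote shell expands it rather than the local one:
scp 'your_host:/remote/path/somefile_*.jpg' ./images/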

Using wget in a shell script: trouble with a variable that has \

I'm trying to run a script for pulling finance history from yahoo. Boris's answer from this thread
wget can't download yahoo finance data any more
works for me ~2 out of 3 times, but fails if the crumb returned from the cookie has a "\" character in it.
Code that sometimes works looks like this
#!usr/bin/sh
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
wget --no-check-certificate --save-cookies=cookie.txt https://finance.yahoo.com/quote/$symbol/?p=$symbol -O C:/trip/stocks/stocknamelist/crumb.store
crumb=$(grep 'root.*App' crumb.store | sed 's/,/\n/g' | grep CrumbStore | sed 's/"CrumbStore":{"crumb":"\(.*\)"}/\1/')
echo $crumb
fileloc=$"https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo $fileloc
wget --no-check-certificate --load-cookies=cookie.txt $fileloc -O c:/trip/stocks/temphistory/hs$symbol.csv
rm cookie.txt crumb.store
But that doesn't seem to be processed by wget the way I intend either; it seems to be interpreted as described here:
https://askubuntu.com/questions/758080/getting-scheme-missing-error-with-wget
Any suggestions on how to pass the $crumb variable into wget so that wget doesn't error out if $crumb has a "\" character in it?
Edited to show the full script. To clarify, I've got Cygwin installed with the wget package. I call the script from the cmd prompt as follows (in this example the script above is named "stocknamedownload.sh", the stock symbol I'm downloading is "A", and the start date is 19800101):
c:\trip\stocks\StockNameList>bash stocknamedownload.sh A 19800101
This script seems to work fine - unless the crumb returned contains a "\" character in it.
The following implementation appears to work 100% of the time -- I'm unable to reproduce the claimed sporadic failures:
#!/usr/bin/env bash
set -o pipefail
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
# store complete webpage text in a variable
page_text=$(curl --fail --cookie-jar cookies \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol") || exit
# extract the JSON used by JavaScript in the page
app_json=$(grep -e 'root.App.main = ' <<<"$page_text" \
| sed -e 's#^root.App.main = ##' \
-e 's#[;]$##') || exit
# use jq to extract the crumb from that JSON
crumb=$(jq -r \
'.context.dispatcher.stores.CrumbStore.crumb' \
<<<"$app_json" | tr -d '\r') || exit
# Perform our actual download
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
curl --fail --cookie cookies "$fileloc" >"hs$symbol.csv"
Note that the tr -d '\r' is only necessary when using a native-Windows jq mixed with an otherwise native-Cygwin set of tools.
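Assuming the replacement is saved under the script's original name, it is invoked the same way as before; the only difference is that the CSV now lands in the current directory as hs$symbol.csv:
bash stocknamedownload.sh A 19800101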
You are adding quotes to the value of the variable instead of quoting the expansion. You are also trying to use tools that don't know what JSON is to process JSON; use jq.
wget --no-check-certificate \
--save-cookies=cookie.txt \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol" \
-O C:/trip/stocks/stocknamelist/crumb.store
# Something like this; it's hard to reverse engineer the structure
# of crumb.store from your pipeline.
crumb=$(jq -r '.CrumbStore.crumb' crumb.store)
echo "$crumb"
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo "$fileloc"
wget --no-check-certificate \
--load-cookies=cookie.txt "$fileloc" \
-O c:/trip/stocks/temphistory/hs$symbol.csv

Creating a loop with curl -F argument

I have this bash script code:
#!/bin/bash
FILES=/home/user/Downloads/asd/
for f in $FILES
do
curl -F dir="#/home/user/Downloads/asd;$f" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
done
Inside the asd folder there are 6 files, and I want them to be uploaded one by one, each passed as the argument of -F "dir=@....".
When I run my code I get the error:
Warning: skip unknown form field: /home/user/Downloads/asd/
curl: (43) A libcurl function was given a bad argument
Here is a working version of the code for a single file:
curl -F dir="#/home/user/Downloads/asd/count.txt" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
So I want all the files in the asd folder to be read and uploaded like this. I don't see what's wrong with my do loop.
The issues appear to be that you only give a path, not a glob (*) matching all files in that path, and that there is a stray semicolon (;) in your path:
#!/bin/bash
FILES=/home/user/Downloads/asd/*
for f in $FILES
do
curl -F dir="#$f" -F Match=3 -F "Name=DrKla" \
-F countNo=1 -F outputFormat=json "http://somelink.com"
done
I'm not sure what the @ is for or if it is needed, but $f should already contain the path.
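If the directory might also contain subdirectories, a file test keeps them out of the upload loop (a sketch based on the answer above; the endpoint and form fields are unchanged from the question):
#!/bin/bash
for f in /home/user/Downloads/asd/*
do
    [ -f "$f" ] || continue   # skip anything that is not a regular file
    curl -F dir="@$f" -F Match=3 -F "Name=DrKla" \
         -F countNo=1 -F outputFormat=json "http://somelink.com"
done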

If proxy is down, get a new one

I'm writing my first bash script:
LANG="en_US.UTF8" ; export LANG
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
This script also uses curl, but if the proxy is down it gives me this error:
failed to connect PROXY:PORT
How can I make the bash script run again, so it can get another proxy address from proxy.txt?
Thanks in advance.
Run it in a loop until the curl succeeds, for example:
export LANG="en_US.UTF8"
while true; do
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
Notice the && break at the end of the curl command.
That is, if the curl succeeds, break out of the infinite loop.
If you have multiple curl commands and you need all of them to succeed,
then chain them all together with &&, and add the break after the last one:
curl url1 && \
curl url2 && \
break
Also, as @Inian pointed out,
you could use the --proxy flag to pass a proxy URL to curl without the extra step of setting https_proxy, for example:
curl --proxy "$(shuf -n 1 proxy.txt)" --data "mydata${RUID}" --user-agent "myuseragent"
Finally, note that due to the randomness, a randomly selected proxy may come up more than once before you find one that works.
To avoid that, you could iterate over the shuffled proxies instead of using an infinite loop:
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
ruid=$(php -f randuid.php)
curl --proxy "$proxy" --data "mydata${ruid}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
I also lowercased your user-defined variables,
as capitalization is not recommended for those.
I know I accepted @janos' answer, but since I can't edit it, I'm going to add this:
response=$(curl --proxy "$proxy" --silent --write-out "\n%{http_code}\n" https://myurl.com/url)
status_code=$(echo "$response" | sed -n '$p')
html=$(echo "$response" | sed '$d')
case "$status_code" in
    200) echo 'Working!'
         ;;
    *)   echo 'Not working, trying again!'
         exec "$0" "$@"
         ;;
esac
This will run my script again if it gives a 503 status code, which is what I wanted :)
And with @janos' code it will run again if the proxy is not working.
Thank you everyone, I achieved what I wanted.
