I was wondering if cURL allows you to do the same thing as wget -N does - which will only download / overwrite a file if the existing file on the client side is older than the one on the server.
I realise this question is old now, but just in case someone else is looking for the answer, it seems that cURL can indeed achieve something similar to wget -N.
I was just looking for an answer to this question today myself, and found elsewhere that cURL does have a time-condition option. If Google brings you here first, as it did me, then I hope this answer might save you some time looking. According to curl --help, there is a time-cond flag:
-z, --time-cond <time> Transfer based on a time condition
The other part I needed, in order to make it like wget -N, is to make it try to preserve the timestamp. That is done with the -R option:
-R, --remote-time Set the remote file's time on the local output
We can use these to download "$file" only when the current local "$file" timestamp is older than the server's file timestamp, in this form:
curl -R -o "$file" -z "$file" "$serverurl"
So, for example, I use it to check whether there is a newer Cygwin installer, like this:
curl -R -o "C:\cygwin64\setup-x86_64.exe" -z "C:\cygwin64\setup-x86_64.exe" "https://www.cygwin.com/setup-x86_64.exe"
cURL doesn't have the same kind of mirroring support that wget has built in, but there is one option that should make it fairly easy to implement this yourself with a little wrapping logic: the --remote-time option:
-R/--remote-time
When used, this will make libcurl attempt to figure out the
timestamp of the remote file, and if that is available make the
local file get that same timestamp.
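For example, a minimal wrapper sketch around those two options (the file list and base URL below are made up for illustration) that behaves roughly like wget -N for a handful of files:

#!/bin/sh
# Hypothetical sketch: fetch each file only if the server copy is newer than
# the local one (-z), and stamp the download with the server's mtime (-R).
base="https://www.example.com/files"
for f in one.txt two.txt; do
    curl -R -o "$f" -z "$f" "$base/$f"
done

If a file does not exist locally yet, the -z condition is simply ignored and the file is downloaded.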
I've started playing around with curl a few days ago. For some reason I couldn't figure out how to achieve the following.
I would like to get the original filename with the output option
-O -J
AND append some kind of variable to it, like a timestamp, source path or whatever. This would avoid the file-overwriting issue and also make further work with the file easier.
Here are a few specs about my setup
Win7 x64
curl 7.37.0
Admin user
just the command line, no PHP or scripts or so on
C:>curl --retry 1 --cert c:\certificate.cer --URL https://blabla.com/pdf-file --user username:password --cookie-jar cookie.txt -v -O -J
I've played around with various things I found online like
-o %(file %H:%s)
-O -J/%date%
-o $(%H) bla#1.pdf
but it always just prints out the file named like "%(file.pdf" or some other shitty names. I guess this points to escaping and quoting issues, but I can't find it right now.
No scripting solutions please, I need this command in a single line for Selenium automation.
Preferred output:
originalfilename_date_time_source.pdf
Let me know if you get a solution for this.
This is all a little over my head, so please be specific in your responses.
I have successfully performed an HTTPS FORM POST using cURL. Here is the code, simplified:
curl.exe -E cert.pem -k -F file=@"C:\DIR\test.txt" "https://www.example.com/ul_file_curl.ashx"
Here's the problem: I need to make this code upload two files each day, and the names will change every day based on several variables, like date and time of creation.
What I want to do is just replace test.txt with *.txt, but cURL doesn't seem to support wildcards, so how can I accomplish this? Thanks very much in advance.
Edit: This is all done in a Windows environment.
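Since the upload is one curl call per file, one way to handle the wildcard on Windows is to wrap that call in a cmd for loop; a minimal sketch (the directory, certificate and URL are taken from the question, the loop variable is illustrative):

rem Hypothetical sketch: upload every .txt file in C:\DIR, one curl call each.
rem At the interactive prompt the loop variable is %f; inside a .bat file it must be %%f.
for %f in (C:\DIR\*.txt) do curl.exe -E cert.pem -k -F file=@"%f" "https://www.example.com/ul_file_curl.ashx"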
I know a network-dependent Makefile is poor form, so please don't lecture me.
I have a Makefile where I want to grab the latest copy of some tweets, for example; doing it as network-efficiently as possible is a plus.
webconverger.txt:
	wget http://greptweet.com/u/webconverger/webconverger.txt -O webconverger.txt
However, make obviously thinks the file is up to date once it has been run. Is there a hack I can put in the dependency section to do a wget -q -N to check whether webconverger.txt really is up to date?
refresh:
	wget -q -N http://greptweet.com/u/webconverger/webconverger.txt -O webconverger.txt
all: refresh webconverger.txt
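One possible shape for that hack, as a sketch (assumes GNU Make; note that wget documents -N as doing nothing when combined with -O, so the sketch drops -O and relies on the default output filename):

# Sketch: 'refresh' is phony, so it runs every time; wget -N then decides
# whether webconverger.txt actually needs to be re-downloaded.
.PHONY: all refresh

all: refresh

refresh:
	wget -q -N http://greptweet.com/u/webconverger/webconverger.txt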
Possible Duplicate:
How do I save a file using the response header filename with cURL
I need to download many thousands of images in the format
http://oregondigital.org/cgi-bin/showfile.exe?CISOROOT=/baseball&CISOPTR=0
If you paste that link in a browser, it tries to download a file named 1.jp2
I want to use curl to do the same. However, when I run
curl -I 'http://oregondigital.org/cgi-bin/showfile.exe?CISOROOT=/baseball&CISOPTR=0'
the filename is reported as 404.txt, which you can download and see is actually the file I want. I can't use the -O option because the name assigned to the file is no good, and I have technical reasons for needing the actual name used on the system.
How do I get curl to download the same file I have no trouble retrieving in my browser? Thanks.
The solution is to use -O -J
-O, --remote-name Write output to a file named as the remote file
-J, --remote-header-name Use the header-provided filename
So...
curl -O -J 'http://oregondigital.org/cgi-bin/showfile.exe?CISOROOT=/baseball&CISOPTR=0'
I had to upgrade my cURL. I had v7.19, which doesn't support -J, but 7.22 (the latest at the time) does.
You can use the -o option, can't you? e.g.
curl 'http://oregondigital.org/cgi-bin/showfile.exe?CISOROOT=/baseball&CISOPTR=[0-9]' -o "#1.jpg"
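For the "many thousands of images" case, the same globbing could be widened, assuming the CISOPTR values are sequential (the upper bound here is purely illustrative):

curl 'http://oregondigital.org/cgi-bin/showfile.exe?CISOROOT=/baseball&CISOPTR=[0-9999]' -o "#1.jp2"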
Is it possible to grab text from an online text file via grep/cat/awk or something else? (in bash)
The way I currently do this is to download the text file to the drive and grep/cat the file for its text.
curl -o "$TMPDIR"/"text.txt" http://www.example.com/text.txt
cat/grep "$TMPDIR"/text.txt
rm -rf "$TMPDIR"/"text.txt"
Is one of the text grabbers (or another one) capable enough to grab something from a text file on the internet?
This would get rid of the whole download-file / read-file / delete-file process and replace it with a single command, speeding things up considerably if you have a lot of those strings.
I couldn't find anything via the man pages or googling around; maybe you guys know something.
Use curl -o - http://www.example.com/text.txt | grep "something".
-o - tells curl to "download to stdout"; other utils such as wget, lynx and links have corresponding functionality.
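For example, the wget equivalent would be:

wget -qO- http://www.example.com/text.txt | grep "something"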
You might try netcat - this is exactly what it was made for.
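For a plain-HTTP URL like the one above, a raw netcat request might look like this (just a sketch - it prints the response headers along with the body and handles neither redirects nor HTTPS):

printf 'GET /text.txt HTTP/1.0\r\nHost: www.example.com\r\n\r\n' | nc www.example.com 80 | grep "something"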
You could at least pipe your commands to avoid manually creating a temporary file:
curl … | cat/grep …