How can I use AppleScript to download a file from a specified link and then open the downloaded file? Is there a way to do this without having to specify the exact path to the downloaded file?
You can download a file to the current directory with curl -O and open the newest file with open "$(ls -t|head -n1)":
do shell script "cd ~/Desktop
curl -LO http://stackoverflow.com/favicon.ico
open \"$(ls -t|head -n1)\""
curl -L follows location headers (redirections).
If curl saves the file without a filename extension, try wget --content-disposition instead.
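A more robust variant of the shell snippet above: downloading into a fresh temporary directory guarantees that the newest file there is the one just fetched, so there is no guessing on a cluttered Desktop. The URL is the example from the answer; `open` is macOS-specific.

```shell
#!/bin/sh
# Download into an empty temporary directory so that "newest file" is
# unambiguously the file we just fetched.
dir=$(mktemp -d)
cd "$dir" || exit 1
curl -LO http://stackoverflow.com/favicon.ico
# The directory contains exactly one file: the download.
open "$(ls -t | head -n1)"
```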
wget -E -H -k -K -p -e robots=off -P ./images/ -i./list.txt
./list.txt: No such file or directory
No URLs found in ./list.txt.
Converted links in 0 files in 0 seconds.
I downloaded and installed brew. Further, I installed wget, and it lets me download images one at a time. However, when I try the aforementioned command to download images from multiple URLs, it doesn't work. Can someone tell me what I could be doing wrong here?
wget is pretty clear in its description of the issue:
./list.txt: No such file or directory
Apparently there is no file named list.txt in the current directory. Try giving the full path to list.txt.
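To make the path issue concrete, here is a minimal sketch that creates list.txt first and then passes it to wget by absolute path (the example.com URLs are placeholders for real image URLs):

```shell
#!/bin/sh
# Write the URL list to a known location, then point wget at it with an
# absolute path so the current working directory no longer matters.
list="$HOME/list.txt"
cat > "$list" <<'EOF'
https://example.com/one.jpg
https://example.com/two.jpg
EOF

wget -E -H -k -K -p -e robots=off -P ./images/ -i "$list"
```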
This is a link to download an EPUB file: https://1lib.in/dl/4993988/f3ffaf. The link gets redirected to https://p300.zlibcdn.com/dtoken/{some-value}.
It downloads on browser.
But I cannot download it using curl, even with the -L flag:
curl "https://1lib.in/dl/4993988/f3ffaf" -L -o download.epub
It just downloads an HTML file.
Any idea how to download this file from the CLI?
Thanks.
I download a .tar.gz file with wget, using this command:
wget hello.tar.gz
This is part of a long script. Sometimes an error occurs during the download, and when the file is downloaded a second time, the name of the downloaded file changes to something like this:
hello.tar.gz.2
The third time:
hello.tar.gz.3
How can I make sure that, whatever wget names the downloaded file, it ends up as hello.tar.gz?
In other words, I don't want the downloaded file to be named anything other than hello.tar.gz.
wget hello.tar.gz -O <fileName>
wget also has options such as -nc and -N that change this default numbering behavior: -nc skips the download when the file already exists, and -N (timestamping) overwrites the existing file instead of numbering it.
So just try the following:
wget -nc <url>
wget -N <url>
Since you have already noticed the incremental renaming, discard any repeated files and rely on the following as the initial condition:
wget hello.tar.gz
mv hello.tar.gz.2 hello.tar.gz
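In a long script, the -O fix can be combined with a simple retry loop; since -O always writes to the same name, a failed-then-retried download overwrites the previous attempt instead of accumulating hello.tar.gz.1, .2, and so on. The URL below is a placeholder:

```shell
#!/bin/sh
# Hypothetical URL; substitute the real location of hello.tar.gz.
url='https://example.com/hello.tar.gz'

# -O fixes the output name, so every retry overwrites the same file
# rather than creating numbered duplicates.
for attempt in 1 2 3; do
    wget -q "$url" -O hello.tar.gz && break
    echo "attempt $attempt failed, retrying" >&2
done
```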
I am trying to automate a data downloading process. For this purpose, my goal is to extract (using bash commands) the .zip from a redirection link that could be seen on display here: https://journals.sagepub.com/doi/suppl/10.1177/0022002706289303
I have seen that people suggest the -L tag with curl for redirections, but it doesn't seem to work for my case. The specific command I have tried is:
curl -L -o output.zip https://journals.sagepub.com/doi/suppl/10.1177/0022002706289303/suppl_file/Sambanis_Aug_06.zip
Running file output.zip shows that the downloaded .zip file is actually an HTML document. On the other hand, clicking the same link (the one used inside the curl command) downloads the archive automatically in a browser.
Any ideas, tips, or suggestions on what I should try (or whether this is possible or not) will be highly appreciated!
If you execute curl with the --verbose option, you can see that it is a cookie-related problem. The cookie engine needs to be enabled. You can download the desired file as follows:
curl -b cookies.txt -L https://journals.sagepub.com/doi/suppl/10.1177/0022002706289303/suppl_file/Sambanis_Aug_06.zip -o test.zip
It doesn't matter if the file provided with the -b option doesn't exist. We just need to activate the cookie engine.
Refer to Send cookies with curl and Save cookies between two curl requests for further information.
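An equivalent two-step sketch of the same idea: the first request stores whatever cookies the server sets (-c), and the second replays them (-b) while following redirects:

```shell
#!/bin/sh
# URL taken from the question above.
url='https://journals.sagepub.com/doi/suppl/10.1177/0022002706289303/suppl_file/Sambanis_Aug_06.zip'

curl -s -c cookies.txt -o /dev/null "$url"    # first pass: collect cookies
curl -s -b cookies.txt -L -o test.zip "$url"  # second pass: redirects + cookies
```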
You can download that file with wget on Linux:
$ wget https://journals.sagepub.com/doi/suppl/10.1177/0022002706289303/suppl_file/Sambanis_Aug_06.zip
$ unzip Sambanis_Aug_06.zip
Archive: Sambanis_Aug_06.zip
inflating: Sambanis (Aug 06).dta
inflating: Sambanis Appendix (Aug 06).pdf
I'm trying to write a bash script, and I need to download certain files with wget, like libfat-nds-1.0.11.tar.bz2. But after some time the version of this file may change, so I would like to download a file that starts with libfat-nds and ends in .tar.bz2. Is this possible with wget?
Using only wget, it can be achieved by specifying the filename with wildcards in the list of accepted patterns:
wget -r -np -nd --accept='libfat-nds-*.tar.bz2' <url>
The problem is that HTTP doesn't support wildcard downloads. But if directory listing is enabled on the server, or you have an index.html containing the available file names, you could download that, extract the file name you need, and then download the file with wget.
Something in this order:
Download the index with curl
Use grep and/or sed to extract the exact file name
Download the file with wget (or curl)
If you pipe the commands you can do it on one line.
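The three steps above can be sketched as follows, assuming the server exposes a directory listing at a hypothetical $base URL; the grep pattern pulls the first libfat-nds-*.tar.bz2 name out of the listing's HTML:

```shell
#!/bin/sh
# Hypothetical listing URL; replace with the real download directory.
base='https://example.com/downloads/'

# 1. download the index, 2. extract the matching file name,
# 3. download that file.
file=$(curl -s "$base" \
       | grep -o 'libfat-nds-[0-9.]*\.tar\.bz2' \
       | head -n1)
wget "$base$file"
```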