Bash script to wget url starting with a specific character - bash

I have a URL http://example.com/dir that has many subdirectories containing files I want to save. Because its total size is very large, I want to break this operation into parts,
e.g. download everything from subdirectories starting with A, like
http://example.com/A
http://example.com/Aa
http://example.com/Ab
etc
I have created the following script
#!/bin/bash
for g in A B C
do wget -e robots=off -r -nc -np -R "index.html*" http://example.com/$g
done
but it tries to download only http://example.com/A and not http://example.com/A*

Look at this page; it has all you need to know:
https://www.gnu.org/software/wget/manual/wget.html
1) You could use:
--spider -nd -r -o outputfile <domain>
to get a list of URLs to download without actually downloading anything; it just checks that the files are there.
-nd prevents wget from creating directories locally
-r recurses through the entire site
-o outputfile sends the log output to a file
2) then parse the outputfile to extract the files, and create smaller lists of links you want to download.
3) then use -i file (== --input-file=file) to download each list, thus limiting how many you download in one execution of wget.
Notes:
- --limit-rate=amount can be used to slow down downloads, to spare your Internet link!
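Putting those three steps together, a minimal sketch (the example.com URL is the placeholder from the question; the log and list file names and the grep pattern are illustrative, since the exact log format depends on your wget version):
#!/bin/bash
# 1) Spider the site: nothing is downloaded, every URL wget finds ends up in the log
wget --spider -nd -r -e robots=off -o spider.log http://example.com/dir
# 2) Keep only the URLs whose subdirectory starts with "A"
grep -o 'http://example\.com/A[^ ]*' spider.log | sort -u > list-A.txt
# 3) Download just that smaller list
wget -nc -e robots=off -i list-A.txt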

Related

Download specific directories data with wget

I am downloading data from an 'ftp' server using 'wget' from the Ubuntu system command line. I know how to download all the directories from a specific URL. Is there any command to select and download only specific directories?
I am using "wget -c -r --no-parent 'URL' --user=xx --password=xx"
Now if I give a URL xx followed by the year, say 2009, then the URL will be xx/2009. I want to download only folders 100 to 365 within 2009, not all of them (001 to 365). How can I do that?
You could specify the directories you want using a shell brace expansion:
wget -c -r --no-parent URL/2009/{100..365} --user=xx --password=xx
If someone is searching for this type of problem, I now have another solution that is better than
wget -c -r --no-parent URL/2009/{100..365} --user=xx --password=xx
That command works fine, but if the server is set up so that no one is allowed to download a large amount of data at a time, then the next one will do the trick:
for i in $(seq -w 144 365);do wget -r --no-parent URL/year/$i --user=XX --password=xx ; sleep 30; done

How do I use wget to download images from a list without creating duplicates?

I have a list that has urls such as:
http://imgur.com/img1.jpg
http://imgur.com/img2.jpg
(It updates every now and then.)
I currently use wget:
wget -i imglist -P c:/filename/
But if I run it once again, it duplicates the images: if there's already image1.jpg, it names it image1.jpg.1. How can I avoid that?
You could compare timestamps and keep only the newest version of each file by using the -N flag:
wget -N -i imglist -P c:/filename/
Here are some use cases for wget:
http://www.editcorp.com/personal/lars_appel/wget/v1/wget_7.html

Download data from FTP Website

There is something I am missing, or maybe I have the whole approach wrong. I am trying to download NCDC data from the NCDC Datasets and am unable to do it from the Unix box.
The command I have used so far is
wget ftp://ftp.ncdc.noaa.gov:21/pub/data/noaa/1901/029070-99999-1901.gz">029070-99999-1901.gz
This is for one file, but I will be very happy if I can download the entire parent directory.
You seem to have a lonely " just before the >
To download everything, you can try this command to get the whole directory content:
wget -r ftp://ftp.ncdc.noaa.gov:21/pub/data/noaa/1901/*
for i in {1990..1993}
do
echo "$i"
cd /home/chile/data
# -nH disable generation of host-prefixed directories
# -nd save all files to the current directory
# -np never ascend to the parent directory when retrieving recursively
# -R "*.html,999999-99999-$i.gz*" do not download files matching these patterns
wget -r -nH -nd -np -R "*.html,999999-99999-$i.gz*" http://www1.ncdc.noaa.gov/pub/data/noaa/$i/
done

wget to download images from retrieved file

I've got a file I need to retrieve, and then I need to go through that file and download all the images listed in it. The format is XML, but I don't want to use an XML parser.
When I use
sudo wget --restrict-file-names=windows -nH -nd -r -i -P images \ -A jpeg,jpg,gif,png https://url.com/api/ojgnvhy75hGvcf36dnJO0947bsh62gbs?_=1361842359357
I get the xml file downloaded, but I need the images which are referenced in that file.
What am I doing wrong here?
I ended up with the following code: get the XML file and save it as text, then extract the links from the text file using sed and write them into another file, then use wget on that file to download the images.
#!/bin/dash
# Fetch the XML and save it locally
wget -O xml.txt 'https://url_to_download_from'
# Extract the image URLs from the <image> elements and write them to links.txt
sed -n "/image>/s/^ .\([^>]*\)<\/image>.*/\1/gpw links.txt" xml.txt
# Download every URL listed in links.txt
wget -N -P images -A png -i links.txt
Sadly, this results in a bunch of files which are not images, even though I'm requesting only images (-A filtering is applied during recursive retrieval, so it does not restrict what a plain -i list downloads).
After this script has completed, I run the following commands to clean up the folder.
cd images
shopt -s extglob nocaseglob
rm !(*.png)

Using wget to recursively fetch a directory with arbitrary files in it

I have a web directory where I store some config files. I'd like to use wget to pull those files down and maintain their current structure. For instance, the remote directory looks like:
http://mysite.com/configs/.vim/
.vim holds multiple files and directories. I want to replicate that on the client using wget. Can't seem to find the right combo of wget flags to get this done. Any ideas?
You have to pass the -np/--no-parent option to wget (in addition to -r/--recursive, of course), otherwise it will follow the link in the directory index on my site to the parent directory. So the command would look like this:
wget --recursive --no-parent http://example.com/configs/.vim/
To avoid downloading the auto-generated index.html files, use the -R/--reject option:
wget -r -np -R "index.html*" http://example.com/configs/.vim/
To download a directory recursively, rejecting index.html* files and downloading without the hostname, the parent directory, and the whole directory structure:
wget -r -nH --cut-dirs=2 --no-parent --reject="index.html*" http://mysite.com/dir1/dir2/data
For anyone else having similar issues: wget follows robots.txt, which might not allow you to grab the site. No worries, you can turn it off:
wget -e robots=off http://www.example.com/
http://www.gnu.org/software/wget/manual/html_node/Robot-Exclusion.html
You should use the -m (mirror) flag, as that takes care to not mess with timestamps and to recurse indefinitely.
wget -m http://example.com/configs/.vim/
If you add the points mentioned by others in this thread, it would be:
wget -m -e robots=off --no-parent http://example.com/configs/.vim/
Here's the complete wget command that worked for me to download files from a server's directory (ignoring robots.txt):
wget -e robots=off --cut-dirs=3 --user-agent=Mozilla/5.0 --reject="index.html*" --no-parent --recursive --relative --level=1 --no-directories http://www.example.com/archive/example/5.3.0/
If --no-parent does not help, you might use the --include option.
Directory structure:
http://<host>/downloads/good
http://<host>/downloads/bad
And you want to download downloads/good but not downloads/bad directory:
wget --include downloads/good --mirror --execute robots=off --no-host-directories --cut-dirs=1 --reject="index.html*" --continue http://<host>/downloads/good
wget -r http://mysite.com/configs/.vim/
works for me.
Perhaps you have a .wgetrc which is interfering with it?
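If you suspect that, a quick way to rule it out (assuming your wget build is recent enough to support --no-config) is to compare the two runs:
cat ~/.wgetrc
wget --no-config -r http://mysite.com/configs/.vim/
If the --no-config run behaves differently, some setting in your .wgetrc is the culprit.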
First of all, thanks to everyone who posted their answers. Here is my "ultimate" wget script to download a website recursively:
wget --recursive ${comment# self-explanatory} \
--no-parent ${comment# will not crawl links in folders above the base of the URL} \
--convert-links ${comment# convert links with the domain name to relative and uncrawled to absolute} \
--random-wait --wait 3 --no-http-keep-alive ${comment# do not get banned} \
--no-host-directories ${comment# do not create folders with the domain name} \
--execute robots=off --user-agent=Mozilla/5.0 ${comment# I AM A HUMAN!!!} \
--level=inf --accept '*' ${comment# do not limit to 5 levels or common file formats} \
--reject="index.html*" ${comment# use this option if you need an exact mirror} \
--cut-dirs=0 ${comment# replace 0 with the number of folders in the path, 0 for the whole domain} \
$URL
Afterwards, stripping the query params from URLs like main.css?crc=12324567 and running a local server (e.g. via python3 -m http.server in the dir you just wget'ed) to run JS may be necessary. Please note that the --convert-links option kicks in only after the full crawl is completed.
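A minimal sketch of that query-param clean-up, assuming the query strings ended up in the saved filenames (the rename loop below is illustrative, not part of the original recipe):
# Rename files like main.css?crc=12324567 back to main.css
find . -type f -name '*\?*' | while IFS= read -r f; do
  mv "$f" "${f%%\?*}"
done
# Then serve the mirror locally to test it
python3 -m http.server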
Also, if you are trying to wget a website that may go down soon, you should get in touch with the ArchiveTeam and ask them to add your website to their ArchiveBot queue.
To fetch a directory recursively with username and password, use the following command:
wget -r --user=(put username here) --password='(put password here)' --no-parent http://example.com/
This version downloads recursively and doesn't create parent directories.
wgetod() {
# Count the slashes in the URL path to work out how many leading directories to cut
NSLASH="$(echo "$1" | perl -pe 's|.*://[^/]+(.*?)/?$|\1|' | grep -o / | wc -l)"
NCUT=$((NSLASH > 0 ? NSLASH-1 : 0))
wget -r -nH --user-agent=Mozilla/5.0 --cut-dirs=$NCUT --no-parent --reject="index.html*" "$1"
}
Usage (add it to ~/.bashrc or paste it into a terminal first):
wgetod "http://example.com/x/"
The following options seem to be the perfect combination when dealing with recursive download:
wget -nd -np -P /dest/dir --recursive http://url/dir1/dir2
Relevant snippets from man pages for convenience:
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the
filenames will get extensions .n).
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
All you need is two flags: "-r" for recursion and "--no-parent" (or -np) so that wget doesn't ascend into '.' and '..'. Like this:
wget -r --no-parent http://example.com/configs/.vim/
That's it. It will download into the following local tree: ./example.com/configs/.vim .
However if you do not want the first two directories, then use the additional flag --cut-dirs=2 as suggested in earlier replies:
wget -r --no-parent --cut-dirs=2 http://example.com/configs/.vim/
And it will download your file tree only into ./.vim/
In fact, I got the first command in this answer straight from the wget manual; they have a very clean example towards the end of section 4.3.
It sounds like you're trying to get a mirror of your file. While wget has some interesting FTP and SFTP uses, a simple mirror should work. Just a few considerations to make sure you're able to download the file properly.
Respect robots.txt
Ensure that, if you have a /robots.txt file in your public_html, www, or configs directory, it does not prevent crawling. If it does, you need to instruct wget to ignore it by adding the following option to your wget command:
wget -e robots=off 'http://your-site.com/configs/.vim/'
Convert remote links to local files.
Additionally, wget must be instructed to convert links so that they point to the downloaded files. If you've done everything above correctly, you should be fine here. The easiest way I've found to get all files, provided nothing is hidden behind a non-public directory, is to use the mirror command.
Try this:
wget -mpEk 'http://your-site.com/configs/.vim/'
# If robots.txt is present:
wget -mpEk -e robots=off 'http://your-site.com/configs/.vim/'
# Good practice: only deal with the highest-level directory you specify (instead of downloading all of `mysite.com`, you're just mirroring from `.vim`):
wget -mpEk -e robots=off --no-parent 'http://your-site.com/configs/.vim/'
Using -m instead of -r is preferred, as it doesn't have a maximum recursion depth and it downloads all assets. Mirror is pretty good at determining the full depth of a site; however, if you have many external links you could end up downloading more than just your site, which is why we use -p -E -k. All prerequisite files needed to render the page, in a preserved directory structure, should be the output. -k converts links to local files.
Since the links should now be set up, you should end up with your config folder containing the .vim directory.
Mirror mode also works with a directory structure that is served over ftp://.
General rule of thumb:
Depending on the size of the site you are mirroring, you're sending many calls to the server. In order to prevent yourself from being blacklisted or cut off, use the wait option to rate-limit your downloads.
wget -mpEk --no-parent -e robots=off --random-wait 'http://your-site.com/configs/.vim/'
But if you're simply downloading the ../config/.vim/ files, you shouldn't have to worry about it, as you're ignoring parent directories and downloading a single directory.
Wget 1.18 may work better, e.g., I got bitten by a version 1.12 bug where...
wget --recursive (...)
...only retrieves index.html instead of all files.
The workaround was to notice some 301 redirects and try the new location; given the new URL, wget got all the files in the directory.
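One way to spot such a redirect is to ask for just the headers without following it (the URL below is a placeholder; adjust it to your own directory):
wget --spider -S --max-redirect=0 http://example.com/configs/.vim/ 2>&1 | grep -i 'Location:'
The Location header gives the new URL to feed back into wget --recursive.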
Recursive wget ignoring robots (for websites)
wget -e robots=off -r -np --page-requisites --convert-links 'http://example.com/folder/'
-e robots=off causes it to ignore robots.txt for that domain
-r makes it recursive
-np = no parents, so it doesn't follow links up to the parent folder
You should be able to do it simply by adding -r:
wget -r http://stackoverflow.com/
