I would like to download a local copy of a web page and get all of the css, images, javascript, etc.
In previous discussions (e.g. here and here, both of which are more than two years old), two suggestions are generally put forward: wget -p and httrack. However, these suggestions both fail. I would very much appreciate help with using either of these tools to accomplish the task; alternatives are also lovely.
Option 1: wget -p
wget -p successfully downloads all of the web page's prerequisites (css, images, js). However, when I load the local copy in a web browser, the page is unable to load the prerequisites because the paths to those prerequisites haven't been modified from the version on the web.
For example:
In the page's HTML, <link rel="stylesheet" href="/stylesheets/foo.css" /> will need to be corrected to point to the new relative path of foo.css.
In the css file, background-image: url(/images/bar.png) will similarly need to be adjusted.
Is there a way to modify wget -p so that the paths are correct?
Option 2: httrack
httrack seems like a great tool for mirroring entire websites, but it's unclear to me how to use it to create a local copy of a single page. There is a great deal of discussion in the httrack forums about this topic (e.g. here) but no one seems to have a bullet-proof solution.
Option 3: another tool?
Some people have suggested paid tools, but I just can't believe there isn't a free solution out there.
wget is capable of doing what you are asking. Just try the following:
wget -p -k http://www.example.com/
The -p will get you all the required elements to view the site correctly (css, images, etc).
The -k will change all links (to include those for CSS & images) to allow you to view the page offline as it appeared online.
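If the page also pulls assets from other hosts (a CDN, for example), the defaults won't fetch them. A few additional wget flags cover that case; these are standard wget options, not part of the answer above, so test them against your particular page:

```shell
# -p: download page requisites (css, images, js)
# -k: convert links for local viewing
# -E: save HTML files with an .html extension
# -H: also fetch requisites hosted on other domains
# -K: keep pristine .orig copies of files whose links were converted
wget -E -H -k -K -p http://www.example.com/
```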
From the Wget docs:
‘-k’
‘--convert-links’
After the download is complete, convert the links in the document to make them
suitable for local viewing. This affects not only the visible hyperlinks, but
any part of the document that links to external content, such as embedded images,
links to style sheets, hyperlinks to non-html content, etc.
Each link will be changed in one of the two ways:
The links to files that have been downloaded by Wget will be changed to refer
to the file they point to as a relative link.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also
downloaded, then the link in doc.html will be modified to point to
‘../bar/img.gif’. This kind of transformation works reliably for arbitrary
combinations of directories.
The links to files that have not been downloaded by Wget will be changed to
include host name and absolute path of the location they point to.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to
../bar/img.gif), then the link in doc.html will be modified to point to
http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was downloaded,
the link will refer to its local name; if it was not downloaded, the link will
refer to its full Internet address rather than presenting a broken link. The fact
that the former links are converted to relative links ensures that you can move
the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links have been
downloaded. Because of that, the work done by ‘-k’ will be performed at the end
of all the downloads.
I am building an internal project wiki for a group software development project. The project wiki is currently powered by VimWiki, and I send the HTML files to the project supervisor and each member of the development team on a weekly basis. This keeps our intellectual property secure and internal, but also organized and up to date. I would like to put diagram images into the wiki itself so that all diagrams and documentation can be accessed together with ease. However, I am having trouble making the images transferable between systems. Does VimWiki provide a way to embed image files so that they can be transferred between systems? Ideally, the solution would make it possible to transfer the output directory of the VimWiki as a single entity containing the HTML files and the image files.
I have tried reading the documentation on images in the vimwiki reference document. I have not had luck using local: or file: variants. The wiki reference states that local should convert the image links to a localized location based on the output directory of the HTML files, but it breaks my image when I use it.
I have currently in my file
{{file:/images/picture.png}}
I expect the system to be able to transfer the file between computers, but the link resolves to an absolute path, and the image directory is also not included in the output directory of the :VimwikiAll2HTML command.
I know this is an old question, but try to use {{local:/images/picture.png}} instead. If you open :help vimwiki in Vim, you can find a part that says:
In Vim, "file:" and "local:" behave the same, i.e. you can use them with both
relative and absolute links. When converted to HTML, however, "file:" links
will become absolute links, while "local:" links become relative to the HTML
output directory. The latter can be useful if you copy your HTML files to
another computer.
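As a sketch (the path is just an illustration), a wiki entry like:

```
{{local:/images/picture.png}}
```

should come out in the generated HTML as a link relative to the HTML output directory, along the lines of <img src="images/picture.png">, so copying the output directory together with its images subfolder should keep the link working on another machine.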
I am trying to download many files (~30,000) using wget, all files are in the following webpage:
http://galex.stsci.edu/gr6/?page=tilelist&survey=ais&showall=Y
However, the real data is under a sublink that appears after I click Fits, and the files are listed under that sublink. For example, the sublink of the first file is the following:
http://galex.stsci.edu/gr6/?page=downloadlist&tilenum=50270&type=coaddI&subvis=28&img=1
I only want to download one file in this sublink: Intensity Map of band NUV. In this above case, it is the second file that I want to download.
All files have the same structure. How could I use wget to download all the files under each sublink?
The Intensity Map of band NUV files have a common ending, which should allow you to download only the files you want using wget -r -A "*nd-int.fits.gz" on the target site. This employs wget's recursive function, -r, and the Accept List function, -A. The Accept List function, outlined here, will only download the files you want according to extension, name, or naming convention. Whether the wget recursive function can successfully crawl the entirety of your target site is something you'll have to test.
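A concrete invocation might look like the following; the recursion depth and the --no-parent flag are my additions rather than part of the suggestion above, so adjust them once you see what the crawler actually reaches:

```shell
# -r: recurse from the tile list into each sublink
# -l 2: limit recursion depth so the crawl stays near the tile list
# -np: don't ascend to parent directories
# -A: accept only the NUV intensity maps, matched by their common ending
wget -r -l 2 -np -A "*nd-int.fits.gz" "http://galex.stsci.edu/gr6/?page=tilelist&survey=ais&showall=Y"
```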
If the above doesn't work, the website seems to have handy tools for filtering available files, such as a catalog search.
I'm working on a website with GitHub Pages. My code has been reviewed and is said to be correct, but my images and video do not display when I look at the site in my browser. I've read a bunch of similar questions on this site, but the answer was always that the file path was case-sensitive. My casing matches, and it still doesn't work.
I've tried using the "raw" URL from GitHub. I tried with a ./ and with just a /, with no luck. My email is verified.
Any other suggestions? Thanks!!
https://jeanninejacobs.github.io/excursion/
The top level of your site is at https://jeanninejacobs.github.io/excursion/.
That means that an image located at resources/images/camp.jpg has a full path of https://jeanninejacobs.github.io/excursion/resources/images/camp.jpg.
However, you're using /resources/images/camp.jpg which is relative to the top of the domain, in other words relative to https://jeanninejacobs.github.io. That means /resources/images/camp.jpg references https://jeanninejacobs.github.io/resources/images/camp.jpg, which is wrong because it's missing the excursion subdirectory.
The easiest way to fix your problem is to just use /excursion/resources/images/camp.jpg instead of /resources/images/camp.jpg.
Or you could use a custom domain and get rid of the excursion subdirectory.
There are fancier options if you need something more specific, but hopefully this gets you going in the right direction.
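In other words, using camp.jpg from your site as the example:

```html
<!-- resolves to https://jeanninejacobs.github.io/resources/images/camp.jpg (missing the subdirectory, so it 404s) -->
<img src="/resources/images/camp.jpg" alt="camp">

<!-- resolves to https://jeanninejacobs.github.io/excursion/resources/images/camp.jpg -->
<img src="/excursion/resources/images/camp.jpg" alt="camp">
```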
I have also found that file extensions are case-sensitive. For example, if the files are named *.JPG, the file path must also use .JPG in all caps.
I followed the instructions provided in this previous post. I am able to download a working local copy of the webpage (e.g. wget -p -k https://shapeshed.com/unix-wget/) but I would like to integrate all the files (js, css and images e.g. using base64 encoding) into a single html file (or another convenient format). Would this be possible?
Try using HTTrack.
It is a very efficient and easy-to-use website copier. All you have to do is paste the link of the website you want to make a local copy of.
Follow these steps, since you want everything in a single page:
Minify all the stylesheets and put them in a <style> tag in your main HTML page (use a CSS minifier).
Minify all the scripts and put them inside a <script> tag in the same file (use a JavaScript minifier).
To deal with images, use sprites.
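The result of those steps is a single self-contained file, roughly shaped like this (all names and the truncated data URI are placeholders):

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Page</title>
  <style>/* minified contents of every linked stylesheet */</style>
</head>
<body>
  <!-- small images can be inlined as data URIs instead of sprites -->
  <img src="data:image/png;base64,..." alt="">
  <script>/* minified contents of every linked script */</script>
</body>
</html>
```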
It certainly can be done. But you'll have to do a couple of simple things manually, since there are no tools available to automate some of the steps.
Download the web page using Wget with all dependencies.
Copy the contents of linked stylesheets and scripts to main HTML file.
Convert the images referenced in the HTML and CSS to Base64 data URIs, then inline them in the main HTML file.
Minify the edited HTML file.
Convert HTML file to Base64 data URI.
Here is an example of a single-page application encoded to Base64 data URI created to demonstrate the concept (copy and paste below code to web browser address bar):
data:text/html;charset=utf-8;base64,PCFkb2N0eXBlIGh0bWw+DQo8aHRtbCBsYW5nPSJlbiI+DQoJPG1ldGEgY2hhcnNldD0idXRmLTgiPg0KCTx0aXRsZT5TaW5nbGUtcGFnZSBBcHBsaWNhdGlvbiBFeGFtcGxlPC90aXRsZT4NCgk8c3R5bGU+DQoJCS8qIENvZGUgZnJvbSBDU1MgZmlsZXMgZ29lcyBoZXJlLiAqLw0KCQlib2R5IHsNCgkJCWZvbnQtZmFtaWx5OiBzYW5zLXNlcmlmOw0KCQl9DQoJCWJ1dHRvbiB7DQoJCQlkaXNwbGF5OiBibG9jaw0KCQl9DQoJPC9zdHlsZT4NCgk8c2NyaXB0Pg0KCQkvLyBDb2RlIGZyb20gLmpzIGZpbGVzIGdvZXMgaGVyZS4gDQoJCWZ1bmN0aW9uIGNoYW5nZVBhcmFncmFwaCgpIHsNCgkJICAgIGRvY3VtZW50LmdldEVsZW1lbnRzQnlUYWdOYW1lKCJwIilbMF0uaW5uZXJIVE1MID0gIkNvbnRlbnQgb2YgcGFyYWdyYXBoIGNoYW5nZWQuIjsNCgkJfQ0KCTwvc2NyaXB0Pg0KCTxib2R5Pg0KCQk8aW1nIHNyYz0iZGF0YTppbWFnZS9wbmc7YmFzZTY0LGlWQk9SdzBLR2dvQUFBQU5TVWhFVWdBQUFVQUFBQUR3QkFNQUFBQ0RBNkJZQUFBQU1GQk1WRVZVVmx1T2o1TC8vLzlrWm1xbXA2bUJnb1dhbTUyeHNyTnpkSGk4dkw3dDdlNzI5dmJHeDhqazVPVFEwZExhMnR2SHNtSDhBQUFDSjBsRVFWUjRBZXpCZ1FBQUFBQ0FvUDJwRjZrQ0FBQUFBQUFBQUFBQUFBQUFBQUFBWUExdElLU2twRERxUUdMQXFBTkhIY2dzSWd3a3d4SUJ6SllCaEJSaEdJYmZiWGZiMWUzcU5vRUU5NVN1bTJuM1Z1SndNSHNRa0FGUVpBVUF4bDA2UU9zRXVNaENDTWNRQVRFWEJhaURBOGdFSUpJQXNKYUFNdmsrVGdrQTVuL2cvN3p2NE9HYitZMmN4djdqVkVaMzRLZG5kNStrTlFudXd1b2NNbDJCOTVZZUZoRHZTVHFmRTAwdldhV3RBcUtrTnNHcndFWUw0S1BrSjNFcW5WanNndTBTWURTdVM5Qk1lQUN3WnFGenJBN0dyZ2x1NHl6cUVuUnlnSkdVdzlzU050ekt5YlNFelNXczF5VzR1WjhEcDY4QXRlR1dXaEJaTVp6TWdhd0J3M0d6SkI3WEpQaFoyN0N1aGd0VzFVSXFRVXY0WXFwa1BiZ21IVUJTazJDaUh0ejA3Y294T1JVdzlTbTdBQXVwRHkvcXVtYlVzY20xcEdkSHZ3RUVTRlpuNTNCZ0VZTGdJUTVOd0o4aHV4MlNZTHZBUVlFS1hvVG81YVQ4ZjhXZkJrWWFnT0FCTEh4U0RvbFVRcllDMytUVUwrZ3JWYk1BZlljM1Z2ZzFjeXoxcWlvTFEvQ0RuZ042QlBGcGVYWlJ6NXB6U0FJUVhBRytBcWlQVVVCbXhYQUprUUlRN0dEa1o5OXp2UFBQejhKYUNJSTZBYTc3ZEI5NDdlOWt0d1NJVjRNUWJPV01VcDkwci9veGRrRjFjb2oyRkFiZHdWaC9zUlZiZUhreVUyQThyYXBVV3NKVVliSUQ3MllQSVZhZzlNRzVvVUJwbGppSlFtVUw0NmZDNWM1UjlldFBlM0FnQUFBQWdBQm83UEZYR0tCcUFBQUFBQUFBQUFBQUFBQUFBQUFBQUxnTmtYVy9TUloxSldBQUFBQUFTVVZPUks1Q1lJST0iIGFsdD0iIj4NCgkJPGgxPlNpbmdsZS1wYWdlIEFwcGxpY2F0aW9uIEV4YW1wbGU8L2gxPg0KCQk8cD5UaGlzIGlzIGFuIGV4YW1wbGUgb2YgYSB3ZWIgYXBwIHRoYXQgaW50
ZWdyYXRlcyBIVE1MLCBDU1MsIEphdmFTY3JpcHQsIGFuZCBhbiBpbWFnZSBpbnRvIG9uZSAuaHRtbCBmaWxlIHRoYXQgaXMgZW5jb2RlZCB0byBCYXNlNjQuPC9wPg0KCQk8YnV0dG9uIHR5cGU9ImJ1dHRvbiIgb25jbGljaz0iY2hhbmdlUGFyYWdyYXBoKCkiPkNoYW5nZSBQYXJhZ3JhcGg8L2J1dHRvbj4NCgk8L2JvZHk+DQo8L2h0bWw+
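The final encoding step can be scripted. This is a minimal sketch assuming GNU coreutils' base64 (on macOS, base64 has no -w0 flag, so strip the line breaks instead); the page content here is just a stand-in for your merged single-file HTML:

```shell
# Build a tiny single-file page as a stand-in for the merged HTML.
printf '<!doctype html><title>demo</title><p>hello</p>' > page.html

# Wrap its Base64 encoding in a data URI (-w0 disables line wrapping).
printf 'data:text/html;charset=utf-8;base64,%s\n' "$(base64 -w0 page.html)"
```

Pasting the printed URI into a browser address bar should render the page.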
Another solution would be to use a web proxy with a custom extension in order to store the sources, cf. https://github.com/SommerEngineering/WebProxy
This GitHub project is a simple web proxy of mine, written in Go. Inside Main.go, line 71 and beyond copies any data from the original site to your browser.
In your case, you would add a check for whether the data is already stored. If so, load it from disk and send it to your browser; if not, load it from the source and store it to disk.
Your requirement of single-file storage would not be an issue: Go can read and write e.g. ZIP files, cf. https://golang.org/pkg/archive/zip/. If you need these website dumps immediately, a bit of code is needed to follow all links so that everything is stored right away.
Therefore, this answer is not a ready-to-go solution to your question; rather, it would require a little coding. Go code can be compiled for any platform (x86, ARM, PPC) and operating system (Linux, macOS, Windows).
I hope this answer gives you an option.
There is a Chrome extension, SingleFile, that does exactly this.
Like a lot of you guys out there, I'm pretty pumped for Steam OS. I have a link to the source code, which I want to download:
http://repo.steampowered.com/steamos/
Is there an easy way for me to download all of these files?
There's no download button, and right clicking doesn't give me anything useful.
You can use wget to recursively download the directories you want.
wget -r --include-directories=steamos/ --directory-prefix=steamos/ --wait=15 --reject=index.htm* "http://repo.steampowered.com/steamos/"
-r tells wget that we want to recursively download the given site.
--include-directories=steamos/ limits our download to just the steamos folder, from the root of the site. Otherwise it would try to download absolutely everything from http://repo.steampowered.com/
--directory-prefix=steamos/ specifies the folder the download will be placed in once it's finished. By default, it would be placed in 'repo.steampowered.com/steamos/'.
--reject=index.htm* junks the three index pages that would otherwise be saved to each sub-directory.
--wait=15 places a delay of 15 seconds between your downloads, for the sake of being kind to the servers.
My main reference for this was http://learningbitsandbytes.blogspot.ca/2013/07/downloading-source-code-from-svngit.html