I want to save the whole content of this specific web page using lynx:
http://build.chromium.org/f/chromium/perf/dashboard/ui/changelog.html?url=%2Ftrunk%2Fsrc&range=41818%3A40345&mode=html
I used these commands:
webpage="http://build.chromium.org/f/chromium/perf/dashboard/ui/changelog.html?url=%2Ftrunk%2Fsrc&range=41818%3A40345&mode=html"
lynx -crawl -dump "$webpage" > output
My output was only this:
SVN path: ____________________ SVN revision range: ____________________
I expected it to contain all the information about bugs and comments.
The URL includes the values "/trunk/src" and "41818:40345", which should be filled into the SVN path and SVN revision range fields and then submitted to fetch the content, but that didn't happen.
Question: Is there a way to "tell" lynx to wait a bit while the website renders its content, until it is complete?
Thanks in advance.
The problem here is that the webpage is being built by a javascript function. Such pages can be tricky to download with tools like lynx (or curl, which IMHO is better at the basic download problem). In order to download the contents you see on that page, you'd need to first load the javascript files needed by the page, and then execute the javascript "as though you were a browser". That javascript will proceed to request some data, which turns out to be XML, and then builds HTML from that data.
Note that the "website" doesn't render its data. Your browser renders the data. Or, to be more accurate, your browser is expected to render it but lynx won't because it doesn't do javascript.
So you have a couple of options. You could try to find a scriptable javascript-aware browser (iirc links does javascript, but I don't know offhand how to script it to do what you want.)
Or you can cheat. By using Chrom{e,ium}'s "developer" tools, you can see what URL is being requested by the javascript. It turns out, in this case, to be
http://build.chromium.org/cgi-bin/svn-log?url=http://src.chromium.org/svn//trunk/src&range=41818:40345
so you could get it with curl as follows
curl -G \
-d url=http://src.chromium.org/svn//trunk/src \
-d range=41818:40345 \
http://build.chromium.org/cgi-bin/svn-log \
> 41818-40345.xml
That XML data is in a pretty straightforward (i.e. apparently easy to reverse-engineer) format. And then you could use a simple scriptable xml tool like xmlstarlet (or any XSLT tool) to take the xml apart and reformat as you wish. With luck, you might even find some documentation (or a DTD) somewhere for the xml.
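For example, assuming the feed follows Subversion's standard log format (a log element containing logentry elements with a revision attribute and a msg child), something like this would print one line per revision; the XPath expressions are guesses, so adjust them once you've looked at the actual file:
xmlstarlet sel -t -m '//logentry' \
    -v '@revision' -o ': ' -v 'msg' -n \
    41818-40345.xml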
At least, that's how I would proceed.
Related
I need to download a PDF from a website which does not provide a link ending in .pdf, using Ruby. Manually, when I click the link to download the PDF, it takes me to a new page, and the dialog box to save/open the file appears after some time.
Please help me download the file.
The link
You can do this:
require 'open-uri'

# open-uri extends Kernel#open to fetch HTTP URLs; write the response
# body in binary mode ("wb") so the PDF bytes aren't mangled
File.open('my_file_name.pdf', 'wb') do |file|
  file.write open('http://someurl.com/2013-1-2/somefile/download').read
end
I have been doing this for my projects and it works.
If you just need a simple Ruby script to do it, you could shell out to wget, like this: exec 'wget "http://path.to.the.file/and/some/params"' (note that exec replaces the current process; use system if your script needs to keep running afterwards).
At that point, though, you might as well run wget directly.
The other way is to just run a GET against the page that you know the pdf is at:
require 'net/http'
source = Net::HTTP.get('the.website.com', '/and/some/params')
(Note that Net::HTTP.get takes the host and the path as separate arguments; don't include the http:// scheme in the host.)
There are a number of other HTTP clients that you could use, but as long as you make a GET request to the endpoint the pdf is at, it should give you the raw data. Then you can just save it under the file name you want, and you'll have the pdf.
In your case, I ran the following commands to get the pdf
wget http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/func-download/129/chk,d8c4644b0f086a04d8d363cb86fb1647/no_html,1/
mv index.html thefile.pdf
Then open the pdf. Note that these are linux commands. If you want to get the file with a ruby script, you could use something like what I previously mentioned.
Update:
There is an added complication that was not initially stated, which is that the URL of the pdf changes every time the pdf is updated. To make this work, you probably want to do some web scraping; I suggest nokogiri. That way you can look at the page where the download link lives and then perform a GET request on the desired URL. Furthermore, the server that hosts the pdf is misconfigured and breaks Chrome within a few seconds of opening the page.
How I solved this: I went to the site and refreshed it, then broke the connection to the server (press the X where the refresh button would otherwise be). Then I right-clicked next to the download link, selected Inspect Element, and browsed the DOM for something definitively identifying (like an id). Thankfully, I found <strong id="telecharger"> Download</strong>. That means you can use something like page.css('strong#telecharger')[0].parent['href'], which should give you the URL. Then you can perform a GET request as described above. I don't have time to write the whole script for you (too much work to do), but this should be enough to solve the problem.
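As a rough, untested sketch of how those pieces could fit together (the page URL and output filename are placeholders, and the selector assumes the markup described above):
require 'open-uri'
require 'nokogiri'

# Parse the page that carries the download link
page = Nokogiri::HTML(open('http://www.lawcommission.gov.np/en/documents/prevailing-laws/constitution/'))

# <strong id="telecharger"> sits inside the download <a>,
# so its parent's href is the (ever-changing) pdf URL
pdf_url = page.css('strong#telecharger')[0].parent['href']

# Fetch the pdf and save it under a stable name
File.open('thefile.pdf', 'wb') { |f| f.write open(pdf_url).read }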
What command-line utility renders HTML as Firefox would, creating a static image, without actually running Firefox and xwd (or ScreenGrab, etc.)?
Since all of Firefox's rendering libraries are open source, I'm assuming someone's written something like this? It would be very useful.
I realize static images can't have Flash animation (animated GIF/PNG notwithstanding), JavaScript, etc., so I'm just looking for something that renders plain HTML.
html2ps is worth a try, although it does not seem to apply CSS stylesheets, which is a serious limitation.
On Debian/Ubuntu it is provided as a package, so the classic sudo apt-get install html2ps will do.
(I know this has been given in the comments, but for the future reader, I thought it might be easier to find as an answer.)
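A minimal invocation looks something like this (the input file and output names are placeholders; depending on your setup, html2ps can also fetch URLs directly):
html2ps -o page.ps page.html
ps2pdf page.ps page.pdf
The second step is optional; ps2pdf ships with Ghostscript, in case you'd rather end up with a PDF than PostScript.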
You could write a small script which simply runs firefox from the command line, takes a screenshot, then closes firefox; see the sketch below. It should only take a few lines of code to get started.
firefox -url http://mysite.com/homepage.php
https://developer.mozilla.org/en/Command_Line_Options
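A rough sketch of such a script (assumes a running X session and ImageMagick's import tool; the sleep is a crude stand-in for waiting until rendering finishes):
#!/bin/sh
firefox -url "$1" &           # open the page in a new firefox process
pid=$!
sleep 10                      # give the page time to render
import -window root shot.png  # grab the whole screen with ImageMagick
kill $pid                     # close firefox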
I would like to save a web page programmatically.
I don't mean merely save the HTML. I would also like to automatically store all associated files (images, CSS files, maybe embedded SWF, etc.), and hopefully rewrite the links for local browsing.
The intended usage is a personal bookmarks application, in which link content is cached in case the original copy is taken down.
Take a look at wget, specifically the -p flag:
-p, --page-requisites
    This option causes Wget to download all the files that are
    necessary to properly display a given HTML page. This includes
    such things as inlined images, sounds, and referenced stylesheets.
The following command:
wget -p http://<site>/1.html
will download 1.html and all the files it requires.
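Since you also want the links rewritten for local browsing, it's worth combining -p with wget's link-conversion flags (a sketch; check the man page for the exact behaviour of each option):
wget -p -k -K -E -H http://<site>/1.html
Here -k converts the links for local viewing, -K keeps the originals as backups, -E adds .html extensions where needed, and -H allows page requisites hosted on other domains to be fetched too.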
On Windows you can run IE as a COM object and pull everything out.
Another option is to take the source of Mozilla.
In Java, Lobo.
Or commons-httpclient, and write a lot of code.
You could try the MHTML format (which is what IE uses). http://en.wikipedia.org/wiki/MHTML
In other words, you'd download each object (image, CSS, etc.) to your computer and then "embed" it, via Base64, into a single file.
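Structurally, an MHTML file is just a MIME multipart/related message; a stripped-down skeleton looks roughly like this (the boundary string and URLs are illustrative):
Content-Type: multipart/related; boundary="----=_Boundary"

------=_Boundary
Content-Type: text/html
Content-Location: http://example.com/page.html

<html>... <img src="http://example.com/logo.png"> ...</html>

------=_Boundary
Content-Type: image/png
Content-Transfer-Encoding: base64
Content-Location: http://example.com/logo.png

iVBORw0KGgoAAAANSUhEUgAA...

------=_Boundary--
The Content-Location headers are what let the browser resolve the original URLs against the embedded copies.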
I'm curious about the web page I'm viewing.
I use the "view--page source" and get a window with the html.
I cut and paste this into notepad++.
I manually parse through adding whitespace to make it readable to me.
Is there a better way to do the last step? I'm hoping something has been written which automates this process, giving the user a readable version of the source file.
Thanks for any help.
-bill
Try HTML Tidy
Numerous editors have support for HTML Tidy (if you use an editor that knows about HTML, check the menus or documentation); alternatively, you can run HTML Tidy from the command line.
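For example (flag names are from Tidy's documentation; the output filename is arbitrary):
tidy -indent -quiet -output pretty.html page.html
This reads page.html, indents the markup, and writes the result to pretty.html; -quiet suppresses the non-essential chatter, while warnings about malformed HTML are still reported.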
There is HTML Tidy, which works in Notepad++:
http://tidy.sourceforge.net
I'm trying to load an image from the Firefox cache as the title suggests. I'm running Ubuntu, so the location of my cache is /home/me/.mozilla/firefox/xxxxxx.default/Cache
However, in the Cache (and this is on Mac, too) the filenames are just ridiculous combinations of letters and numbers. Is there a way to pinpoint a certain file?
You should take a look at the source code of the CacheViewer Add-on.
Download the file instead of installing it (right-click and save as) and then extract it (it's just a zip file, even though it has a .xpi extension), then extract the cacheviewer.jar file inside the resulting chrome folder. Finally, go into content and then cacheviewer to find the javascript and XUL files.
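On the command line, that amounts to something like this (the .xpi filename is whatever you saved it as; both .xpi and .jar files are plain zip archives):
unzip cacheviewer.xpi -d cacheviewer
unzip cacheviewer/chrome/cacheviewer.jar -d cacheviewer-jar
ls cacheviewer-jar/content/cacheviewer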
From my brief investigation, the useful routines are in the cacheviewer.js file, though if you were hoping there would be a simple javascript one liner for accessing cached items you're probably going to be disappointed. The XUL files (which are just XML) are helpful in working out which JS functions are called to perform particular tasks. I'm not too sure how all this maps into Greasemonkey, rather than the extension environment, but hopefully there's enough code to get you started.
Ummm, that really is an internal implementation detail. But I suggest looking at how about:cache?device=disk and about:cache-entry?client=HTTP&sb=1&key=https://stackoverflow.com/Content/img/wmd/blockquote.png are implemented.
Also, http://www.securityfocus.com/infocus/1832 gives details, too. Note that Firefox doesn't use a separate file for everything...
And of course, Firefox may change the format at any time.
Just give your img src= attribute the full URL. If the image happens to be cacheable (the server sends an appropriate Expires: or Cache-control: header, for example) and it's already in the cache, Firefox will not hit the network.
HTTP caching is supposed to be invisible. When you're generating content, you generally shouldn't worry about it.
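For instance, a response that arrives with headers along these lines (values purely illustrative) is typically cacheable:
HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: max-age=86400
Expires: Thu, 31 Dec 2026 23:59:59 GMT
With headers like these, a second img pointing at the same URL should be served straight from the cache.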
You can point REDbot at a URL to see all sorts of delicious information about its cacheability.