I'm trying to load an image from the Firefox cache as the title suggests. I'm running Ubuntu, so the location of my cache is /home/me/.mozilla/firefox/xxxxxx.default/Cache
However, in the Cache (and this is on Mac, too) the filenames are just ridiculous combinations of letters and numbers. Is there a way to pinpoint a certain file?
You should take a look at the source code of the CacheViewer Add-on.
Download the file instead of installing it (right-click and save as), then extract it (it's just a ZIP file, even though it has a .xpi extension), then extract the cacheviewer.jar file inside the resulting chrome folder. Finally, go into content and then cacheviewer to find the JavaScript and XUL files.
From my brief investigation, the useful routines are in the cacheviewer.js file, though if you were hoping for a simple JavaScript one-liner for accessing cached items, you're probably going to be disappointed. The XUL files (which are just XML) are helpful in working out which JS functions are called to perform particular tasks. I'm not too sure how all this maps into Greasemonkey, rather than the extension environment, but hopefully there's enough code to get you started.
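If you'd rather script the unpacking than click through it, here is a rough Python sketch of the same steps (the file and folder names are just examples for a locally saved copy of the add-on):

import zipfile

# The .xpi is an ordinary ZIP archive; the path below is just an example.
xpi_path = "cacheviewer.xpi"

with zipfile.ZipFile(xpi_path) as xpi:
    xpi.extractall("cacheviewer_unpacked")

# The cacheviewer.jar inside the chrome folder is also a ZIP archive.
jar_path = "cacheviewer_unpacked/chrome/cacheviewer.jar"
with zipfile.ZipFile(jar_path) as jar:
    jar.extractall("cacheviewer_unpacked/chrome/cacheviewer_jar")
    # The JS and XUL files live under content/cacheviewer/ in this archive.
    print(jar.namelist())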
Ummm, that really is an internal implementation detail. But I suggest looking at how about:cache?device=disk and about:cache-entry?client=HTTP&sb=1&key=https://stackoverflow.com/Content/img/wmd/blockquote.png are implemented.
Also, http://www.securityfocus.com/infocus/1832 gives details. Note that Firefox doesn't use a separate file for everything...
And of course, Firefox may change the format at any time.
Just give your img src= attribute the full URL. If the image happens to be cacheable (the server sends an appropriate Expires: or Cache-Control: header, for example) and it's already in the cache, Firefox will not hit the network.
HTTP caching is supposed to be invisible. When you're generating content, you generally shouldn't worry about it.
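For illustration, here is a minimal Python sketch (using the standard library's http.server; the file image.png is a placeholder) of a server marking an image as cacheable so the browser can reuse its cached copy:

from http.server import BaseHTTPRequestHandler, HTTPServer

class CachedImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve one hypothetical image and tell the browser it may cache it
        # for a day; on later page loads Firefox can reuse the cached copy
        # without hitting the network.
        with open("image.png", "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Cache-Control", "public, max-age=86400")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CachedImageHandler).serve_forever()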
You can point REDbot at a URL to see all sorts of delicious information about its cacheability.
I would like to download all images in full quality from this blog: http://w899c8kcu.homepage.t-online.de/Blog.
I have access to server, but I can not find the directory where the images lie. When I use Firebug on the first picture, it shows me http://w899c8kcu.homepage.t-online.de/Blog;session=f0577255d9df9185d3abe04af0ce922d&focus=CMTOI_de_dtag_hosting_hpcreator_widget_PictureGallery_15716702&path=image.action&frame=CMTOI_de_dtag_hosting_hpcreator_widget_PictureGallery_15716702?id=34877331&width=1000&height=2000&crop=false.
How can I find the file paths like /dirname/image.jpg?
According to its HTML output the page obviously uses the CM4all content management system (CMS).
I don't know precisely how this CMS works, but CMSs generally either save files under cryptic names in a folder specified in the CMS's configuration, or keep them not in the file system at all but in a database.
Also, the CMS may only save compressed or resized versions of the original files.
So, if you don't want to or are not able to dig into the server-side script code to find out if and where the images are saved, you should contact the company behind CM4all about this.
The benefits of hiding a file extension that I know of are user-friendly URLs, and a thin layer of security (I say thin because if someone really wanted to find out the extension of a file whose type has been hidden, it probably wouldn't be difficult. Am I wrong?).
But why should you do this (hide the extension), rather than use a file of type "file", with no extension? For example, if I have an extension-less file named "404", the error page works without a problem (pretend I have absolutely no IE visitors).
Is there any added benefit of actively hiding the extension of a file that has one, over using files that don't have extensions? See any linked pages from schema.org for an example.
You hide the file extensions because it is Good Design.
The idea is that URIs and URLs are independent of implementation, and the user shouldn't have to care about what type of file they are looking at, whether .php or .html. If I want to look at a page on the latest Fender Strats, I should just go to something like www.fender.com/strats/latest and get all that I need.
The added benefit is that the URL remains "Uniform" and you don't have to change it (especially when users have bookmarked your site) if one day you decide to shift from PHP to Django or Rails.
Shorter URLs are one benefit of leaving out the extension?
I would like to save a web page programmatically.
I don't mean merely save the HTML. I would also like automatically to store all associated files (images, CSS files, maybe embedded SWF, etc), and hopefully rewrite the links for local browsing.
The intended usage is a personal bookmarks application, in which link content is cached in case the original copy is taken down.
Take a look at wget, specifically the -p flag
-p
--page-requisites
This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
The following command:
wget -p http://<site>/1.html
will download 1.html and all the files it requires.
On Windows, you can run IE as a COM object and pull everything out.
Alternatively, you could take the source of Mozilla and work from that.
In Java, there's Lobo.
Or use commons-httpclient and write a lot of code.
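If you do end up writing that code yourself, a rough Python sketch (standard library only; it handles just <img> and stylesheet <link> tags, ignores many edge cases, and does only naive link rewriting) might look like this:

import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect URLs of images and stylesheets referenced by the page."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])

def save_page(url, out_dir="saved_page"):
    os.makedirs(out_dir, exist_ok=True)
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

    collector = AssetCollector()
    collector.feed(html)

    for asset in collector.assets:
        absolute = urllib.parse.urljoin(url, asset)
        local_name = os.path.basename(urllib.parse.urlparse(absolute).path) or "asset"
        try:
            urllib.request.urlretrieve(absolute, os.path.join(out_dir, local_name))
        except OSError:
            continue  # skip assets that fail to download
        # Naive rewriting: point the original reference at the local copy.
        html = html.replace(asset, local_name)

    with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
        f.write(html)

# Example usage (hypothetical URL):
# save_page("http://example.com/1.html")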
You could try the MHTML format (which is what IE uses). http://en.wikipedia.org/wiki/MHTML
In other words, you'd be downloading each object (image, css, etc.) to your computer, and then "embedding" them, via Base64, into a single file.
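As a rough illustration of the idea, here is a Python sketch that inlines one image as a data: URI (a real MHTML file would instead store each object as a MIME part, and image.png is just a placeholder):

import base64

# Read a local image and embed it directly into an HTML page.
with open("image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

html = f'<html><body><img src="data:image/png;base64,{encoded}"></body></html>'

with open("single_file_page.html", "w", encoding="utf-8") as f:
    f.write(html)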
Back in the earlier days of the internet I remember that in certain browsers, every time you downloaded an image or a file, the URL of where that file was downloaded from would be written into that file's properties (I guess the summary tab?). I think Netscape v2 did this if I remember correctly.
I really miss that kind of functionality as every once in a while I'll run into a neat little program stored somewhere in the depths of my hard drive and wonder where I got it from originally.
I googled around but I'm not quite sure what terms to use to describe what I'm looking for. So I'm wondering if anyone knows of a Firefox plug-in or something similar that would do this?
If you use the DownThemAll! extension for Firefox, you can tell it to prepend the URL of the site to the downloaded file name...
thus you end up with files like:
download.com_utils_compression_ABCD32.exe
It also works really well when you want to download/queue a bunch of files.
You download http://example.com/foo to ~/Desktop/foo, and you want to see the originating URL in the properties of the local file foo?
Back when I used OS X, I remember Safari used to record the original URL in the resource fork of the downloaded file. Can't remember what the named fork is, well, named, but it'll show up in the properties panel from Finder. Since it's there, Spotlight will probably index it, too, but I haven't used OS X since 10.3.
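If you want something similar yourself on Linux, one option is to stash the URL in an extended attribute when you download the file. A minimal Python sketch (the attribute name is my own invention, not a standard, and it needs a filesystem with user xattrs enabled):

import os
import urllib.request

url = "http://example.com/foo"              # hypothetical download
local_path = os.path.expanduser("~/Desktop/foo")

urllib.request.urlretrieve(url, local_path)

# Record where the file came from; read it back later with os.getxattr().
os.setxattr(local_path, "user.origin_url", url.encode("utf-8"))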
If you use Opera, and haven't cleared the file out from your download manager, select the download and it'll show the original URL that the file is from in the properties pane.
Is this what you want? If so... well, I don't know of a similar Firefox extension, but it'll clarify the question.
For the IE browser I use the hell out of Fiddler to look at all traffic going across the wire.
For Firefox, you can use the Firebug plugin. There is a "Net" tab that will show you request information that is going across the wire.
Most of the time you can use one of these tools to see what URL was requested in order to start a download. You can also view all the get and post information that might need to be sent in order to have your request succeed.
Fiddler is here: http://www.fiddlertool.com/fiddler/
Firebug is here: https://addons.mozilla.org/en-US/firefox/addon/1843
Best of Luck!
When I am using a form containing <input id="myFile" type="file" runat="server" /> to upload a file, my server-side code only sees the filename without the full path when using Firefox, while it works just fine in IE.
Is it possible to retrieve the full file path server-side in this case?
You cannot. Actually, only IE gives this information, which isn't important to the server in most cases. Neither FF nor Opera, at least, provides this info.
[UPDATE] Also tried with Safari, still no path... Somebody reported that Chrome might provide the info, although being a beta, that might change...
Perhaps you might need it in some intranet cases. If so, you might ask the user to paste the path into a secondary input field... Not very friendly, but at least they know they are providing the info.
Actually, I know some people needed this info for various reasons, so they used JavaScript to pick up the path from the file input field and put it in a hidden field. FF developers found this insecure (you can learn a lot from a simple path... like the login name of the user!) and prohibited such usage in FF3, making some people angry about this release...
References: Firefox 3's file upload box is discussed in "Firefox 3 annoyance: Keying-in disabled in file upload control"...; also "File input box disabled leads to great usability problem", among many others.
You can never be sure of getting a full filepath or even a reliable filename or content-type submitted in a file upload file. Even if you get a full filepath you don't know what the path separator character is on the client's operating system, or whether a file extension (if present) denotes anything at all.
If your application requires the filepath/filename/content-type of a submitted file for anything more than giving the user a default title for the item uploaded, it's doing something wrong and will need fixing.
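If all you want is that default title, here is a defensive Python sketch (framework-agnostic; submitted_filename stands in for whatever your upload API hands you) that strips any path the browser may have sent:

import ntpath
import posixpath

def safe_display_name(submitted_filename):
    # IE may submit 'C:\Users\me\photo.jpg', other browsers just 'photo.jpg',
    # and we don't know the client's path separator, so strip both styles.
    name = ntpath.basename(submitted_filename)   # handles backslashes
    name = posixpath.basename(name)              # handles forward slashes
    return name or "upload"

print(safe_display_name(r"C:\Users\me\photo.jpg"))  # -> photo.jpg
print(safe_display_name("photo.jpg"))               # -> photo.jpg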
I already stated this in a comment, but I think it bears repeating.
Microsoft opted to make the file control give the entire path to the file for use in intranet applications.
The HTML specification only makes mention of what the value should contain in one spot:
User agents may use the value of the value attribute as the initial file name.
However, they also have examples of what the multipart/form-data encoding should look like, and it doesn't contain the file path.
In other words, IE is breaking the standard and you can't rely on other browsers, even later versions of IE, to support it.
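For reference, here is a sketch (built as a Python string just to print it; the field and file names are hypothetical) of what a typical non-IE browser sends for such a file field. Note that the filename attribute carries no directory path:

# A rough approximation of a multipart/form-data request body.
boundary = "----example-boundary"
body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="myFile"; filename="photo.jpg"\r\n'
    "Content-Type: image/jpeg\r\n"
    "\r\n"
    "<binary image data>\r\n"
    f"--{boundary}--\r\n"
)
print(body)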