I am trying to program a simple Firefox (66.0 Quantum) extension intended to download complete HLS video streams. The basic approach is to start video playback using the page's regular UI and to intercept the loading of the corresponding HLS .m3u8 playlist files from within the extension's background script (i.e. using "browser.webRequest.onBeforeRequest"). The extension similarly intercepts the first "video-chunk" request triggered by the playlist (using "browser.webRequest.onSendHeaders"), so it knows what the correct HTTP headers for subsequent "video-chunk" requests need to look like.
Permission-wise I am currently using these:
"permissions": [
"downloads", "activeTab", "webRequest", "webRequestBlocking", "<all_urls>"
],
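For context, a minimal sketch of that interception side in the background script follows; the URL filters and the ".m3u8"/".ts" checks are assumptions for illustration, not my actual code:

// background.js (sketch): intercept the playlist and record chunk headers
const chunkHeaders = {};

// Spot the HLS playlist as the page requests it.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.url.includes(".m3u8")) {
      console.log("Playlist requested:", details.url);
      // ...fetch and parse the playlist here...
    }
  },
  { urls: ["<all_urls>"] }
);

// Record the headers of the first video-chunk request.
browser.webRequest.onSendHeaders.addListener(
  (details) => {
    if (details.url.includes(".ts")) {
      chunkHeaders[details.url] = details.requestHeaders;
    }
  },
  { urls: ["<all_urls>"] },
  ["requestHeaders"]
);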
Based on the information collected above, the extension should ideally be able to create correct XMLHttpRequest requests for all the "movie-chunks" found in the playlist and ultimately download all those files. Obviously such a long-running "download" task should be performed in the background script, and once the above information has been collected there should be no need to keep the original browser tab open.
Problem: Some of the servers that serve the movie-chunks apparently rely on the "Referer" (and maybe "Origin") header for some kind of access control, and XMLHttpRequest instances created from within the background script DO NOT allow setting that header (since it is a restricted field, it cannot be manually set to the correct value).
The only workaround that I have found so far leaves a lot to be desired: a content.XMLHttpRequest created within a content script uses the correct headers, and my background script can delegate (via messaging) the respective file loading to the content script (which is rather silly and means that the download will fail if the original browser tab is closed while it is in progress).
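A rough sketch of that delegation, with the message shape and helper names made up purely for illustration:

// background.js (sketch): delegate one chunk download to the content script
browser.tabs.sendMessage(tabId, { type: "fetchChunk", url: chunkUrl })
  .then((response) => saveChunk(chunkUrl, response.data));

// content-script.js (sketch): content.XMLHttpRequest runs in the page's
// context, so Referer/Origin come out right
browser.runtime.onMessage.addListener((msg) => {
  if (msg.type !== "fetchChunk") return;
  return new Promise((resolve) => {
    const xhr = new content.XMLHttpRequest();
    xhr.open("GET", msg.url);
    xhr.responseType = "arraybuffer";
    xhr.onload = () => resolve({ data: xhr.response });
    xhr.send();
  });
});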
Is there any way to properly perform/complete all the download logic on the background-script side (even after the originating browser tab has been closed)?
I'm creating a web application using parse and have found that in order for a user to authenticate I need to make all requests using HTTPS. I'm able to switch this over and get it to work correctly, but when I do I get all kinds of mixed content errors because I'm retrieving PFFile objects which only return a non-secure URL.
This wouldn't even be a huge concern with Chrome or Safari, but of course IE needs to present a message to the user and block all this content. Are there any potential workarounds? Why can't Parse just put a setting in the app to enable files to be served from a secure URL? This seems completely ridiculous. How do people get around this? Are you completely avoiding the use of PFFile?
Replace http:// with https://s3.amazonaws.com/.
So if you start with this:
http://files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
The final url will look something like this:
https://s3.amazonaws.com/files.parsetfss.com/b05e3211-bf8b-.../tfss-fa825f28-e541-...-jpg
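In code, that boils down to a one-line string replacement on the URL a PFFile hands back (variable names here are illustrative):

// Rewrite the insecure PFFile URL to its HTTPS S3 equivalent
var secureUrl = fileUrl.replace("http://", "https://s3.amazonaws.com/");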
I have an application which at one point wants to launch a particular URL in the default browser. This is pretty simple and can be achieved using ShellExecute on Windows. However the catch is that the server expects some additional custom header information (for authentication/identification purposes) to be sent along with the GET request.
Is there any way by which this (additional header) information could be passed to the browser while launching it?
Note: I want to launch the default browser, not use a web browser control.
As I understand it, you have only one option: add an intermediate page (on the internet or on localhost).
You have to create yoursite.com/sendHeaders.php or localhost/sendHeaders.php (or any other extension; choose whatever language you prefer), which does the following:
Unpack parameters (URL and headers),
Connect to the URL, send the headers,
Print the answer in the browser.
So you will open the intermediate page yoursite.com/sendHeaders.php?url=realUrl&headers=packedHeaders in your browser, but the browser will show you the page at realUrl, which received the proper headers.
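The answer suggests PHP, but the same relay can be sketched in a few lines of Node; everything here (the port, the parameter encoding) is illustrative:

// sendHeaders.js (sketch): GET /sendHeaders?url=<realUrl>&headers=<JSON>
const http = require("http");
const https = require("https");

http.createServer((req, res) => {
  const q = new URL(req.url, "http://localhost");
  const target = q.searchParams.get("url");            // unpack the URL
  const headers = JSON.parse(q.searchParams.get("headers") || "{}");
  const client = target.startsWith("https") ? https : http;
  client.get(target, { headers }, (upstream) => {      // send the headers
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res);                                // print the answer
  });
}).listen(8080);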
The Problem
There's an item (foo.js) that rarely changes. I'd like this item to be stored in the browser's cache (using Expires header). However, when it does change, I'd like the browser to update to the newest version.
The Attempt
Foo.js is returned with a far future Expires header. It's cached on the browser and requires no round trip query to the server. Just the way I like it. Now, when it changes....
Let's assume I know that the user's version of foo.js is outdated. How can I force a fresh copy of it to be obtained? I use xhr to perform a POST to foo.js. This should, in theory, force the browser to get a newer version of foo.js.
Unfortunately, this only seems to work in Firefox. Other browsers will use their cached copy, even if other POST parameters are set.
WTF
First off, is there a way to do what I'm trying to do?
Second, why do browsers have no sensible key/value type of cache? Why can I not simply include in headers: "Cache: some_key, some_expiration_time" and also specify "Clear-Cache: key1, key2, key3" (the keys must be domain-specific, of course)? Instead, we're stuck with either expensive round trips that ask "is content new?", or the ridiculous "guess how long it'll be before you modify something" Expires header.
Thanks
Any comments on this matter are greatly appreciated.
Edits
I realize that adding a version number to the file would solve this. However, in my case it is not possible -- the call to "foo.js" is hardcoded into a bookmarklet.
You can just add a query string to the end of the file URL; the server can ignore it, but the browser can't and must treat it as a new request:
http://www.site.com/foo.js?v=1.12345
Many people use this approach; SO uses a hash of some sort, and I use the build number (so users get a new version with each build). If either of these is an option, you get the benefit of long-duration cache headers, but still force a fetch of a new copy when needed.
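For example, when emitting the script tag (buildNumber is whatever hash or build identifier you track; the URL is taken from the example above):

// Append a cache-busting query string: the server ignores it,
// but the browser treats each new value as a brand-new URL
var script = document.createElement("script");
script.src = "http://www.site.com/foo.js?v=" + buildNumber;
document.head.appendChild(script);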
Why set your cache expiration so far in the future? If you set it to one day, for instance, the only overhead you will incur (once a day) is the browser revalidating that it is the same file. If you still have not changed it, then you will not re-download the file; the server will respond with a not-modified (304) response.
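Concretely, a one-day lifetime plus revalidation looks roughly like this on the wire (header values are illustrative):

First response:
HTTP/1.1 200 OK
Cache-Control: max-age=86400
ETag: "abc123"

Revalidation a day later:
GET /foo.js HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified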
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it's available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).

Generally speaking, these are the most common rules that are followed (don't worry if you don't understand the details, it will be explained below):

If the response's headers tell the cache not to keep it, it won't.

If the request is authenticated or secure (i.e., HTTPS), it won't be cached.

A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if it has an expiry time or other age-controlling header set and is still within the fresh period, or if the cache has seen the representation recently and it was modified relatively long ago.

Fresh representations are served directly from the cache, without checking with the origin server.

If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.

Under certain circumstances (for example, when it's disconnected from a network) a cache can serve stale responses without checking with the origin server.

If no validator (an ETag or Last-Modified header) is present on a response, and it doesn't have any explicit freshness information, it will usually (but not always) be considered uncacheable.

Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn't changed.

http://www.mnot.net/cache_docs/#BROWSER
There is an excellent suggestion made in this thread: How can I make the browser see CSS and Javascript changes?
See the accepted answer by user, "grom".
The idea is to use the "modified" timestamp from the server to note when the file has been modified, adding a version parameter to the end of the URL so that your CSS and JS files have URLs like this: my.js?version=12345678
This makes the browser think it is a new file, and so it does not refer to the cached version.
I am using a similar method in my app. It works pretty well. Of course, this would assume you are using something like PHP to process your HTML.
Here is another link with a simpler implementation for WordPress: http://markjaquith.wordpress.com/2009/05/04/force-css-changes-to-go-live-immediately/
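The linked approach is PHP-based; here is the same idea sketched in Node, with an assumed public/ directory layout:

// Stamp a URL with the file's last-modified time so it changes on deploy
const fs = require("fs");

function versionedUrl(path) {
  const mtime = Math.floor(fs.statSync("public" + path).mtimeMs / 1000);
  return path + "?version=" + mtime;
}

// versionedUrl("/my.js") -> "/my.js?version=1234567890"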
With these constraints, I guess your only option is to use window.location.reload(true) and force the browser to refresh all the cached items... it's not pretty.
You can invalidate the cache for a specific URL using the Cache-Control HTTP header.
On your desired URL you can run (with xhr/ajax, for instance) a request with the following headers:
headers: {
'Cache-Control': 'no-cache, no-store, must-revalidate, max-age=0',
Pragma: 'no-cache',
Expires: '0',
}
Your cache will be invalidated, and subsequent GET requests will return a brand-new result.
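Put together, such a request might look like this with fetch (the URL is a placeholder, and the effect on the browser cache is as this answer describes):

// Request a fresh copy, bypassing any cached representation
fetch("/foo.js", {
  headers: {
    "Cache-Control": "no-cache, no-store, must-revalidate, max-age=0",
    "Pragma": "no-cache",
    "Expires": "0",
  },
});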
I'm doing an AJAX download that is being redirected. I'd like to know the final target URL the request was redirected to. I'm using jQuery, but also have access to the underlying XMLHttpRequest. Does anyone know a way to get the final URL?
It seems like I'll need to have the final target insert its URL into a known location in the headers or response body, then have the script look for it there. I was hoping to have something that would work regardless of the target though.
Additional note: I'm asking how my code can get the full url from production code, which will run from the user's system. I'm not asking how I can get the full url when I'm debugging.
The easiest way to do this is to use Fiddler or Wireshark to examine the HTTP traffic. Use Fiddler at the client if your interface uses a browser, otherwise use Wireshark to capture the traffic on the wire.
One word - Firebug. It is a Firefox plugin; never do any kind of AJAX development without it.
Activate Firebug and select Net, then perform your AJAX request. This will show the URL that is called, the entire request (header and body) and the entire response (once again, header and body). It also allows you to step through your JavaScript and debug it - breakpoints, watches, etc.
I'll second the Firebug suggestion. You'll see the url as the "Location" header in the http response.
It sounds like you also want to get this url in js? If so, you can get it off the xhr response object in the callback (which you can also inspect using FB!). :)
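For the in-JS part, modern browsers expose the post-redirect URL directly on the xhr object as responseURL (not available in very old browsers; the endpoint below is a placeholder):

var xhr = new XMLHttpRequest();
xhr.open("GET", "/some/redirecting/endpoint");
xhr.onload = function () {
  // After any redirects, responseURL holds the final URL
  console.log(xhr.responseURL);
};
xhr.send();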