How can I implement a content converter in Firefox for all page elements?

I'm attempting to port over an Internet Explorer plugin to Firefox, but I'm not sure where to look for what I need.
Basically I need to be able to filter all content that is received by the browser with a certain Content-Type header. I tried implementing a stream converter, and this works, but only for the top-level document in the page, frame, or iframe. I had the same problem with IE, and getting around it was really hacky; since I would ideally like this to be cross-platform, I'd really like to be able to do this in Firefox without resorting to vtable hacks.
The content is served compressed with a proprietary compression format. So I need to receive the data, decompress it, and change the Content-Type back to what the original uncompressed file should have.
If there is a way to just filter all data received, that would probably be acceptable; I could handle parsing the header myself.
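For reference, the stream-converter approach mentioned above looks roughly like this in JavaScript. This is a minimal sketch, not working code: "application/x-proprietary" stands in for the real content type, and decompressChunk() is a hypothetical wrapper around the proprietary decompressor.

const Cc = Components.classes, Ci = Components.interfaces;

// Registered under the contract ID
//   @mozilla.org/streamconv;1?from=application/x-proprietary&to=*/*
// the converter sits between the channel and the final listener.
function ProprietaryConverter() {}
ProprietaryConverter.prototype = {
  // nsIStreamConverter: remember the downstream listener
  asyncConvertData: function (fromType, toType, listener, context) {
    this.listener = listener;
  },
  // nsIRequestObserver: rewrite the Content-Type before any data flows
  onStartRequest: function (request, context) {
    request.QueryInterface(Ci.nsIChannel).contentType = "text/html";
    this.listener.onStartRequest(request, context);
  },
  onStopRequest: function (request, context, status) {
    this.listener.onStopRequest(request, context, status);
  },
  // nsIStreamListener: decompress each chunk and pass it downstream
  // (offset bookkeeping is elided in this sketch)
  onDataAvailable: function (request, context, stream, offset, count) {
    var bytes = decompressChunk(stream, count); // hypothetical; returns a string
    var wrapped = Cc["@mozilla.org/io/string-input-stream;1"]
                    .createInstance(Ci.nsIStringInputStream);
    wrapped.setData(bytes, bytes.length);
    this.listener.onDataAvailable(request, context, wrapped, offset, bytes.length);
  }
};

As noted above, though, this only fires for the top-level document, which is exactly the limitation I'm trying to get around.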
Thanks

I think I may have found what I needed. I came across this link which is used for tracing HTTP calls: http://blues.ath.cx/firekeeper/resources/http_tracer.html
There seem to be some problems with the JavaScript implementation for some reason, and I'm not enough of a JavaScript guru to figure them out, but I've implemented it in C++ and initial results suggest that I should be able to modify it for my needs.
Basically we're replacing the nsIHttpProtocolHandler service with our own implementation, which keeps a reference to the initial implementation. When a call is made to the service, we just proxy it over to the saved original implementation. Then we provide our own implementations of nsIHttpChannel and nsIStreamListener, which we use as proxies too.
Again we proxy most of the calls back off to the original handlers. But in OnDataAvailable, instead of passing the data on to the underlying nsIStreamListener, we save it using nsIStorageStream. Then in OnStopRequest, after we've gotten all of the data, we can decompress it and then call OnDataAvailable on the original handler, followed by OnStopRequest.
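In JavaScript, the buffering listener described above would look roughly like this. Again a minimal sketch of the idea (my actual implementation is C++): decompressAll() is a placeholder that consumes the buffered stream and returns an nsIInputStream of decompressed data, and the Content-Type rewrite would happen on the proxying nsIHttpChannel, not shown here.

const Cc = Components.classes, Ci = Components.interfaces;

function BufferingListener(originalListener) {
  this.original = originalListener;
  this.storage = Cc["@mozilla.org/storagestream;1"]
                   .createInstance(Ci.nsIStorageStream);
  this.storage.init(8192, 0xffffffff, null);
  this.sink = this.storage.getOutputStream(0);
}
BufferingListener.prototype = {
  onStartRequest: function (request, context) {
    this.original.onStartRequest(request, context);
  },
  // Swallow the data instead of forwarding it downstream
  onDataAvailable: function (request, context, stream, offset, count) {
    this.sink.writeFrom(stream, count);
  },
  // Once the response is complete, decompress it and replay it
  // to the original listener
  onStopRequest: function (request, context, status) {
    this.sink.close();
    var decompressed = decompressAll(this.storage.newInputStream(0)); // placeholder
    this.original.onDataAvailable(request, context,
                                  decompressed, 0, decompressed.available());
    this.original.onStopRequest(request, context, status);
  }
};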
It has worked on some small simple tests so far, but I'll have to put it through some more rigorous tests... I'll also have to figure out if I can do the same thing with HTTPS.
The biggest problem I see at the moment is that it relies on some unfrozen interfaces such as nsIHttpChannelInternal. Can't be helped though as far as I can tell, and my version compatibility requirements are pretty small, so I can live with it if I have to.
In the meantime, if anybody has any other suggestions, I'm all ears :D

Related

Issues with Swashbuckle

I have a WebAPI service, written in ASP.NET (not Core), for which I am trying to generate documentation, in order to allow other devs to use it. I found Swashbuckle, and installed it. Then, since I also use OData for some of my services, I added Swashbuckle.OData. Then, I modified the CustomProvider setting in SwaggerConfig to use the ODataSwaggerProvider. I also set ResolveConflictingActions(apiDescriptions => apiDescriptions.First()) because I had a few Actions with the same URL path, differing only by query string (I'll need to address that later). So far so good.
Then, I tested it. I started my web app, then added "/swagger/" to the end. I got a message stating that it was loading the resource info. However, after several minutes, I got a browser error debug popup, stating "Error: Not enough storage is available to complete this operation." It asks if I want to debug, and if I do, it takes me to the debugger in IE (the browser I'm using). The only code in the stack is either from jquery-1.8.0.min.js or swagger-ui.min.js (this part confuses me, as there is no "swagger-ui.min.js" file in my project; I'm assuming it's embedded in the DLL). There is no part of the stack trace that floats back up to my code, and all the code there is minified, so it's very difficult to debug.
However, I do know that it is at least partially working, as three of the controllers do show up in the resulting page after you close the error popup. You can navigate through them, and all the GETs, POSTs, PUTs, and DELETEs seem to be there, and you can test them.
Is it the case that whenever you navigate to the "/swagger/" URL, Swagger hits all the URLs in the service, in order to generate the documentation? I'm wondering if maybe it is hitting an action that is taking a particularly long time to run, or possibly its generated documentation is taking too much disk space (I have plenty of space on my disk, but maybe the error is referring to RAM?).
Anyway, even if that were not an issue, how can I get it to generate something, some kind of document file, that I can send off to someone? I see no new files added to my folders, so it would seem that it re-does the whole process every time you navigate to the swagger URL.
When I tried the Chrome browser, I no longer had the issue (I was using IE11 before). Not sure what the problem was, but this was the workaround.

Getting the request body of search response with 2.X NEST client

I'm using the new 2.X NEST client. That part is important, because there were a great many breaking changes which will affect potential answers here.
Previously, I used the Glimpse Elasticsearch plugin to see the underlying queries being generated by NEST. However, it would appear that that plugin is no longer compatible with 2.X NEST. As a result, I'm trying to find a workaround to see the JSON query. The problem here is that the old way of accessing response.RequestInformation to get at the request body is gone. It seems to have been replaced with a combination of ApiCall, CallDetails, and DebugInformation.

The problem there is that in all of these the request byte array is null unless you add .DisableDirectStreaming() to the ConnectionSettings instance you pass into ElasticClient. And since I'm handling all that using dependency injection with Ninject, in something like a controller action I have no access to the ConnectionSettings instance to make such a change. I suppose I could just add .DisableDirectStreaming() globally, but I have no idea what the potential consequences of that are, and the documentation on this is frustratingly sparse.
So, there's a few avenues for an answer here, any of which I'll accept. First, if anyone has managed to get the Glimpse plugin functioning with 2.X, I'd love to know what you did. However, based on the fact that the underlying API has changed dramatically, my assumption is that the plugin is fundamentally broken until someone branches it for 2.X or Elastic comes out with their own version (which is supposedly coming at some undetermined point in the future).
Second, if there's some way to get at the request body without disabling direct streaming and I simply missed it, I'd appreciate guidance there.
Third, if anyone has any ideas for how I can disable direct streaming at the controller action level, without affecting my Ninject setup or applying it globally, feel free to chime in.
Finally, it would be great if someone from the Elastic team could enlighten me about the effects of disabling direct streaming and what potential problems can arise from that, so I can make a determination about whether it would be wise to apply it globally or not.
With .DisableDirectStreaming() set to true, the request bytes and response bytes are buffered in memory streams to enable them to be available on the response via response.RequestBodyInBytes and response.ResponseBodyInBytes, respectively.
By default, it is set to false, so the request type, e.g. SearchDescriptor<T>, SearchRequest<T>, etc., is serialized directly to the request stream of the HTTP request and, similarly, the response type is deserialized from the response stream. The overhead of setting it to true is therefore keeping the request and response bytes around in memory for the lifetime of the response (until the GC kicks in).
With ConnectionSettings, it's best to have one instance for the lifetime of the application; serialization settings are cached per ConnectionSettings instance, as are the caches for field and property expressions. There's currently no way to keep the request and response bytes around on a per-request basis, i.e. for ad-hoc introspection, but I think this would be a useful addition; I'll add an issue for it :)
I'm not personally overly familiar with the Glimpse integration, but I would expect it would require updating to work with NEST 2.x because of some of the changes. Having just given it a brief look, the changes look pretty straightforward. It looks like this can be done without having to set .DisableDirectStreaming() to true, by grabbing the request bytes just before they're written to the request stream.

How to disable browser caching in Vaadin

My question is short (and hopefully simple to solve!): How can I completely disable browser caching in my web service built with Vaadin?
I want to completely disable caching since I'm running into problems when I try to stream PDFs and display them in the browser.
I have read about a solution for my problem for example here:
Using <meta> tags to turn off caching in all browsers?
They talk about adding some headers to the web application that disable browser caching. But how do I add them to my Vaadin application?
A short code snippet would be very welcome (and helpful!)
Thanks once again for every answer and thought you're sharing with me.
It seems to me that you want to disable caching when downloading a PDF file. Assuming you are using a DownloadStream to stream the content, then setting the Content-Disposition and Cache-Control headers as follows should work.
DownloadStream stream = new DownloadStream(getStreamSource().getStream(), contentType, filename);
stream.setParameter("Content-Disposition", "attachment;filename=" + filename);
// This magic incantation should prevent anyone from caching the data
stream.setParameter("Cache-Control", "private,no-cache,no-store");
// In theory <=0 disables caching. In practice Chrome, Safari (and, apparently, IE) all ignore <=0. Set to 1s
stream.setCacheTime(1000);
If you want to disable caching for all Vaadin requests, you'll have to look at the source of AbstractApplicationServlet and extend methods such as #serveStaticResourcesInVAADIN and others - this is quite tricky, as a lot of them are private methods.
A simpler method may be to use an HTTP servlet filter to add the appropriate headers to the response, without having to modify your app at all. You can write this yourself - it should be quite easy - although a quick search finds the Apache2-licensed Cache-Filter: http://code.google.com/p/cache-filter/wiki/NoCacheFilter
I have not used Cache-Filter, but a quick skim suggests it'll work just fine for you.

How to obtain the Firefox user agent string?

I'm building an add-on for Firefox that simulates a website, but running from a local library. (If you want to know more, look here)
I'm looking for a way to get hold of the user-agent string that Firefox would send if it were doing plain HTTP. I'm implementing nsIProtocolHandler myself and serving my own implementation of nsIHttpChannel, so from a peek at the source it looks like I'll have to do all the work myself.
Unless there's a contract/object-id on nsHttpHandler I could use to create an instance just for a brief moment to get the UserAgent? (Though I notice I'll need to call Init() because it does InitUserAgentComponents() and hope it'll get to there... And I guess the http protocol handler does the channels and handlers so there won't be a contract to nsHttpHandler directly.)
If I have a little peek over the wall I notice this globally available call ObtainUserAgentString which does just this in that parallel dimension...
Apparently Firefox changed how this was done in version 4. Have you tried:
alert(window.navigator.userAgent);
You can get it via XPCOM like this:
var httpHandler = Cc["@mozilla.org/network/protocol;1?name=http"]
                    .getService(Ci.nsIHttpProtocolHandler);
var userAgent = httpHandler.userAgent;
If for some reason you actually do need to use NPAPI like you suggest in your tags, you can use NPN_UserAgent to get it; however, I would be shocked if you actually needed to do that just for an extension. Most likely Anthony's answer is more what you're looking for.

Where is Firefox's URL processor?

After three days digging into Mozilla NSS and the Firefox source code (and some extensions, and running the SSL sample code too), I'm clearly lost now.
My intention is just to do a simple thing: divert any HTTPS request in Firefox to my own callback functions, which hold some information for NSS/SSL to work with.
My only problem is: where is Firefox's code for processing an HTTPS URL? I mean, when we key an HTTPS address into the address bar and press Enter, I just need access to the source that triggers at that point, so that I can somehow intercept any request for HTTPS URLs.
Thanks.
So, you want to intercept the data, right? You'll want to play around with this interface.
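The interface behind that link isn't preserved above; the usual candidate for intercepting response data in Firefox of this era is nsITraceableChannel, driven from an "http-on-examine-response" observer (which also fires for HTTPS responses, after decryption). A minimal sketch under that assumption, where tracingListener is your own nsIStreamListener that inspects the data and forwards it to the original listener:

const Cc = Components.classes, Ci = Components.interfaces;

var observer = {
  observe: function (subject, topic, data) {
    if (topic !== "http-on-examine-response")
      return;
    var channel = subject.QueryInterface(Ci.nsITraceableChannel);
    // Splice our listener into the chain; setNewListener() returns the
    // previously installed listener, which we must call through to.
    tracingListener.originalListener = channel.setNewListener(tracingListener);
  }
};
Cc["@mozilla.org/observer-service;1"]
  .getService(Ci.nsIObserverService)
  .addObserver(observer, "http-on-examine-response", false);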
Not sure if it helps, but I asked this question:
XULRunner: Prevent links to arbitrary domains
Maybe it could be sufficient for your task to use the method explained in the first answer. However, it is an all-JavaScript solution.
Cheers,
