Chrome.runtime.connect no longer identified? - ajax

I have an extension with a background page and a sandbox page where most of the content scripts execute.
Whenever I need to make an Ajax call, it has to run in the background environment, otherwise I get a CORS error. Recently (as of last week, I believe) chrome.runtime is no longer available in the sandbox environment for some reason. I can't find any notes about the change and I'm trying to figure out how to communicate with the background page now.
I had this in the sandbox environment to initialize a connection port for passing messages for an Ajax request:
var ajaxCall = chrome.runtime.connect({name: "ajaxCall"});
Is there any info out there that I'm missing on why this change occurred and what are some possible workarounds?
Here's the output of the chrome object; the first is from the background page and the second is from the sandbox page. They used to be identical.
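One workaround I'm looking at, in case it's useful to others, is the postMessage relay pattern for sandboxed pages: the sandbox talks to the page that embeds it, and that embedding page (which still has chrome.runtime) relays to the background. This is only a rough sketch; the message names, the iframe id, and the example URL are placeholders of my own.

In the sandboxed page (no chrome.* APIs here, so defer to the embedding page):

window.addEventListener('message', function (event) {
  if (event.data && event.data.type === 'ajaxResult') {
    console.log('response relayed from background:', event.data.payload);
  }
});
// hypothetical request; the message shape is my own convention
window.parent.postMessage({ type: 'ajaxCall', url: 'https://api.example.com/data' }, '*');

In the extension page that embeds the sandbox iframe (chrome.runtime still works here):

var ajaxCall = chrome.runtime.connect({ name: 'ajaxCall' });
var sandboxFrame = document.getElementById('sandboxFrame'); // placeholder id of the <iframe>

window.addEventListener('message', function (event) {
  if (event.data && event.data.type === 'ajaxCall') {
    ajaxCall.postMessage(event.data); // forward to the background page, which performs the XHR
  }
});

ajaxCall.onMessage.addListener(function (msg) {
  sandboxFrame.contentWindow.postMessage({ type: 'ajaxResult', payload: msg }, '*');
});

The background page would keep its existing chrome.runtime.onConnect listener for the "ajaxCall" port and reply over the same port.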

Related

Firefox seems to fail on registering a ServiceWorker for Push Notifications?

Firefox seems to fail on registering a ServiceWorker for Push Notifications, with an error "InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable", but the code works in Chrome and Edge, and appears to be compliant with the examples online and the spec.
I've thrown an example up on one of my test sites, https://wiegandtech.net/ - visiting it in Chrome will prompt for permission and then opt in successfully, sending the info to the server. But Firefox prompts, doesn't complete the registration, and doesn't fire any error or throw anything into the console. When I try to debug, it seems to never return from the navigator.serviceWorker.ready.then call - when I step in, reg is undefined, even though the promise says it shouldn't be. I can find no reason why this is failing. I do see in Fiddler that Firefox fetches the worker file, so it appears to start the call but never finish it. The worker is valid JavaScript, as far as I can tell. Does anyone have any documentation on how Firefox's implementation differs from Chrome's/the spec?
Firefox requires the ServiceWorker's URL to end in .js. I was using an ASP.NET site and returning JavaScript through my own controller; when I give it the URL of the .js file itself, it works. I would file a bug, but it's non-trivial to set up a reproduction given that ServiceWorkers require a real, live site to troubleshoot, and Firefox's source code doesn't appear to be on GitHub.
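For anyone hitting the same thing, the difference comes down to the script URL passed to register(); the paths below are placeholders for my setup, not the actual routes:

// Works in Firefox: the URL points at the physical .js file.
navigator.serviceWorker.register('/push-worker.js')
  .then(function (reg) { console.log('service worker registered, scope:', reg.scope); })
  .catch(function (err) { console.error('registration failed:', err); });

// Never completed in Firefox (for me): the same script served through an MVC route.
// navigator.serviceWorker.register('/push/worker');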

"New version available" with service worker and sw-precache

I'm trying to use sw-precache, but I must be doing something wrong!
I'm mostly using the demo code available from the github repo and can't seem to get updates to the app to come through. Once it's cached the first time, it never checks for new versions.
I was expecting that when I publish a new service worker, the browser would request the new service worker and update the cache accordingly in the background. Then using the registration code in the example, I would be able to prompt the user to refresh and get the latest version from their newly refreshed cache.
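For reference, the update-detection pattern I have in mind is roughly the one from the demo's registration code. This is my paraphrase, so treat it as a sketch rather than the exact code:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js').then(function (registration) {
    registration.onupdatefound = function () {
      var installingWorker = registration.installing;
      installingWorker.onstatechange = function () {
        if (installingWorker.state === 'installed') {
          if (navigator.serviceWorker.controller) {
            // An older service worker is still controlling the page,
            // so new content has been fetched into the cache.
            console.log('New or updated content is available.');
          } else {
            // First install: everything has been precached.
            console.log('Content is now available offline!');
          }
        }
      };
    };
  });
}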
I would really appreciate it if someone could please point me in the right direction.
Example
To demonstrate the problem, I've created an isolated example here:
https://github.com/stevenocchipinti/sw-precache-demo
The example uses a basic skeleton from create-react-app, which has a built-in build task that takes care of fingerprinting the filenames, etc.
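(For context, after that build I generate the service worker with the sw-precache Node API, roughly as in the sketch below; the config file name and output path are placeholders of mine, and the config contents are shown just after.)

// generate-sw.js - a sketch; assumes sw-precache is installed as a dev dependency
var swPrecache = require('sw-precache');
var config = require('./sw-precache-config.json'); // the JSON config shown below

swPrecache.write('build/service-worker.js', config)
  .then(function () { console.log('build/service-worker.js generated'); })
  .catch(function (err) { console.error('sw-precache failed:', err); });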
I suspect the problem is with me caching everything by using the following sw-precache config:
{
"staticFileGlobs": [ "build/**/*.*" ],
"stripPrefix": "build/"
}
There are more accurate steps in the repo's readme, but the basic steps I'm taking to reproduce the problem are as follows (with my probably incorrect expectations).
Steps and Assumptions
1. Browse to the app for the first time
   - I should see "Content is now available offline!" in the console
2. Reload the page
   - The message in the console should not appear again because the service worker is installed, but the page should still work
3. Go offline and reload the page
   - The page should still work
4. Make a visible change to the source code
5. Rebuild (run the build task and sw-precache)
   - This is where my understanding must be wrong
6. Reload the page
   - The service worker should update the cache in the background
   - When it's done, you should see "New or updated content is available." in the console
   - The actual visible changes should not be visible until the next reload
7. Reload the page again
   - The browser will use the new cache this time around
   - The changes should be visible now!
   - There shouldn't be any messages in the console
The problem
Once the app has been cached initially, it will never update unless you unregister the service worker or force a reload.
I'm not sure how to make this work - any help would be greatly appreciated!
After replicating your development hosting environment, I can see that you're serving your service-worker.js file with a browser HTTP cache lifetime of one hour.
There's more information as to why this is leading to the behavior you're seeing, along with best practices, in this previous answer. As mentioned at the top of that answer, browsers plan on changing their behavior to stop honoring the HTTP cache for the service worker file by default, mainly due to the type of confusion that you're experiencing here. For the time being, though, the production versions of both Chrome and Firefox continue to honor those headers.
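In the meantime, the usual workaround is to serve service-worker.js with headers that keep it out of the HTTP cache (or give it a very short lifetime). A minimal sketch, assuming an Express server fronting the build/ directory, which is not necessarily your setup:

var express = require('express');
var app = express();

app.use(express.static('build', {
  setHeaders: function (res, filePath) {
    // Opt the service worker itself out of the HTTP cache so updates are picked up promptly.
    if (filePath.endsWith('service-worker.js')) {
      res.setHeader('Cache-Control', 'no-cache');
    }
  }
}));

app.listen(3000);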

Jetty server 7.6.9 losing cookie in AJAX and high-concurrency situation

I have a page which sends several Ajax requests to my Jetty 7.6.9 server. All of them contain a cookie named JSESSIONID so that the server knows the request belongs to a logged-in session.
But sometimes the method org.eclipse.jetty.server.Request.getCookies() returns an empty Cookie[]. I set a breakpoint and checked _connection._requestFields, and I found the cookie right there, but Request.getCookies() cannot fetch or parse it.
The situation can happen in any one or more of the Ajax requests on that page, at any time, on both Windows and Linux. It seems random, and even when I dropped the frame at the breakpoint back to the previous line, it would run correctly when it reached the same place again, so I think it's a synchronization/concurrency issue.
I didn't find the same case in jetty bug list.
Is it a bug? What can I do to verify or reproduce it? How can I fix it?
(For various reasons, I may not be able to update the Jetty version for our system.)

Ajax Post Request blocks website loading

I have a strange problem with an Ajax POST request. I use the request to run an ImageMagick process directly on the command line via the PHP function exec(). The process takes about a minute and then responds with some variables. This works fine, except for one problem: during the execution time I cannot access other parts of the website that are installed on the same web server (as if the server is unreachable). When the process finishes, everything works fine again.
I first thought this was due to an overloaded server. However, when I access the website from another browser, there are no problems, even while the process started in the first browser is still running. So it looks like the problem has something to do with the browser's other requests being blocked during the POST request.
Could anyone help me out here? What could be the root problem?
Found the solution, thanks to the help from kukipei. By adding session_write_close(); to the file that handles the Ajax request (after it has read the userid and token), the session file is no longer locked, and all pages are accessible again. The problem was that the session was locked for the whole execution time of the process, which was not necessary, since I only needed the session to read the userid and token. So before calling the ImageMagick operation, I now call session_write_close().

Ajax call from local html page in Webkit Qt

I'm trying to perform an Ajax/XMLHttpRequest from within a local HTML file in Qt 4.7 RC QWebView. It consistently fails with an empty responseText and status 0. I've set the following
page->settings()->setAttribute(QWebSettings::LocalContentCanAccessRemoteUrls,true);
but it has no effect (I can load remote images without problems though).
It seems to be a known issue and I'm not sure if there is a solution already.
https://bugs.webkit.org/show_bug.cgi?id=31875
Any ideas for a workaround would be very helpful. Basically, what I'm trying to do is run an HTML/JavaScript web app in QWebView that talks to a local server at 127.0.0.0, and this problem is kind of a show-stopper. Interestingly, the actual query is sent and my server responds with 200 and the requested data, but the response never arrives in my JavaScript callbacks.
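For illustration, the request looks roughly like this; the URL and port are placeholders for my local server:

// Runs inside the local HTML file loaded into QWebView.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://127.0.0.1:8080/data', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // Here I get status 0 and an empty responseText,
    // even though the server logs a 200 with the requested data.
    console.log('status:', xhr.status, 'response:', xhr.responseText);
  }
};
xhr.send();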
I'm not sure I follow your question, but are you sure you're not running into the Ajax security sandbox restrictions that WebKit applies? In Firefox, IE and others, using Ajax across different domains does not work. In fact, http://demo1.demo.com is treated as different from demo2.demo.com.
