Workbox: the danger of self.skipWaiting()

I use Workbox to pre-cache assets required to render the app shell, including a basic version of index.html. Workbox assumes that index.html is available in the cache; otherwise, page navigation fails, because I have this route registered in my Service Worker:
workbox.routing.registerNavigationRoute('/index.html');
I also have the self.skipWaiting() instruction in the install listener:
self.addEventListener('install', e => {
  self.skipWaiting();
});
As I understand it, there are 2 install listeners now:
One that's registered by Workbox for pre-caching assets (including index.html)
One that I registered manually in my Service Worker
Is it possible for self.skipWaiting() to succeed while Workbox's install listener fails? This would lead to a problematic state where assets don't get pre-cached but the Service Worker is activated. Is such a scenario possible and should I protect against it?

I highly recommend "The Service Worker Lifecycle" as an authoritative source of information about the different stages of a service worker's installation and updating.
To summarize some info from that article, as it applies to your question:
The service worker first enters the installing phase, and every install listener you've registered gets a chance to execute. As you suggest, Workbox creates its own install listener to handle precaching.
Only if every install listener completes without error will the service worker move on to the next stage, which might either be waiting (if there is already an open client using the previous version of the service worker) or activating (if there are no clients using the previous version of the service worker).
skipWaiting(), if you choose to use it, will bypass the waiting stage regardless of whether or not there are any open clients using the previous version of the service worker.
Calling skipWaiting() will not accomplish anything if any of the install listeners failed, because the service worker will never leave the installing phase; in that case it's basically a no-op.
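To make this concrete, here is a minimal hand-rolled sketch (not Workbox's actual code; the cache name and precache list are illustrative) of why the failure mode you describe can't occur:
self.addEventListener('install', event => {
  // If this promise rejects, the whole install fails and the new
  // worker is discarded; it never reaches waiting or activated.
  event.waitUntil(
    caches.open('app-shell-v1')
      .then(cache => cache.addAll(['/index.html']))
  );
  // Safe even though precaching may still fail: skipWaiting() only
  // takes effect once every install handler has succeeded.
  self.skipWaiting();
});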
The one thing that you should be careful about is using skipWaiting() when you are also using lazy-loading of versioned, precached assets. As the article warns:
Caution: skipWaiting() means that your new service worker is likely controlling pages that were loaded with an older version. This means some of your page's fetches will have been handled by your old service worker, but your new service worker will be handling subsequent fetches. If this might break things, don't use skipWaiting().
Because lazy-loading precached, versioned assets is a much more common thing to do in 2018, Workbox does not call skipWaiting() for you by default. It's up to you to opt-in to using it.
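If you do opt in, a common alternative to calling skipWaiting() unconditionally (also described in the lifecycle article) is to let the page decide when to activate the new worker by messaging it. A sketch, where the 'SKIP_WAITING' message type is just an assumed convention:
// In the service worker:
self.addEventListener('message', event => {
  if (event.data && event.data.type === 'SKIP_WAITING') {
    self.skipWaiting();
  }
});

// On the page, e.g. after the user accepts a "refresh to update" prompt:
navigator.serviceWorker.getRegistration().then(reg => {
  if (reg && reg.waiting) {
    reg.waiting.postMessage({ type: 'SKIP_WAITING' });
  }
});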

Related

Windows GlobalSystemMediaTransportControlsSession Events Not Fired in Service

I'm trying to listen to events dispatched by GlobalSystemMediaTransportControlsSession in a Windows Service (in Rust).
I made a reproduction repository with instructions.
The most important part is this:
// Fires whenever this session's playback state changes
// (play/pause/stop); signal the waiting receiver when it does.
session.PlaybackInfoChanged(TypedEventHandler::new(move |_, _| {
    tx.send(()).ok();
    Ok(())
}))?;
Here a new event handler is created for every open application that emits SMTC metadata, listening for changes to playback state, i.e. play/pause/stop events.
Once the event is fired, the application terminates (rx receives an event).
This works fine in a "regular" executable (e.g. by running cargo run).
However, once I'm running this as a service (through nssm*), no event is emitted (the log-file is not created/written to).
I couldn't find any documentation by Microsoft related to services and events. Is there a workaround, am I doing something wrong or is this known to not be supported?
(*) I'm using nssm here, but the same happens when I register the executable directly as a Windows service; that, however, would add unnecessary service-management code to the reproduction.
So I'd guess the problem is that the executable doesn't run under the user account.
But then I'm wondering why I can get the sessions, and even their metadata, in the first place, with no error thrown or emitted.

Conditionally restart a service

I've just learned how to use notifications and subscriptions in Chef to carry out actions such as restarting services if a config file is changed.
I am still learning Chef, so I may just not have got to this section yet, but I'd like to know how to perform the actions conditionally.
E.g. 1: if I change a config file for my stand-alone Apache server, I only want to restart the service if we are outside core business hours, i.e. the current local time is between 6pm and 6am. If we are in core business hours, I still want the restart to happen, but at a later time, outside core hours.
E.g. 2: if I change a config file for my load-balanced Apache server cluster, I only want to restart the service if a) the load balancer service status is "running" and b) all other nodes in the cluster have their Apache service status as "running", i.e. I'm not taking down more than one node in the cluster at once.
I imagine we might need to put the action in a Ruby block that either loops until the conditions are met, or sets a flag, or creates a scheduled task to execute later, but I have no idea what to look for to learn how best to do this.
I guess this topic is kind of philosophical. For me, Chef should not have state or logic beyond the current node and run. If I want to restart at a specific time, I would create a cron job with a conditional and just set the condition with Chef (something like Debian's /var/run/reboot-required). Then crond would trigger the restart.
For your second example, the LB should have no trouble dealing with one restarting Apache backend and failing over to the others. Given that Chef runs regularly with a randomized delay called "splay", the probability that no backend is reachable is very low, even with only two backends. That said, reloading may be the better option.

How to clear a browser cached service worker when the old site is no longer accessible?

I have built a new site for a customer and taken over managing their domain and using a new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who had previously visited the site keep seeing the old one, since it all loads from the cache. The old site no longer exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue and know of a way to resolve it? Note, asking the users to delete the service worker from their browser won't work.
You can use cache busting to achieve this. As Keycdn puts it:
Cache busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn't retrieve the old file from cache but rather makes a request to the origin server for the new file.
In case you want to update the service worker itself, you should know that an update is triggered if any of the following happens:
A navigation to an in-scope page.
Functional events such as push and sync, unless there's been an update check within the previous 24 hours.
Calling .register() only if the service worker URL has changed. However, you should avoid changing the worker URL.
These update triggers are covered in more detail in "Updating the service worker".
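You can also trigger an update check manually from the page; a minimal sketch, assuming a service worker registration has already resolved:
navigator.serviceWorker.ready.then(reg => {
  // Re-fetches the worker script and installs it if it differs byte-for-byte.
  reg.update();
});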
Maybe using the Clear-Site-Data header would be the most thorough solution.
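For example, a hypothetical Express middleware on the new hosting could send that header with every response; the "storage" directive clears service worker registrations along with other origin storage in supporting browsers:
// Hypothetical Express setup; `app` is an existing Express application.
app.use((req, res, next) => {
  res.set('Clear-Site-Data', '"cache", "storage"');
  next();
});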

My Polymer project is not up to date

I have been working on a Polymer web app which I started in Polymer 1.0.
My problem is that even though I push new code, the web app sometimes still serves the old version. To work around this I disabled the service worker (to avoid caching) and added timestamps to my back-end APIs, but I am still facing the same problem. Please suggest a solution. Also, some elements sometimes don't respond or render.
Thanks in advance.
When you push new versions of your code, the cached copies of those resources in your users' browsers are not automatically updated. And I believe your service worker is coded to serve the cached resources, so your new versions never get served.
In order to serve the new versions, you need to make the service worker update its cached resources. This can be done by making the service worker cache the resources again (thus caching the new versions this time).
This can be done by making a change to your service worker file (even a single-character change will do!). Once a user's browser sees that the service worker has changed, it will download the updated service worker and run its install phase, thus caching the new versions of your resources.
If you can't decide what "change" to make in your service worker file, simply changing the cache name will do. Make sure to do this every time you push new versions of your resources.
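As a sketch of that approach in a hand-rolled service worker (the cache name and file list below are illustrative): bumping the cache name changes the file byte-for-byte, which triggers a re-install, and the activate handler then drops the stale cache:
const CACHE_NAME = 'app-cache-v2'; // bump this on every deploy

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache =>
      cache.addAll(['/index.html', '/src/my-app.js'])
    )
  );
});

self.addEventListener('activate', event => {
  // Delete caches left over from previous deploys.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(
        keys.filter(key => key !== CACHE_NAME).map(key => caches.delete(key))
      )
    )
  );
});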

Checking in OSGi if a bundle is still operating normally

I'm currently making a watchdog to check if all bundles in a pipeline are still functioning properly. (This will be in a distributed environment so failure can be a network failure, software failure, one of the servers failing, ...)
Because a bundle can be bound to an arbitrary number of services, the check will happen recursively using the following methodology:
START at the first step in the pipeline
Use getServicesInUse to get the service references of the next step
use getBundle() on the gathered ServiceReference objects
REPEAT until we arrive at the bundle we want to stop at
That way I can get all the bundle objects of the pipeline (I assume). Now, to check whether they are functioning correctly (or just whether they are still reachable), I was wondering if
Bundle b = ...;
if (b.getState() == Bundle.ACTIVE) ...;
will do the trick? Of course, I would also surround this with the necessary try/catch clauses to detect hardware/network failures.
Can you clarify what you mean by "all bundles in a pipeline"?
You are right that a bundle can provide and consume zero or more services, but if I were to create a watchdog for an OSGi system I would use one of two approaches:
If the nodes in your distributed system provide mainly REST services, I would write a separate "watchdog" program that monitors these REST services to see if they still respond (on any of the nodes in my distributed system). You can either make "real" calls or just issue a HEAD request and see if you get a response (see the sketch after this list).
If the nodes in your distributed system provide mainly OSGi services, I would write a watchdog bundle and deploy that to each node. I would then add a REST endpoint to my watchdog to allow me to monitor it remotely (by another watchdog, similar to approach #1).
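As a sketch of the first approach (Node.js 18+, with illustrative endpoint URLs), the watchdog could simply poll each node:
// Illustrative health endpoints on the nodes of the distributed system.
const nodes = ['http://node1:8080/health', 'http://node2:8080/health'];

async function checkNodes() {
  for (const url of nodes) {
    try {
      // Node 18+ ships a global fetch.
      const res = await fetch(url, { method: 'HEAD' });
      const status = res.ok ? 'up' : `unhealthy (HTTP ${res.status})`;
      console.log(`${url}: ${status}`);
    } catch (err) {
      console.log(`${url}: unreachable (${err.message})`);
    }
  }
}

// Poll every 30 seconds.
setInterval(checkNodes, 30000);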
Checking the active state of a bundle will tell you nothing. Bundles will remain active once started, but the services they provide could be unresponsive.
