I have been working on a Polymer web app, which I started in Polymer 1.0.
My problem is that even though I push new code, the web app sometimes still serves the old version. To solve the problem I disabled the service worker (to avoid caching) and added timestamps to my back-end APIs, but I am still facing the same problem. Please suggest a solution. Also, some elements sometimes don't respond or render.
Thanks in advance.
When you push new versions of your code, the cached copies of those resources in your users' browsers are not automatically updated. I believe your service worker is coded to serve the cached resources, so your new versions never get served.
In order to serve the new versions, you need to make the service worker update its cached resources, i.e. cache the resources again (picking up the new versions this time).
You trigger this by making a change in your service worker file (even a single-character change will do!). Once a user's browser sees that the service worker has changed, it downloads the updated service worker and runs its install phase, caching the new versions of your resources.
If you can't decide what "change" to make in your service worker file, simply changing the cache name will do. Make sure to do this every time you push new versions of your resources.
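For example, here is a minimal sketch of that pattern (the cache name and the asset list, like /src/my-app.js, are hypothetical; substitute your own):

// sw.js — bump CACHE_NAME on every deploy so install re-caches everything
const CACHE_NAME = 'my-app-v2'; // was 'my-app-v1' in the previous deploy

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache =>
      cache.addAll(['/', '/index.html', '/src/my-app.js'])
    )
  );
});

self.addEventListener('activate', event => {
  // Delete caches left over from older versions.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(key => key !== CACHE_NAME).map(key => caches.delete(key)))
    )
  );
});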
Related
I have built a new site for a customer, taken over managing their domain, and moved the site to new hosting. The previous site and hosting have been completely taken down.
I am running into a major issue that I am not sure how to fix. The previous developer used a service worker to cache and load the previous site. The problem is that users who had previously visited the site keep seeing the old one, since it all loads from a cache. The old site no longer even exists, so I have no way of adding any JavaScript to remove the service worker from their browsers unless they hit the new site.
Has anyone ever had this issue, and does anyone know of a way to resolve it? Note that asking the users to delete the service worker from their browsers won't work.
You can use cache busting to achieve this. As per Keycdn:
Cache busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn't retrieve the old file from cache but rather makes a request to the origin server for the new file.
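As a hedged illustration (the file name and version numbers are made up), cache busting typically means changing each asset's URL on every release:

<!-- Before the release -->
<script src="/js/app.js?v=1.0.0"></script>

<!-- After the release: the changed identifier bypasses the cached copy -->
<script src="/js/app.js?v=1.0.1"></script>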
In case you want to update the service worker itself, you should know that an update is triggered if any of the following happens:
A navigation to an in-scope page.
A functional event such as push or sync, unless there's been an update check within the previous 24 hours.
Calling .register(), but only if the service worker URL has changed. However, you should avoid changing the worker URL.
Updating the service worker
Maybe using the Clear-Site-Data header would be the most thorough solution.
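For example, a minimal sketch assuming the new site runs on Node/Express (the middleware is illustrative, but the Clear-Site-Data header with the "cache" and "storage" directives does make supporting browsers wipe caches and service worker registrations for the origin):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Ask the browser to drop its caches and storage (including any
  // service worker registrations) for this origin.
  res.set('Clear-Site-Data', '"cache", "storage"');
  next();
});

app.listen(3000);

Note that the header only takes effect on responses that actually come from the new server.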
The issue is I can't find any documentation on changing a managed (autoscaling) instance group into an unmanaged instance group with 0 servers. I've looked at Python's google.cloud and googleapiclient libraries without any luck; they both show ways of managing each kind of group individually, but not converting one into the other. service.instanceGroupManagers().resize() was also a no-go.
https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/
also treats them individually.
I know they support this, but I can't figure out how to do it without the GUI.
Maybe someone has a better way of doing this. The idea is to have a load balancer with a maintenance splash page behind it, with an RPS of 0 so it gets no traffic. When we want the sites to go down for an update, we drain all the active connections with the built-in drain feature that runs when a server is being deleted. We do this by setting the instance group to autoscaling off (unmanaged) and 0 servers.
If you're using a managed instance group, and all of the images are the same, the options below are available and much simpler.
It does not seem possible to change from a managed instance group to an unmanaged one in any way, so I cannot provide steps for doing this through automation.
It's best to use a rolling update or a canary deployment. You can also use opportunistic or proactive updates. These methods and how to use them (gcloud commands and API examples included) are documented here; a short sketch of the gcloud commands follows the list below.
Rolling update: replaces x instances at a time. For example, with 3 instances, the first instance goes down and is updated; once it is finished, the second goes down to be updated; and once that is finished, the third is updated last. If there are 50 instances, you can specify that 10 at a time be updated, etc.
Canary update: imagine you want to test your new application. Only x of y (e.g. 1 of 3) instances are updated, so some users use the new application while the rest use the old. This lets you test the new application in production without affecting all instances. If the new version runs smoothly, you can roll the update forward (rolling update), or you can roll it back by removing the few instances running the new version.
Proactive update: instances are simply recreated with the new version.
Opportunistic update: if proactive updates are too disruptive, opportunistic updates wait for the autoscaler or some other event that would restart or recreate the instance anyway, and only then create the instance from the new template.
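A short sketch of the corresponding gcloud commands (the group and template names are placeholders; check the flags against the current gcloud reference):

# Rolling update: replace instances with the new template, one at a time.
gcloud compute instance-groups managed rolling-action start-update my-group \
    --version=template=new-template --max-unavailable=1

# Canary update: move only one instance to the new template for testing.
gcloud compute instance-groups managed rolling-action start-update my-group \
    --version=template=old-template \
    --canary-version=template=new-template,target-size=1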
Hope this helps.
I use Workbox to pre-cache assets required to render the app shell, including a basic version of index.html. Workbox assumes that index.html is available in the cache; otherwise page navigation fails, because I have this registered in my Service Worker:
workbox.routing.registerNavigationRoute('/index.html');
I also have the self.skipWaiting() instruction in the install listener:
self.addEventListener('install', e => {
  self.skipWaiting();
});
As I understand it, there are 2 install listeners now:
One that's registered by Workbox for pre-caching assets (including index.html)
One that I registered manually in my Service Worker
Is it possible for self.skipWaiting() to succeed while Workbox's install listener fails? This would lead to a problematic state where assets don't get pre-cached but the Service Worker is activated. Is such a scenario possible and should I protect against it?
I highly recommend "The Service Worker Lifecycle" as an authoritative source of information about the different stages of a service worker's installation and updating.
To summarize some info from that article, as it applies to your question:
The service worker first enters the installing phase, and however many install listeners you've registered, they will all get a chance to execute. As you suggest, Workbox creates its own install listener to handle precaching.
Only if every install listener completes without error will the service worker move on to the next stage, which might either be waiting (if there is already an open client using the previous version of the service worker) or activating (if there are no clients using the previous version of the service worker).
skipWaiting(), if you choose to use it, will bypass the waiting stage regardless of whether or not there are any open clients using the previous version of the service worker.
Calling skipWaiting() will not accomplish anything if any of the install listeners failed, because the service worker will never leave the installing phase. It's basically a no-op.
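In other words, there is little to protect against. A minimal sketch of the two-listener situation (the precache list here is a hypothetical stand-in for Workbox's own install handler):

self.addEventListener('install', event => {
  // Stand-in for Workbox's precaching listener. If any asset fails to
  // cache, this promise rejects and the whole installation fails.
  event.waitUntil(
    caches.open('precache-v1').then(cache =>
      cache.addAll(['/index.html', '/app.js'])
    )
  );
});

self.addEventListener('install', () => {
  // Runs either way, but has no effect unless installation succeeds:
  // a worker that failed to install never reaches the waiting stage,
  // so there is no "waiting" for it to skip.
  self.skipWaiting();
});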
The one thing that you should be careful about is using skipWaiting() when you are also using lazy-loading of versioned, precached assets. As the article warns:
Caution: skipWaiting() means that your new service worker is likely controlling pages that were loaded with an older version. This means some of your page's fetches will have been handled by your old service worker, but your new service worker will be handling subsequent fetches. If this might break things, don't use skipWaiting().
Because lazy-loading precached, versioned assets is a much more common thing to do in 2018, Workbox does not call skipWaiting() for you by default. It's up to you to opt-in to using it.
Thanks, everyone!
Recently I wanted to build a small CMS on Meteor, but I have some questions.
1. Caching: page cache, data cache, etc.
For example, when people search for an article, on the server side:
Meteor.publish('articles', function (keyword) {
  return Articles.find({ keyword: keyword });
});
and on the client:
Meteor.subscribe('articles', keyword);
That's OK, but...
The question is, every time people do this it invokes a Mongo query, which reduces performance.
In other frameworks that use plain HTTP or HTTPS, people can rely on something like Squid or Varnish to cache the page or data, so every time you route to a URL you read data from the cache server. But Meteor is built on SockJS/WebSockets, and I don't know how to cache through the socket. I tried Varnish, but saw no effect.
So maybe Varnish ignores the WebSocket? Is there some method to cache the data, in MongoDB or on the server? Can I add some cache server?
2. Chat
I saw the chatroom example at https://github.com/zquestz/simplechat
But unlike an implementation that uses sockets directly, this example saves the chat messages in MongoDB, so the data flow is message -> Mongo -> query -> people. This invokes Mongo queries too!
With raw sockets you would just keep the socket in the context (or a server-side cache), so the data doesn't go through the DB.
My question is: is there a socket interface in Meteor, so I can do message -> socket -> people? And if not, how does the chatroom example perform in a production environment? (I see it runs slowly...)
With Meteor, you don't have to worry about caching MongoDB queries; Meteor does that for you. Per the docs on data and security:
Every Meteor client includes an in-memory database cache. To manage the client cache, the server publishes sets of JSON documents, and the client subscribes to those sets. As documents in a set change, the server patches each client's cache.
[...]
Once subscribed, the client uses its cache as a fast local database, dramatically simplifying client code. Reads never require a costly round trip to the server. And they're limited to the contents of the cache: a query for every document in a collection on a client will only return documents the server is publishing to that client.
Because Meteor does poll the server every so often to see if the client's cache needs patching, you're probably seeing those polls happening every now and then. But they probably aren't very large requests. Additionally, due to a feature of Meteor called latency compensation, when you update a data source, the client immediately reflects the change without first waiting on the server. This reduces the appearance of performance reduction to the user.
If you have many documents in Mongo, you may also be seeing them all get fetched if you still have the autopublish package enabled. You can fix that by removing it with meteor remove autopublish and writing code to publish only the relevant data instead of the entire database.
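For example, a sketch of a restricted publication (the field names and limit are made up):

// Server: publish only the fields and documents the client needs.
Meteor.publish('articles', function (keyword) {
  return Articles.find(
    { keyword: keyword },
    { fields: { title: 1, summary: 1 }, limit: 50 }
  );
});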
If you really need to manage caching manually, the docs also go into that:
Sophisticated clients can turn subscriptions on and off to control how much data is kept in the cache and manage network traffic. When a subscription is turned off, all its documents are removed from the cache unless the same document is also provided by another active subscription.
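In practice that means keeping the subscription handle around, something like:

// Client: Meteor.subscribe returns a handle that can stop the
// subscription, which evicts its documents from the local cache
// (unless another active subscription also provides them).
var handle = Meteor.subscribe('articles', keyword);

// ...later, when the data is no longer needed:
handle.stop();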
Additional performance improvements to Meteor are currently being worked on, including a DDP-level proxy to support "very large number of clients". You can see more detail on this at the Meteor roadmap.
If you stumbled upon this question not because of a lack of understanding of Meteor's minimongo, but because you're interested in how to cache subscriptions that aren't needed at the moment (but may be needed again in the future, and you don't want to keep their extra DDP overhead between client and server), there are two package options:
https://github.com/ccorcos/meteor-subs-cache
https://github.com/kadirahq/subs-manager
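For instance, with subs-manager (a sketch based on that package's README, assuming it's added with meteor add meteorhacks:subs-manager):

var subs = new SubsManager();

Tracker.autorun(function () {
  // Works like Meteor.subscribe, but recently used subscriptions are
  // cached for a while instead of being stopped immediately.
  subs.subscribe('articles', Session.get('keyword'));
});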
I was creating a mobile app and the database cache was not working, so I used the GroundDB package for Meteor (https://github.com/raix/Meteor-GroundDB); now the database is always available locally whenever I restart the app.
You should also look into Meteor's appcache package to cache the entire app locally.
I'm using System.Runtime.Caching.MemoryCache to simulate a repeated task on a running .NET MVC application deployed on AppHarbor.
Entries are added to the cache using a CacheItemPolicy that contains an AbsoluteExpiration offset and a RemovedCallback that calls a method and re-triggers adding the item to the cache (as described here).
The MemoryCache is populated for the first time in Application_Start. It works fine locally, but doesn't seem to work once deployed on AppHarbor (I also tried HttpRuntime.Cache, with the same result).
My application is running under a CANOE (free) account on AppHarbor that only has one worker. Does this mean that I won't be able to simulate the background task until I upgrade to some paid plan?
Thanks!
Your application has to have visitors every once in a while for this to work. Other than StillAlive, Pingdom is also a good bet for generating requests to your app. You should also take a look at MomentApp. We expect to have background tasks ready shortly.
I don't think upgrading will help. They are working on adding background jobs to AppHarbor, but to my knowledge they aren't available yet.
What about using a service like https://stillalive.com/ to periodically hit a page on your site that then spins up a new thread and starts running your background task? It's available as a free add-on.
I was thinking of doing something like this while waiting for the background task functionality to be available.