Chrome extension MV3 service worker persistent data - browser-extension

I am in the process of migrating a v2 Chrome extension to Manifest V3. I'm looking for input on the best way to maintain persistent data using chrome.storage.
In the v2 version, the background script holds the required persistent data (say tabs, documents, etc.), which gets updated on multiple events. The same logic now needs to be migrated to V3 so that the data persists for the life of the Chrome instance. Is there an event that fires when the service worker is just about to become inactive, so that the data can be written to chrome.storage then? Without such an event, the logic to store the data would be scattered across multiple places in the script, which doesn't seem like the right way to do it.
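As far as I know, MV3 has no reliable "about to become inactive" hook (chrome.runtime.onSuspend exists, but asynchronous work started from it is not guaranteed to complete), so the usual pattern is write-through: persist on every mutation and restore on every service-worker start-up. Note that chrome.storage.session matches the "life of the Chrome instance" requirement, since it survives worker restarts but is cleared when the browser closes. A minimal sketch, with the storage area injected so the pattern is visible without the chrome.* APIs:

```javascript
// Write-through state holder: every mutation is persisted immediately, so it
// does not matter when the service worker is torn down.
class PersistentState {
  constructor(storageArea) {
    this.storage = storageArea; // e.g. chrome.storage.session in the worker
    this.data = {};
  }

  // Call once on every service-worker start-up (MV3 workers restart often).
  async restore() {
    const stored = await this.storage.get('state');
    this.data = stored.state || {};
  }

  get(key) {
    return this.data[key];
  }

  // Persist on every update instead of waiting for a "before inactive" event.
  async set(key, value) {
    this.data[key] = value;
    await this.storage.set({ state: this.data });
  }
}
```

In the worker itself you would pass chrome.storage.session (or chrome.storage.local for data that should also survive a browser restart) as the storage area.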

Related

How to update a shared Core Data store between apps?

My Usecase:
For an automatic export that runs once in a while I built a HelperApp, while the configuration is still done in the MainApp.
My Setup:
MainApp (writes data)
HelperApp (mostly reads data but also writes to very few fields)
both Apps share a single Core Data persistent store, using the same myapp.storedata file in a Group Container
both Apps observe NSManagedObjectContextDidSaveNotification and tell each other through NSDistributedNotificationCenter when their context was saved
this works, so the other App knows when it should update its persistent store / managed object context
both Apps are sandboxed
My Problem:
I cannot pass NSManagedObjectContextDidSaveNotification to the other app through NSDistributedNotificationCenter
How can I either
Cause the Core Data stack of either app to reload its data?
or
Merge changes from the other app so that the MOC gets updated?

Keeping state in sync between server and GUI in realtime

I am looking for a library that will help me keep some state in sync between my server and my GUI in "real time". I have the messaging and middleware sorted (push updates etc.), but what I need is a protocol on top of that which guarantees the data stays in sync within some reasonably finite period: an error, dropped message, or exception might cause the data to go out of sync for a few seconds, but it should resync, or at least know it is out of sync, within a few seconds.
This seems like it should be something that has been solved before but I can't seem to find anything suitable - any help much appreciated
More detail - I have a Rich Client (Silverlight but likely to move to Javascript/C# or Java soon) GUI that is served by a JMS type middleware.
I am looking to re-engineer some of the data interactions along the following lines.
Each user has their own view on several reasonably small data sets for items such as:
Entitlements (what GUI elements to display)
GUI data (e.g. to fill drop down menus etc)
Grids of business data (e.g. a grid of orders)
Preferences (e.g. how the GUI is laid out)
All of these data sets can be changed on the server at any time and the data should update on the client as soon as possible.
Data is changed via the server: the client asks for a change (e.g. cancel a request) and the server validates it against entitlements and business rules, updates its internal data set, and then sends the change back to the GUI. To provide user feedback, an interim state may be set on the GUI ("cancel submitted" or similar) which is then overridden by the server response.
At the moment the workflow is:
User authenticates
GUI downloads the initial data sets from the server (which either loads them from the database or some other business objects it has cached)
GUI renders
GUI downloads a snapshot of the business data
GUI subscribes to updates to the business data
As updates come in the GUI updates the model and view on screen
I am looking for a generalised library that would improve on this
Should be cross language using an efficient payload format (e.g. Java back end, C# front end, protobuf data format)
Should be transport agnostic (we use a JMS style middleware we don’t want to replace right now)
The client should be sent an update when a change occurs to the server-side dataset
The client and server should be able to check for changes to ensure they are up to date
The data sent should be minimal (minimum delta)
Client and Server should cope with being more than one revision out of sync
The client should be able to cache to disk between sessions and then just get deltas on login.
I think the ideal solution would work something like this:
Any object (or object tree) can be registered with the library code (this should work with data/objects loaded via Hibernate)
When the object changes, the library notifies a listener / callback with the change delta
The listener sends that delta to the client using my JMS
The client gets the update and hands it to the client-side version of the library, which updates the client-side version of the object
The client should get sufficient information from the update to be able to decide what UI action needs to be taken (notify user, update grid etc)
The client and server periodically check that they are on the same version of the object (e.g. the server sends the version number to the client) and can remediate if necessary, with the server sending either deltas or a complete refresh.
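A sketch of how that periodic version check and remediation might look (all names here are illustrative, and the "delta" is just a shallow key/value patch):

```javascript
// Server keeps a version counter and a bounded log of recent deltas.
class DeltaServer {
  constructor(maxDeltas = 100) {
    this.maxDeltas = maxDeltas; // how far a client may lag and still be patched
    this.version = 0;
    this.deltas = [];           // retained change log: [{ version, delta }]
    this.state = {};
  }

  apply(delta) {
    Object.assign(this.state, delta);
    this.version += 1;
    this.deltas.push({ version: this.version, delta });
    if (this.deltas.length > this.maxDeltas) this.deltas.shift();
  }

  // The client reports its version; reply with the missing deltas, or a full
  // snapshot when the client is too far behind for the retained log.
  sync(clientVersion) {
    const oldest = this.deltas.length ? this.deltas[0].version : this.version + 1;
    if (clientVersion < oldest - 1) {
      return { type: 'snapshot', version: this.version, state: { ...this.state } };
    }
    return {
      type: 'deltas',
      version: this.version,
      deltas: this.deltas.filter((d) => d.version > clientVersion),
    };
  }
}

// Client applies whatever remediation the server chose.
class DeltaClient {
  constructor() {
    this.version = 0;
    this.state = {};
  }

  receive(msg) {
    if (msg.type === 'snapshot') {
      this.state = { ...msg.state };
    } else {
      for (const d of msg.deltas) Object.assign(this.state, d.delta);
    }
    this.version = msg.version;
  }
}
```

The sync message itself would travel over the existing JMS-style middleware; the library only decides between "send deltas" and "send a refresh".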
Thanks for any suggestions
Wow, that's a lot!
I have a project going on which deals with the synchronization aspect of this in JavaScript on the front end. There is a testing server written in Node.js (it actually was easy once the client was settled).
Basically data is stored by key in a dataset and every individual key is versioned. The Server has all versions of all data and the Client can be fed changes from the server. Version conflicts for when something is modified on both client and server are handled by a conflict resolution callback.
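To make that concrete, here is an illustrative sketch of per-key versioning with a conflict-resolution callback (this is not SyncIt's actual API, just the idea described above):

```javascript
// Per-key versioned store. When local and remote changes collide on a key,
// the conflict-resolution callback decides which value wins.
class VersionedStore {
  constructor(resolveConflict) {
    this.items = new Map(); // key -> { version, value }
    this.resolveConflict = resolveConflict;
  }

  // A local modification bumps the key's version.
  setLocal(key, value) {
    const current = this.items.get(key) || { version: 0 };
    this.items.set(key, { version: current.version + 1, value });
  }

  // Apply an update from the other side. If the local copy is at the same or
  // a newer version than the incoming update, the two sides have diverged;
  // delegate to the callback.
  applyRemote(key, remote) {
    const local = this.items.get(key);
    if (local && local.version >= remote.version) {
      const value = this.resolveConflict(key, local, remote);
      this.items.set(key, {
        version: Math.max(local.version, remote.version) + 1,
        value,
      });
    } else {
      this.items.set(key, { version: remote.version, value: remote.value });
    }
  }

  get(key) {
    const entry = this.items.get(key);
    return entry && entry.value;
  }
}
```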
It is not complete; in fact it only has in-memory stores at the moment, but that will change over the next week or so.
The actual notification/downloading and uploading is out of scope for the library, but you could just use Socket.IO for this.
It currently works with jQuery, Dojo and NodeJS, really it's got hardly any dependencies at all.
The project (with a demo) is located at https://github.com/forbesmyester/SyncIt

Windows Azure: Do we need external persisted storage for a web role having multiple instances?

Do we need to maintain external persisted storage if we have a single web role with multiple instances?
If we deploy a site to Azure with a web role instance count greater than 1, are session state and application state shared among the instances automatically?
Suppose we have created two instances for the web role. If I make a request to the server, suppose Instance1 processes the request and returns the response, saving some data into session state along the way. When I do a postback, suppose that for some reason Instance2 processes my postback request. My question is: how can Instance2 access the session data that was saved during my previous request?
If you use in-proc session state, each instance will have its own session data (not a good thing). You can easily use the new AppFabric Cache Session State provider, which gives you an instance-agnostic storage medium for your session state.
Here's the MSDN info on this. There's also a lab in the Windows Azure Platform Training Kit, called Building Windows Azure Applications with the Caching Service, that walks you through this.

Recommend cache updating strategy

Our site was recently divided into several smaller sites, which are distributed across different IDCs.
One of these sites serves user authentication and other user-related services, the other sites access it through web services.
On every site that fetches data remotely, we make a local cache so that we don't have to go remote every time user information is needed.
What cache updating strategy would you recommend to ensure data integrity?
Since you need the update policy to be close to real time, you definitely need a cache-invalidation notification engine.
There are two possible implementation models for it:
1. Push
The main server sends child servers notification messages like "resourceID=34392 is no longer valid in your cache".
This message should be sent on every data update on the main server.
2. Poll
Each child server asks the main server about a cache item's validity right before serving it to the user.
Of course, in this case the main server should keep a list of objects updated during the last cache-lifetime period, and respond to "was this object updated?" requests very quickly.
As you can see, in both cases your main server should trigger an event on each data change.
In the first case this event is transferred via a 'notification bus' to the child servers, and in the second case it is stored in a recently-updated-objects list.
So both options need some code changes on the main server.
To me, the second option is generally easier to implement, but it very much depends on the software stack you're using.
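As a minimal sketch of the push-style variant, here is what the child-server side might look like (names are hypothetical; fetchRemote stands in for the web-service call to the authentication site):

```javascript
// Child-server cache: the main server's notification bus calls invalidate(id)
// on every data change, and the next request for that resource falls through
// to a remote fetch.
class ChildCache {
  constructor(fetchRemote) {
    this.fetchRemote = fetchRemote; // async call to the main server
    this.cache = new Map();
  }

  // Wired to the notification bus: "resourceID=... is no longer valid".
  invalidate(resourceId) {
    this.cache.delete(resourceId);
  }

  async get(resourceId) {
    if (!this.cache.has(resourceId)) {
      this.cache.set(resourceId, await this.fetchRemote(resourceId));
    }
    return this.cache.get(resourceId);
  }
}
```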

When should I load data from isolated storage for Windows Phone 7?

Microsoft's best practices state:
If an application relies on data from isolated storage, you should not load this data in the Launching event handler or in the Activated event handler. Disk operations can take several seconds and these events are called before the application is loaded and active, so accessing Isolated Storage in these handlers results in a long wait time as the application loads. Instead, you should load data from Isolated Storage asynchronously, after the application has loaded.
Why is this, and when should data be read from isolated storage?
What I want to load is the user's username/password, if they have been persisted to isolated storage, so they can be prefilled on the login screen. When should this action take place?
thanks,
Mark
Firstly, don't store the password. Ever! Anywhere! Store a salted hash of the password. If you need to store this to pass to a webservice (or similar) have the webservice return a token on successful login and store that instead.
Now, your actual question.
You've answered the first part of it yourself though.
Because you want the application to be responsive when it launches, you should perform time-consuming operations off the UI thread.
Data should be loaded from and saved to IsolatedStorage at times which are most appropriate to the application, the volume of data and the frequency with which it is needed or updated.
In your specific instance, I wouldn't expect the retrieval of two strings from isolated storage to be slow at all, so I would perform their retrieval in the Loaded() event of the page in question.
If you were just retrieving a username and password I would consider using IsolatedStorageSettings to persist these.
If you also had lots of other details to store and needed these at about the same time you may want to store them all together so you could retrieve them all together.
The motivation to defer loading of data from isolated storage until the app is loaded, and to do it asynchronously, is two-fold.
There is a lot of work being done to load the app on these resource-constrained devices. If it's practical for your app to do so, deferring any actions until after the app is responsive will improve the user experience. Note also that you have a certification requirement of 5 seconds to show something and 20 seconds to be responsive, and some devices are slower than others.
Async is good for any non-trivial operation, so that you don't block the UI thread and cause the app to appear unresponsive.
It's worth keeping in mind that the phone may be doing things in the background that you haven't considered in your normal low-load testing scenarios, such as syncing mail.
I concur with Matt, that loading 2 strings, for the sake of login credentials, is probably not going to cause you a performance issue.
If you wanted to be ultra diligent, you could load that data asynchronously after the login page is loaded and have the controls disabled until such time as that data is retrieved.
