Sharing singleton flux stores in React - reactjs-flux

The general gist I've gotten with Flux is that stores should always be singletons. In my example I have the following:
A people store which controls the CRUD operations of people, as well as search / filtering.
I now have 2 components, shown at the same time, that make use of this filtering. My issue with the current implementation is that filtering applied in one component also affects the other, because they share the same store.
My current ideas for solutions are:
Have filtering controlled in controller components
Have 2 separate stores that cover each of their domains and have filtering functionality in a shared util

The first solution sounds fine to me. Alternatively, you can implement a buffer, made of a hashtable, to separate temporary filtering results, as if they were sessions.
Pros:
You can alter the shared data, and if several components are looking at the same data, every change will be reflected in all of them.
Cons:
There will be a lot of change events, and you will need to check whether a store's change event is relevant to your component before changing its state, in order to prevent unnecessary render calls.
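A minimal sketch of that buffer idea (all names here are hypothetical, not from the original code): the store keeps one filtered view per "session" key, so two components can filter the same people list independently while the underlying data stays shared.

```javascript
// Hypothetical store: one filtered snapshot per session key,
// backed by a single shared list of people.
class PeopleStore {
  constructor(people) {
    this.people = people;   // shared source of truth
    this.views = new Map(); // sessionKey -> filtered snapshot
  }
  filter(sessionKey, predicate) {
    this.views.set(sessionKey, this.people.filter(predicate));
  }
  getView(sessionKey) {
    // fall back to the unfiltered list if no filter was applied yet
    return this.views.get(sessionKey) || this.people;
  }
}

const store = new PeopleStore([{ name: "Ada" }, { name: "Bob" }]);
store.filter("list-a", p => p.name === "Ada"); // component A filters
console.log(store.getView("list-a").length);   // 1
console.log(store.getView("list-b").length);   // 2 (component B unaffected)
```

Each component passes its own session key on change events, so one component's filtering never leaks into the other's view.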

Related

"Translate" utag.link (tealium tracking function) into _satellite.track (Adobe Launch tracking)

We are migrating Tealium web analytics tracking to Adobe Launch.
Part of the website is tagged by utag.link method, e.g.
utag.link({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
});
and we need to "translate" it into Adobe Launch syntax, to save developers time, e.g.
_satellite.track("event_value",{item1:"item1_value",item2:"item2_value"})
How would you approach it? Is it doable?
Many thanks
Pavel
Okay, this is a bit more complex than it looks. Technically, this answers your question completely: https://experienceleaguecommunities.adobe.com/t5/adobe-experience-platform-launch/satellite-track-and-passing-related-information/m-p/271467
However! This makes the tracking accessible only through Launch/DTM. Other TMSes, or even global environment JS, will end up relying on Launch if they need a piece of that data too. And imagine what happens when, in five years, you want to migrate away from Launch, just as you are now migrating away from Tealium: you will have to do the same needless work again. Had your Tealium implementation been done more carefully, you wouldn't need to waste time on this migration now.
Therefore, I would suggest not using _satellite.track(). I would suggest using pure JS CustomEvents with the payload in detail. Launch natively has triggers for native JS events and can access their payloads through custom JS as event.detail. And even if I need to consume them in GTM, I can deploy a simple event listener there that re-routes all those CustomEvents into data layer events, with their payloads in neat DL vars.
With this in place, you will never need to bother front-end devs to make tracking available to a different TMS, whether for a migration or for parity tracking into a different analytics system.
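As a sketch of this approach (function and field names are hypothetical): translate each utag.link payload into a CustomEvent name plus a detail payload, and dispatch it on document so any TMS can subscribe.

```javascript
// Hypothetical shim: map a Tealium-style payload to CustomEvent
// constructor arguments (event name + detail payload).
function toCustomEventArgs(payload) {
  const { event, ...detail } = payload;
  return { name: event, detail };
}

// In the browser you would then dispatch it like so:
//   const { name, detail } = toCustomEventArgs(payload);
//   document.dispatchEvent(new CustomEvent(name, { detail }));
// and Launch (or any TMS) can listen for the native event.

const args = toCustomEventArgs({
  item1: "item1_value",
  item2: "item2_value",
  event: "event_value",
});
console.log(args.name);         // "event_value"
console.log(args.detail.item1); // "item1_value"
```

Because the event names and payloads are plain DOM events, nothing here depends on Launch, GTM, or any other specific TMS.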
In general, I agree with BNazaruk's answer/philosophy that the best way to future-proof your implementation is to create a generic data layer and broadcast it via custom JavaScript events. Virtually all modern tag managers have a way to subscribe to them and map them to their equivalents of environment variables, event rules, etc.
Having said that, here is an overview of Adobe's current best practice for Adobe Experience Platform Data Collection (Launch), using the Adobe Client Data Layer Extension.
Once you install the extension, you change your utag calls, e.g.
utag.link({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
})
to this:
window.adobeDataLayer = window.adobeDataLayer || [];
window.adobeDataLayer.push({
  "item1" : "item1_value",
  "item2" : "item2_value",
  "event" : "event_value"
});
A few notes about this:
adobeDataLayer is the default array name the Launch extension looks for. You can change this to something else within the extension's config (though Adobe does not recommend it).
You can keep the current payload structure you used for Tealium and work with that, though longer term you should consider restructuring your data layer. Things get a little complicated when dealing with Tealium's data layer syntax/conventions vs. Launch's. For example, Tealium's convention allows multiple comma-delimited events in the event string, whereas an Event Rule in Launch expects a single event in the string. There are workarounds for this (ask a separate question if you need help), but again, the best long-term path is to change the structure of the data layer to something more standard.
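One possible workaround for the comma-delimited events (a sketch of the idea, not the extension's built-in behaviour; pushTealiumPayload is a hypothetical helper): split Tealium's event string and push one data layer entry per event, so each Launch Event Rule sees a single event name.

```javascript
// Hypothetical helper: fan a Tealium-style payload with a
// comma-delimited event string out into one push per event.
function pushTealiumPayload(dataLayer, payload) {
  const events = (payload.event || "")
    .split(",")
    .map(s => s.trim())
    .filter(Boolean);
  events.forEach(event => dataLayer.push({ ...payload, event }));
  return dataLayer;
}

const dl = []; // stands in for window.adobeDataLayer
pushTealiumPayload(dl, { item1: "a", event: "cart_add,cart_open" });
console.log(dl.length);   // 2
console.log(dl[1].event); // "cart_open"
```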
Then, within Launch, you can create Data Elements to map to a given data point passed in the adobeDataLayer.push call.
Meanwhile, you can create a Rule with an event that listens for the pushed data based on various criteria; a common choice is to listen for a Specific Event corresponding to the event value you pushed. In the Rule's Conditions and Actions you can then reference the Data Elements you made: for example, a Condition that fires the Rule when event equals "event_value" and item2 equals "item2_value", or an Action that sets Adobe Analytics eVar1 to the value of item2.
I would advise removing any TMS dependencies from your platform code and migrating to a generic data layer. That way your developers will have no trouble migrating to another TMS in the future.
See this article about a generic data layer that is not tied to a specific TMS provider: https://dev.to/alcazes/generic-data-layer-1i90

Conditional Incremental builds in Nextjs

Context
I am learning Next.js, a framework for developing React applications quickly by providing many features out of the box, such as Server Side Rendering and Fast Refresh, without any configuration. It also lets you optionally generate some pages statically: they are pre-rendered at build time instead of on demand, by querying the data the page needs at build time. Next.js additionally accepts an optional argument, expressed in seconds, after which the data is re-queried and the page re-rendered. All of this happens at the page level rather than rebuilding the entire site.
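The per-page mechanism described above can be sketched like this (fetchProducts is a hypothetical stand-in for the real data query; in a real project this would live in a page file such as pages/products.js, with getStaticProps exported):

```javascript
// Sketch of Next.js static generation with timed re-rendering
// (Incremental Static Regeneration). fetchProducts is a placeholder.
async function fetchProducts() {
  return [{ id: 1, name: "widget" }]; // stand-in for a DB/API query
}

async function getStaticProps() {
  const products = await fetchProducts(); // runs at build time, not per request
  return {
    props: { products }, // passed to the page component
    revalidate: 60,      // re-query and re-render this page at most every 60s
  };
}
```

The revalidate value is the "optional argument expressed in seconds" mentioned above, and it is exactly the knob that problem (1) below is about.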
Problems
1. We cannot know in advance how frequently the data will change. It may change after one second or after ten minutes; it is impossible to predict, and it is most certainly not a constant number of seconds. With the fixed-interval approach, I might show outdated information because the interval is too long, or I might query the database unnecessarily even though the data hasn't changed.
2. Suppose I have implemented some sort of pagination and want to exploit the fact that most users only visit the first few pages before following a different link. I could statically pre-render the first 1000 pages, so the most-visited pages are served statically while the rest are server-side rendered. But if my data changes frequently, I would have to re-render those 1000 pages at regular intervals, and each page would issue a separate query against the same database or external API, causing far too many round trips. I am not aware of Next.js's internals, but I suspect this is the case because Next.js assumes nothing about the function that pulls the data, and a generic implementation would necessitate it.
Attempted Solution
Both problems can be solved by client- or server-side rendering, since the data would be fetched on demand, but then we lose the benefit of static generation: serving static assets instead of querying the database. I believe static generation is worthwhile when mutations to my data are infrequent, yet we still want to show updated information as quickly as possible once it is available.
If I forget about Next.js for a while, both problems can be solved by spawning a process that listens for mutations to the relevant data and rebuilds only the static assets that need updating, somewhat like how React updates components, but on the server side. However, Next.js offers a lot of functionality that would be difficult to replicate, so I cannot take this approach.
If I want to use Next.js, problem (1) seems impossible to solve given the (perceived?) limitation that Next.js offers only one way to rebuild static pages: re-rendering them after a predetermined time. Problem (2), however, could be addressed with some sort of in-memory cache that pulls all the required data from the data store in one round trip and structures it for every page; each page would then read from this cache instead of the database. But that feels like a hack to me.
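The in-memory cache idea in the last paragraph could look something like this (a sketch under the assumption that all pages are built in one process; loadEverything is a hypothetical stand-in for the single bulk query):

```javascript
// Module-level promise shared by every page's build-time data
// function, so a whole build costs one round trip to the data store.
let cachePromise = null;

async function loadEverything() {
  // stand-in for one bulk query that fetches data for all pages
  return { pages: { 1: ["a"], 2: ["b"] } };
}

function getBuildCache() {
  // first caller triggers the query; later callers reuse the promise
  if (!cachePromise) cachePromise = loadEverything();
  return cachePromise;
}

// each page's data function would then do:
//   const data = await getBuildCache();
//   return { props: { items: data.pages[pageNumber] } };
```

Caching the promise (rather than the resolved value) means concurrent callers share the same in-flight request instead of each issuing their own.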
Questions
Are there other ways to deal with the problems that I might have missed?
Is there a built-in way to deal with problem (1) and (2) in Nextjs?
Is my assessment of attempted solutions and their viability correct?

asp.net: pass data between pages using delegate

I want to know if it is possible to pass data to another page without query strings or Session.
In other words:
Can I do this using delegates or any other way?
You can POST data to another page (this is slightly different from using query strings, but may be too similar for your liking). Any data POSTed to another web form can be read with Request.Form["name_of_control"].
In certain cases I've developed my own approach: generating a GUID and passing it from page to page. Any page can then pull the data structures associated with a given GUID from a static key/value structure. Similar to Session, I suppose, but I had more control over how it worked. It allowed a user to have multiple simultaneous windows/tabs open to my application, each working without affecting or being affected by the others (because each passed around a different GUID). Whatever approach you choose, do consider that users may want to use your application in multiple windows/tabs at the same time.
The right tool for you depends on your needs. Remember, your challenge lies in making HTTP, which is inherently stateless, more stateful. This thread has a very good discussion on the topic: Best Practices for Passing Data Between Pages

Using Core Data as cache

I am using Core Data for its storage features. At some point I make external API calls that require me to update the local object graph. My current (dumb) plan is to clear out all old NSManagedObject instances (regardless of whether they have been updated) and replace them with their new equivalents: a trump merge policy of sorts.
I feel like there is a better way to do this. I have unique identifiers from the server, so I should be able to match them to my objects in the store. Is there a way to do this without manually fetching objects from the context by their identifiers and resetting each property? Is there a way for me to just create a completely new context, regenerate the object graph, and just give it to Core Data to merge based on their unique identifiers?
Your strategy of matching, based on the server's unique IDs, is a good approach. Hopefully you can get your server to deliver only the objects that have changed since the time of your last update (which you will keep track of, and provide in the server call).
In order to update the Core Data objects, though, you will have to fetch them, instantiate the NSManagedObjects, make the changes, and save them. You can do all of this in a background thread (a child context with performBlock:), but you'll still have to round-trip your objects into memory and back to the store. Doing it in a child context on its own thread will keep your UI snappy, but the processing still has to happen.
Another idea: In the last day or so I've been reading about AFIncrementalStore, an NSIncrementalStore implementation which uses AFNetworking to provide Core Data properties on demand, caching locally. I haven't built anything with it yet but it looks pretty slick. It sounds like your project might be a good use of this library. Code is on GitHub: https://github.com/AFNetworking/AFIncrementalStore.

Cache Management with Numerous Similar Database Queries

I'm trying to introduce caching into an existing server application because the database is starting to become overloaded.
Like many server applications we have the concept of a data layer. This data layer has many different methods that return domain model objects. For example, we have an employee data access object with methods like:
findEmployeesForAccount(long accountId)
findEmployeesWorkingInDepartment(long accountId, long departmentId)
findEmployeesBySearch(long accountId, String search)
Each method queries the database and returns a list of Employee domain objects.
Obviously, we want to try and cache as much as possible to limit the number of queries hitting the database, but how would we go about doing that?
I see a couple possible solutions:
1) We create a cache entry for each method call. E.g., for findEmployeesForAccount we would add an entry with the key account-employees-accountId; for findEmployeesWorkingInDepartment, an entry with the key department-employees-accountId-departmentId; and so on. The problem I see with this is that when we add a new employee into the system, we need to ensure it is added to every cached list where appropriate, which seems hard to maintain and bug-prone.
2) We make findEmployeesForAccount a more generic query (with more joins and/or queries, because more information will be required). The other methods then call findEmployeesForAccount and filter out the entries that don't fit their criteria.
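Option 2 might be sketched like this (method names are from the question; the db object and its query counter are hypothetical stand-ins for the real data layer):

```javascript
// Hypothetical data layer: one cached per-account list, with the
// narrower lookups derived from it in memory.
const db = {
  queries: 0,
  loadAllEmployees(accountId) {
    this.queries++; // count round trips for illustration
    return [
      { id: 1, accountId, departmentId: 10 },
      { id: 2, accountId, departmentId: 20 },
    ];
  },
};

const cache = new Map(); // accountId -> full employee list

function findEmployeesForAccount(accountId) {
  if (!cache.has(accountId)) cache.set(accountId, db.loadAllEmployees(accountId));
  return cache.get(accountId);
}

function findEmployeesWorkingInDepartment(accountId, departmentId) {
  // derived from the cached account list: no extra query
  return findEmployeesForAccount(accountId).filter(
    e => e.departmentId === departmentId
  );
}

findEmployeesForAccount(1);              // one DB round trip
findEmployeesWorkingInDepartment(1, 10); // served from the cache
console.log(db.queries); // 1
```

The upside is a single invalidation point per account; the downside, as the question notes, is that the generic query must fetch everything the narrower methods might need.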
I'm new to caching so I'm wondering what strategies people use to handle situations like this? Any advice and/or resources on this type of stuff would be greatly appreciated.
I've been struggling with the same question myself for a few weeks now... so consider this a half-answer at best. One bit of advice that has been working out well for me is to use the Decorator Pattern to implement the cache layer. For example, here is an article detailing this in C#:
http://stevesmithblog.com/blog/building-a-cachedrepository-via-strategy-pattern/
This allows you to literally "wrap" your existing data access methods without touching them. It also makes it very easy to swap the cached version of your DAL for the direct-access version at runtime (which can be useful for unit testing).
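A sketch of that decorator idea in JavaScript (the linked article uses C#; the class and method names here are hypothetical): the cached repository exposes the same method names as the real one, so callers never know which they are talking to.

```javascript
// Hypothetical inner repository: stands in for the real DB-backed DAL.
class EmployeeRepository {
  findEmployeesForAccount(accountId) {
    return [{ id: 1, accountId }]; // pretend this hit the database
  }
}

// Decorator: same interface, but answers from a cache when it can.
class CachedEmployeeRepository {
  constructor(inner) {
    this.inner = inner;
    this.cache = new Map();
    this.hits = 0; // for illustration only
  }
  findEmployeesForAccount(accountId) {
    const key = `account-employees-${accountId}`; // key scheme from option 1
    if (this.cache.has(key)) {
      this.hits++;
      return this.cache.get(key);
    }
    const result = this.inner.findEmployeesForAccount(accountId);
    this.cache.set(key, result);
    return result;
  }
}

const repo = new CachedEmployeeRepository(new EmployeeRepository());
repo.findEmployeesForAccount(42); // miss: delegates to the inner DAL
repo.findEmployeesForAccount(42); // hit: served from the cache
console.log(repo.hits); // 1
```

Swapping the cached version for the direct one is then just a matter of which object you hand to the callers.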
I'm still struggling to manage my cache keys, which seem to spiral out of control when many parameters are involved. Inevitably, something ends up not being properly cleared from the cache and I have to resort to heavy-handed ClearAll() calls that wipe out everything. If you find a solution for cache key management, I'd be interested to hear it, but I hope the decorator-layer approach is helpful.
