How to "bind" a persistent data structure to a GUI in Scala? - user-interface

I need a GUI control to update whenever a persistent data structure (PDS) is updated.
I need to have the PDS updated when the user takes certain actions.
So, for example, an SWT Tree and a simple tree data structure.
There are lots of manual, ugly ways to do this, but this seems like a very common situation, so I suspect there is a clean approach out there.
I've been reading about FRP, Lenses, Actors, etc.; it seems like there should be a simple, clean, effective approach to handling this type of situation.

What I can think of is having a component with a mutable reference to the PDS. Every time it changes the value of that var, the component raises an event carrying the new version of the PDS. Your GUI control can listen to that event and react by redrawing itself with the new info. Another option is for the listener to be the parent of your GUI control, which reacts by creating a new instance of the control; the control then receives the PDS in its constructor and draws itself only once.
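A minimal sketch of that idea, in TypeScript for concreteness since nothing in it is Scala- or SWT-specific (in Scala the holder would be a class wrapping a var plus a listener list; all names here are illustrative):

```typescript
// A holder that owns the single mutable reference to an immutable value.
// Listeners receive each new version when the reference is swapped.
type Listener<T> = (next: T, prev: T) => void;

class ValueHolder<T> {
  private listeners: Listener<T>[] = [];
  constructor(private current: T) {}

  get value(): T { return this.current; }

  // Swap in a new version derived from the old one, then notify listeners.
  update(f: (old: T) => T): void {
    const prev = this.current;
    this.current = f(prev);
    this.listeners.forEach(l => l(this.current, prev));
  }

  subscribe(l: Listener<T>): () => void {
    this.listeners.push(l);
    return () => { this.listeners = this.listeners.filter(x => x !== l); };
  }
}

// Usage: the GUI control subscribes and redraws from each new version.
interface Tree { label: string; children: Tree[] }

const store = new ValueHolder<Tree>({ label: "root", children: [] });
store.subscribe(tree => { /* redraw the tree control from `tree` */ });
store.update(t => ({ ...t, children: [...t.children, { label: "child", children: [] }] }));
```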

A persistent data structure never updates. What you probably have is a reference to a persistent data structure, and that reference is repointed to the new version when you change it. If you want to track incremental change in the PDS, it's going to be awkward. The thing is, at the point where you store the new version of the PDS, you still have the old version. Maybe you can run a diff between the two to produce the incremental change.
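A hedged sketch of such a diff, reusing the Tree shape from the sketch above. One nicety of persistent structures: unchanged subtrees are shared, so reference equality can short-circuit whole branches:

```typescript
// Sketch: diff two versions of the tree to drive incremental GUI updates.
// Persistent structures share unchanged subtrees, so identical references
// can be skipped wholesale. (Assumes children are identified by label.)
function diffChildren(prev: Tree, next: Tree): { added: Tree[]; removed: Tree[] } {
  if (prev === next) return { added: [], removed: [] }; // shared subtree: unchanged
  const prevKeys = new Set(prev.children.map(c => c.label));
  const nextKeys = new Set(next.children.map(c => c.label));
  return {
    added: next.children.filter(c => !prevKeys.has(c.label)),
    removed: prev.children.filter(c => !nextKeys.has(c.label)),
  };
}
```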

Yes, there is a nice and clean approach: ValueModels. It should be pretty easy to implement in Scala (I've found nothing in a quick search). AFAIK there is a Java implementation embedded in Spring Rich Client.
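In essence, a ValueModel is a uniform get/set/subscribe protocol that widgets bind to, and the distinctive piece is the "aspect adaptor" exposing one field of a larger model. A speculative sketch on top of the ValueHolder/store from the sketch above (names are illustrative, not taken from Spring Rich Client):

```typescript
// A ValueModel-style "aspect adaptor": a read/write view onto one field of
// a larger model, so a widget can bind to just that field.
function aspect<T, A>(
  holder: ValueHolder<T>,
  get: (t: T) => A,
  set: (t: T, a: A) => T,
) {
  return {
    get value(): A { return get(holder.value); },
    set value(a: A) { holder.update(t => set(t, a)); },
    subscribe: (l: (a: A) => void) => holder.subscribe(next => l(get(next))),
  };
}

// A text widget binds to just the root label of the tree:
const rootLabel = aspect(store, t => t.label, (t, label) => ({ ...t, label }));
rootLabel.subscribe(label => { /* widget.setText(label) */ });
rootLabel.value = "renamed root";
```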

The way you describe it, it seems that the user takes certain actions inside the GUI, and then both the GUI and the database have to be updated. As long as the database update is a side effect, you could rely entirely on SWT events.

Related

"Translate" utag.link (tealium tracking function) into _satellite.track (Adobe Launch tracking)

We are migrating Tealium web analytics tracking to Adobe Launch.
Part of the website is tagged with the utag.link method, e.g.
utag.link({
  "item1": "item1_value",
  "item2": "item2_value",
  "event": "event_value"
});
and we need to "translate" it into Adobe Launch syntax, to save developers time, e.g.
_satellite.track("event_value",{item1:"item1_value",item2:"item2_value"})
How would you approach it? Is it doable?
Many thanks
Pavel
Okay, this is a bit more complex than it looks. Technically, this answers your question completely: https://experienceleaguecommunities.adobe.com/t5/adobe-experience-platform-launch/satellite-track-and-passing-related-information/m-p/271467
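For reference, the mechanical translation is simple given the two call shapes in the question; a hedged sketch of a shim that keeps existing utag.link callers working (it assumes _satellite is already loaded on the page):

```typescript
// Hedged sketch: forward existing utag.link() calls to Launch, using the
// mapping shown in the question (event name first, remaining keys as detail).
declare const _satellite: { track(name: string, detail?: object): void };

(window as any).utag = (window as any).utag || {};
(window as any).utag.link = (data: { event: string; [key: string]: string }) => {
  const { event, ...detail } = data;
  _satellite.track(event, detail);
};
```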
Hooowever! This will make the tracking accessible only to Launch/DTM. Other TMSes, or even global JS on the page, will end up relying on Launch if they need a piece of that data too. And imagine what happens when, in five years, you want to migrate away from Launch the way you're migrating away from Tealium now: you'll have to do the same needless work again. If your Tealium implementation had been done more carefully, you wouldn't need to waste your time on this migration now.
Therefore, I would suggest not using _satellite.track(). I would suggest using pure JS CustomEvents with the payload in detail. Launch natively has triggers for native JS events and can access their payloads in custom JS via event.detail. And even if I need to use this in GTM, I can deploy a simple event listener there that re-routes all these CustomEvents into dataLayer events, with their payloads in neat dataLayer variables.
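A hedged sketch of both halves; the event name and payload shape are made up for illustration:

```typescript
// Site code broadcasts a TMS-agnostic CustomEvent instead of calling
// _satellite.track() directly.
document.dispatchEvent(new CustomEvent("analytics:track", {
  detail: { event: "event_value", item1: "item1_value", item2: "item2_value" },
}));

// Any TMS can subscribe. For example, a listener deployed in GTM that
// re-routes these events into the dataLayer:
document.addEventListener("analytics:track", (e: Event) => {
  const { event, ...payload } = (e as CustomEvent).detail;
  (window as any).dataLayer = (window as any).dataLayer || [];
  (window as any).dataLayer.push({ event, ...payload });
});
```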
Having this, you will never need to bother front-end devs when you need to make tracking available for a different TMS whether as a result of migration or parity tracking into a different analytics system.
In general, I agree with BNazaruk's answer/philosophy that the best way to future-proof your implementation is to create a generic data layer and broadcast it via custom JavaScript events. Virtually all modern tag managers have a way to subscribe to these and map them to their equivalent of environment variables, event rules, etc.
Having said that, here is an overview of Adobe's current best practice for Adobe Experience Platform Data Collection (Launch), using the Adobe Client Data Layer Extension.
Once you install the extension, you change your utag calls, e.g.
utag.link({
  "item1": "item1_value",
  "item2": "item2_value",
  "event": "event_value"
});
to this:
window.adobeDataLayer = window.adobeDataLayer || [];
window.adobeDataLayer.push({
  "item1": "item1_value",
  "item2": "item2_value",
  "event": "event_value"
});
A few notes about this:
adobeDataLayer is the default array name the Launch extension will look for. You can change this to something else within the extension's config (though Adobe does not recommend this, because reasons).
You can keep the current payload structure you used for Tealium and work with that, though longer term you should consider restructuring your data layer. Things get a little complicated when dealing with Tealium's data layer syntax and conventions vs. Launch's. For example, Tealium's convention allows multiple comma-delimited events in the event string, while Event Rules in Launch expect a single event in the string. There are workarounds for this (one is sketched below; ask a separate question if you need more), but again, the best long-term path is to restructure the data layer into something more standard.
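One possible shim for the comma-delimited case, purely illustrative and not Adobe-endorsed: split the event string and push one adobeDataLayer entry per event, so each can match its own Specific Event rule:

```typescript
// Split Tealium's comma-delimited event string and push one
// adobeDataLayer entry per event.
function pushTealiumStyle(data: { event: string; [key: string]: string }): void {
  const w = window as any;
  w.adobeDataLayer = w.adobeDataLayer || [];
  const { event, ...rest } = data;
  event.split(",").map(e => e.trim()).filter(Boolean)
       .forEach(e => w.adobeDataLayer.push({ event: e, ...rest }));
}

pushTealiumStyle({ event: "cart_add,cart_view", item1: "item1_value" });
```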
Then, within Launch, you can create Data Elements to map to a given data point passed in the adobeDataLayer.push call.
Meanwhile, you can make a Rule with an event that listens for the pushed data, based on various criteria; a common setup is to listen for a Specific Event corresponding to the event value you pushed.
Then, in the Rule's Conditions and Actions, you can reference the Data Elements you made. For example, you can trigger the Rule if event equals "event_value" (above) AND item2 equals "item2_value", or add an Action that sets Adobe Analytics eVar1 to the value of item2.
I would advise removing any TMS dependencies from your platform code and migrating to a generic data layer. That way your developers will have no trouble migrating to another TMS in the future.
See this article about a generic, TMS-agnostic data layer: https://dev.to/alcazes/generic-data-layer-1i90

Novice question about structuring events in Tcl/Tk

Suppose one is building a desktop program with a semi-complex GUI, especially one in which users can open multiple instances of identical GUI components, such as a "project" GUI that permits several projects to be open concurrently within the main window. Is it good practice to push the event listeners further up the widget hierarchy and use the event detail to determine which widget the event took place on, as opposed to placing event listeners on each individual widget?
For example, when doing something similar in a web browser, there were no event listeners on any individual project GUI elements; the listeners were on the parent container that held the instances of each project GUI. A project had multiple tabs within its GUI, but only one tab was visible within a project at a time, and only one project was visible at any one time. So it was fairly easy to use classes on the HTML elements and then the e.matches() method on the event.target to act upon the currently visible tab within the currently visible project, in a manner independent of which project was visible. Without any real performance testing, my unqualified impression as an amateur was that having as few event listeners as possible was more efficient, and I got most of that from reading information that wasn't very exact.
I read recently in John Ousterhout's book that Tk applications can have hundreds of event handlers and wondered whether or not attempting to limit the number of them as described above really makes any difference in Tcl/Tk.
My purpose in asking this question is solely to understand events better in order to start off the coding of my Tcl/Tk program correctly and not have to re-code a bunch of poorly structured event listeners. I'm not attempting to dispute anything written in the mentioned book and don't know enough to do so if I wanted to.
Thank you for any guidance you may be able to provide.
Having hundreds of event handlers is usually just a sign that there are a lot of different events possibly getting sent around. Since you usually (but not always) try to specialize a binding to be as specific as possible, the actual event handler is usually really small, but it might call a procedure to do the work. That tends to work out well in practice. My own rule of thumb is that if the handler is not a simple call, I'll put in a helper procedure; it's easier to debug them that way. (The main exception to my rule is when I want to generate a break.)
There are four levels you can usually bind on (plus more widget-specific ones for canvas and text):
The individual widget. This is the one that you'll use most.
The widget class. This is mostly used by Tk; you'll usually not want to change it because it may alter the behaviour of code that you just use. (For example, don't alter the behaviour of buttons!)
The toplevel containing the widget. This is ideal for hotkeys. (Be very careful though; some bindings at this level can be trouble. <Destroy> is the one that usually bites.) Toplevel widgets themselves don't have this, because of rule 1.
all, which does what it says, and which you almost never need.
You can define others with bindtags… but it's usually not a great plan as it is a lot of work.
The other thing to bear in mind is that Tk supports virtual events, <<FooBarHappened>>. They have all sorts of uses, but the main one to note in a complex application is defining higher-level events that are triggered (perhaps only occasionally) by some sequence of low-level events, and that widgets other than the originator may wish to observe.

Is it possible to import old events in a new EventSourcing system

I'm currently trying to choose a technical solution for this problem: how to import, (why not) replay, and access a list of events (from various internal and external sources) in a system adapted to this?
Event sourcing seems to be a good solution for that, but I can't find out whether it is possible to import old events.
I should say that I can receive old events at any time; for me the important point is not to keep the states of the objects but to store the events themselves, and then hand them to the apps that need them.
Thanks for your help!
Of course you can import old events into a new system.
I have had a similar problem: several event stores with different event structures (MongoDB, SQL, SQL with JSON text fields). At the end of our research we decided to replicate all events (transforming them into a common structure: timestamp, aggregateName, etc.) into a new event store. Now all applications work against this new event store.
The only thing you must keep in mind is ongoing, reactive replication of events from the old stores if they are not read-only.
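A hedged sketch of that replication step: one adapter per legacy store maps its native shape onto a common envelope before the event is appended to the new store. The envelope fields follow the answer (timestamp, aggregateName, etc.); everything else is assumed:

```typescript
// The common event envelope the new store accepts.
interface EventEnvelope {
  eventId: string;
  aggregateName: string;
  aggregateId: string;
  type: string;
  timestamp: string; // ISO-8601
  payload: unknown;
}

// One adapter per legacy source maps its native shape onto the envelope;
// the field names on `doc` are purely illustrative.
function fromLegacyMongoDoc(doc: any): EventEnvelope {
  return {
    eventId: String(doc._id),
    aggregateName: doc.aggregate,
    aggregateId: String(doc.aggregateId),
    type: doc.kind,
    timestamp: new Date(doc.ts).toISOString(),
    payload: doc.body,
  };
}
```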

Using Core Data as cache

I am using Core Data for its storage features. At some point I make external API calls that require me to update the local object graph. My current (dumb) plan is to clear out all instances of the old NSManagedObjects (regardless of whether they have been updated) and replace them with their new equivalents; a trump merge policy of sorts.
I feel like there is a better way to do this. I have unique identifiers from the server, so I should be able to match them to my objects in the store. Is there a way to do this without manually fetching objects from the context by their identifiers and resetting each property? Is there a way for me to just create a completely new context, regenerate the object graph, and just give it to Core Data to merge based on their unique identifiers?
Your strategy of matching, based on the server's unique IDs, is a good approach. Hopefully you can get your server to deliver only the objects that have changed since the time of your last update (which you will keep track of, and provide in the server call).
In order to update the Core Data objects, though, you will have to fetch them, instantiate the NSManagedObjects, make the changes, and save them. You can do this all in a background thread (child context, performBlock:), but you'll still have to round-trip your objects into memory and back to store. Doing it in a child context and its own thread will keep your UI snappy, but you'll still have to do the processing.
Another idea: In the last day or so I've been reading about AFIncrementalStore, an NSIncrementalStore implementation which uses AFNetworking to provide Core Data properties on demand, caching locally. I haven't built anything with it yet but it looks pretty slick. It sounds like your project might be a good use of this library. Code is on GitHub: https://github.com/AFNetworking/AFIncrementalStore.

Generating UI from DB - the good, the bad and the ugly?

I've read a statement somewhere that generating a UI automatically from the DB layout (or business objects, or whatever other business layer) is a bad idea. I can also imagine a few serious challenges one would have to face in order to make something like this work.
However, I have not seen (nor could I find) any examples of people attempting it. So I'm wondering: is it really that bad? It's definitely not easy, but can it be done with any measure of success? What are the major obstacles? It would be great to see some examples of successes and failures.
To clarify: by "generating UI automatically" I mean that all forms, with all their controls, are generated completely automatically (at runtime or compile time), based perhaps on metadata hints about how the data should be represented. This is in contrast to designing forms by hand, as most people do.
Added: Found this somewhat related question
Added 2: OK, it seems that one way this can get pretty fair results is if enough presentation-related metadata is available. For this approach, how much would be "enough", and would it be any less work than designing the form manually? Does it also provide greater flexibility for future changes?
We had a project which would generate the database tables/stored procedures as well as the UI from business classes. It was done in .NET and we used a lot of custom attributes on the classes and properties to make it behave how we wanted. It worked great, and if you manage to follow your own design you can create customizations of your software really easily. We also had a way of plugging in "custom" user controls for some very exceptional cases.
All in all it worked out well for us. Unfortunately it is a commercial banking product and there is no available source.
It's OK for something tiny where all you need is a utilitarian method to get the data in.
For anything resembling a real application, though, it's a terrible idea. What makes for a good UI is the humanisation factor, the bits you tweak to ensure that the machine reacts well to a person's touch.
You just can't get that when your interface is generated mechanically... well, maybe with something approaching AI. :)
Edit, to clarify: UI generated from code/DB is fine as a starting point; it's just a rubbish end point.
Hey, this is not difficult to achieve at all, and it's not a bad idea at all; it all depends on your project's needs. A lot of software products (mind you, not projects but products) depend on this model so they don't have to rewrite their code/UI logic for different client needs: clients can customize their UI the way they want using a designer form in the admin system.
I have used XML to preserve metadata for this sort of thing. Some of the attributes I saved for every field were:
friendlyname (label caption)
haspredefinedvalues (yes for drop-down list / multi-checkbox list)
multiselect (if yes then checkbox list, if no then drop-down list)
datatype
maxlength
required
minvalue
maxvalue
regularexpression
enabled (to show or not to show)
sortkey (order on the web form)
Regarding positioning, I did not care much and simply generated table/tr/td tags one below the other. However, if you want to implement this as well, you can add one more attribute, CssClass, where you define UI-specific properties (look and feel, positioning, etc.); see the sketch below.
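A hedged sketch of that field metadata and the widget-type decision it drives; the attribute names mirror the list above, and the rendering details are illustrative:

```typescript
// Field metadata captured per column, mirroring the attribute list above.
interface FieldMeta {
  name: string;
  friendlyName: string;          // label caption
  hasPredefinedValues: boolean;  // true => render a list control
  multiSelect: boolean;          // checkbox list vs. drop-down list
  dataType: "string" | "number" | "date" | "bool";
  maxLength?: number;
  required: boolean;
  minValue?: number;
  maxValue?: number;
  regularExpression?: string;    // validation pattern
  enabled: boolean;              // to show or not to show
  sortKey: number;               // order on the web form
  cssClass?: string;             // look and feel, positioning
}

// Decide which widget to render for a field, per the rules in the list.
function widgetFor(f: FieldMeta): string {
  if (!f.enabled) return "hidden";
  if (f.hasPredefinedValues) return f.multiSelect ? "checkbox-list" : "drop-down";
  if (f.dataType === "bool") return "checkbox";
  if (f.dataType === "date") return "date-picker";
  return "text-input";
}
```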
UPDATE: Also note that a lot of e-commerce products follow this kind of dynamic UI for entering product information. Their clients can be selling everything under the sun, from furniture to sex toys ;-), so instead of rewriting the code for every different industry they simply let clients enter metadata for product attributes via an admin form :-)
I would also recommend looking at the Entity-Attribute-Value model; it has its own pros and cons, but I feel it can work quite well for your requirements.
In my opinion there are some things you should think about:
Does the customer need a function to customize his UI?
Are there a lot of different attributes or elements?
Is the effort of creating such a "rendering engine" worth it?
I think it's pretty obvious why you should consider these; it really depends on your project whether that kind of model makes sense.
If you want to create a lot of forms that can be customized at runtime, then this model could be pretty useful. Also, if you need to build a lot of smaller tools and use this as a kind of "engine", the effort could be worth it because it can save a lot of time.
With that kind of rendering engine you can automatically add error reporting, value checking, and other things that are always built up with the same pattern. But if you have too many of these elements or attributes, performance can drop rapidly.
Another thing that becomes interesting in bigger projects is that changes which have to occur in every form only have to be made in the engine, not in each form. This can save a LOT of time if there is a bug in the finished application.
In our company we use a similar model for an interface generator between cash-register software (right now I can't remember the right word for it...) and our application, except that it doesn't create a UI but an output file for one of the applications.
We use XML to define the structure, how the values need to be converted, and so on.
I would say that in most cases the data is not suitable for UI generation. That's why you almost always put a layer of logic in between, to interpret the DB information for the user. Another thing is that when you generate the UI from the DB, you end up displaying the inner workings of the system, which is something you normally don't want to do.
But it depends on where the DB came from. If it was created to reflect exactly what the users' goals for the system are, if the users' mental model of what the application should help them with is stored in the DB, then it might just work. But then you have to start at the users' end. If not, I suggest you don't go that way.
Can you look at your problem from an application-architecture perspective? I see you as another database terrorist, trying to solve everything by writing stored procedures. Why have a UI at all? Try doing it all in DB scripts, and consider what kind of composite system you would end up with. When a system serves different businesses, try modularization, selectively discovered components, and restricted sharing of references. The UI should be replaceable and independent of the business layer. When you store this much in the DB, there is a hard dependency on the UI and the system becomes a monolith. How do you implement the MVVM pattern when the UI is generated? Designers like Blend contain lots of features that cannot be replaced by even the most futuristic UI generator, unless your development platform is Notepad only.
There is a hybrid approach where the forms and related artefacts are described in a database to ensure consistency server-side, and then compiled on deploy to ensure efficiency client-side.
A real-life example is the enterprise software MS Dynamics AX.
It has a 'Data' database and a 'Model' database.
The 'Model' stores forms, classes, jobs and every artefact the application needs to run.
Deploying a new software structure used to mean dumping the model database and initiating a CIL compile (CIL stands for Common Intermediate Language, used by Microsoft in .NET).
This approach is suitable for enterprise-wide software and can handle large customizations. But keep in mind that it sets a framework that should be well understood by whoever is going to maintain and customize the application later.
I did this (in PHP/MySQL) to automatically generate sections of a CMS that I was building for a client. It worked OK, but my main problem was that the code generating the forms became very opaque and difficult to understand, and therefore difficult to reuse and modify, so I did not reuse it.
Note that the tables followed strict conventions (naming and so on), which made it possible for the UI to expect particular columns and to infer information from the names of the columns and tables. You need meta information to help the UI display the data.
Generally it can work, but if your UI just mirrors the database, there is probably a lot of room for improvement. A good UI should do much more than mirror a database: it should be built around human interaction patterns and preferences, not around the database structure.
So basically, if you want to be cheap and do a quick-and-dirty interface that mirrors your DB, go for it. The main challenge is finding good-quality code that can do this, or writing it yourself.
From my perspective, it was always a problem to change edit forms when a very simple change was needed in a table structure.
I always had the feeling we had to spend too much time rewriting the CRUD forms instead of developing the useful stuff: processing, reporting and analyzing data, raising alerts for decisions, and so on.
For this reason, I wrote a code generator a long time ago. It made regenerating the forms easy, under one simple restriction: keep the CSS class names. Simple as that!
The UI was always based on very "standard" code, controlled by a custom CSS.
Whenever I needed to change the database structure, and thus update an edit form, I had to regenerate the code and redeploy.
One disadvantage I noticed: changes (customizations, improvements, etc.) made to previously generated code are lost when you regenerate it.
But anyway, the advantage of having so much work done by the code generator was great!
I initially built it for 2000s-era Microsoft ASP (Active Server Pages) and Microsoft SQL Server... so when that technology was superseded by .NET, my code generator became obsolete.
I made something similar for PHP but never finished it...
Anyway, from small experiments I found that generating the code on the fly can be far more helpful (and this approach does not exclude saving the generated code): no worries about database changes, and so on.
So, the next step was to create something that I am very proud to show here, and I think it is a nice resolution for the issue raised in this thread.
I would start with the applicable use cases: https://data-seed.tech/usecases.php.
I have worked to add details on how to use it, but if something is still missing please let me know here!
You can change the database structure and, without writing a line of code, start editing the data; on top of that, an API for CRUD operations is available.
I am still a fan of the code-generator approach, and I think the XML/XSLT technique I used for DATA-SEED is just one flavor of it. I plan to add code-generator functionality.
