Best practices for autosaving drafts?

What is the best strategy for applications that autosave an email before it is sent or save a blog post before it's finished or officially saved? Would it be best to use a separate table in the database for temporary drafts or to have a status column that marks a post as draft or published? I'm not looking for code, just methods, but any other related advice would be welcome as well, like how often to save, etc.

Considering that separate tables for drafts and published articles would be essentially duplicates of each other, I would lean towards just one table with a status column to differentiate between the two.

I do drafting the Wikipedia way: I save the first version, and every modification (triggered by time or by an explicit user command) is saved as a new version. After publication you can delete the chain of draft versions, or keep it.
If you store the data in a database, I think it's good to use the same table (you avoid schema conflicts), and use version/status columns to track each draft's lifecycle.
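As a rough sketch of that idea (assuming MySQL; the table and column names are illustrative, not from the original post):

create table post (
  id int auto_increment primary key,
  post_group_id int not null comment 'groups all versions of one logical post',
  version int not null default 1 comment 'incremented on every autosave',
  status enum('draft', 'published') not null default 'draft',
  title varchar(255),
  body text,
  modified timestamp default current_timestamp on update current_timestamp,
  unique (post_group_id, version)
);

Publishing then just means marking the latest version as 'published'; older rows remain as history until you decide to prune them.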

This applies to more than just emails...
I changed my mind on this one. The best way is to use an is_draft column in your table and store both drafts and valid entities in the same table. This has the advantage that the entity keeps the same id even if it switches in and out of draft state (you might want to edit it after you save it, but temporarily remove a required value). It would be confusing for users collaborating on the same document if the id kept changing, amirite?
You would use is_draft=1 to turn off ORM validation rules, trigger validations, or check constraints, so that an invalid object is allowed to save. Yes, you'd likely have to allow nullable fields in your table.
Process:
1. Try to save the object. Validation fails. Set is_draft=1 and try to save again. It saves. Put a big "DRAFT" on the screen somewhere :)
2. The user fills in the required info. Try to save the object. Validation passes. Set is_draft=0. It saves.
Now, regarding email and blog posts: your server shouldn't try to send or publish anything until the user actually hits the Send/Post button, but that is a different issue really.
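A minimal sketch of the check-constraint variant (assuming MySQL 8.0.16+, which enforces CHECK constraints; the column names are illustrative):

create table email (
  id int auto_increment primary key,
  is_draft tinyint(1) not null default 1,
  subject varchar(255) null,    -- nullable so a draft can omit it
  recipient varchar(255) null,  -- likewise
  body text null,
  -- required fields are only enforced once the row leaves draft state
  constraint chk_not_draft check (
    is_draft = 1 or (subject is not null and recipient is not null)
  )
);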
OLD ANSWER
The problem is that a draft might not be valid, and cannot be saved in the actual table. For example, say your table demands that the subject be not null, but the user hasn't filled it in yet.
One way would be to have a draft table, and store a serialized version of the entity (and its children) in it. PHP's serialize() would be one option, or you could use JSON. When the entity is finally valid, the system would save it to the email (or whatever) table instead, and delete the draft:
sql (MySQL-style):
create table draft (
  id int auto_increment primary key,
  entity varchar(64) not null comment 'this way you can find all drafts of, say, type Email',
  contents longblob not null,
  modified timestamp comment 'this way you can sort by newer drafts',
  modified_by int not null comment 'this way you can filter by the user''s drafts',
  foreign key (modified_by) references user(id)
);
You could also consider a draft_file table for storing attachments or photos for the draft, so you can access them individually:
create table draft_file (
  id int auto_increment primary key,
  draft_id int not null,
  size int not null comment 'bytes',
  mime_type varchar(64) not null,
  file_name varchar(255) not null,
  contents longblob,
  thumbnail blob comment 'this could be an icon for files/documents',
  foreign key (draft_id) references draft(id) on delete cascade
);
So, a user starts composing an email: maybe just types in the body and adds some attachments. Your GUI saves the email to draft, uploads the attachments and saves them to draft_file, then returns the draft id and the download URLs for the files, which you display in your GUI.
The user types in the Subject (To is still blank). Your GUI saves the email again, updating the draft table by id, as it knows the id from the previous step.
The user fills in the To field and hits Send. Your server saves the email to the email table, copies the attachments from draft_file to the email_attachment table, and deletes the draft, preferably within a transaction.
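That final step might look something like this (a sketch only; the email and email_attachment column lists are assumptions, and ? stands for the draft id):

start transaction;
-- insert the now-valid entity (values come from the deserialized draft)
insert into email (subject, recipient, body) values (?, ?, ?);
set @email_id = last_insert_id();
-- copy the attachments across to the real attachment table
insert into email_attachment (email_id, size, mime_type, file_name, contents)
  select @email_id, size, mime_type, file_name, contents
  from draft_file
  where draft_id = ?;
-- remove the draft (draft_file rows go with it via on delete cascade)
delete from draft where id = ?;
commit;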
This allows for long-term drafts and Gmail-style attachment uploads, while maintaining the integrity of your real entity table.


Model-driven PowerApp: Best practice to display subgrid of records with no appropriate primary column name

Background
Each Dataverse table contains a primary name column. When displayed in a subgrid, clicking on the primary name column will navigate to the form so that the user can edit that row. Most subgrids in my application work this way.
The Problem
I have a Course form with a list of participants displayed in a subgrid. The subgrid displays each student's name (as a link) and the grade received in the course. There is no appropriate primary name column for this Participant table. To edit a participant record, the user must select the row in the subgrid, then click the subgrid's Edit button. As a result, this UI is different from all other subgrids in the application, and I know that users will click the student name to try to edit the participant record and be confused when they are presented with the student record.
Am I missing something? Is there a better way to handle this?
It's a problem I face quite often. Here is usually what I would do.
Make sure the Primary Name Column always contains relevant information to the user to be able to quickly identify a record. Sometimes it requires copying information from one or multiple other columns into the primary column.
In your case that would probably mean concatenating the student's name and grade.
How to do that?
Common to all solutions below
Use one of the following solutions to copy the content of one or several fields into the primary column.
Make sure the solution you select also updates the content of the primary name column when one of the copied fields is updated.
Remove or hide the primary column from the form, the name of the record will be displayed at the top of the form anyway and you probably don't want users to play with it.
Display the primary name column in every subgrid.
I would recommend not adding the fields copied into the primary column in the subgrids to avoid confusion.
Solution 1 - Classic Workflow
Create a classic workflow that runs when a record is created / updated
Pros:
Very quick to put in place
Runs synchronously (users will see the name updated in real-time)
Cons:
Not very practical if you need to add business logic (for example, using different fields as the source depending on a condition)
Solution 2 - Power Automate
Create a Flow that runs when a record is created / updated
Pros:
You can implement complex business logic in your Flow
Cons:
Runs asynchronously (users will have to refresh the page after the creation of a record to see the record's name)
According to Power Automate licensing, that flow would certainly be considered an "enterprise flow", and you are supposed to pay $100/month for it. That specific point must be taken with a grain of salt: I have had several discussions with Microsoft about it and they haven't given me a clear answer about what counts as an enterprise flow.
Solution 3 - Plugin
Create a plugin that executes when a record is created / updated
Pros:
You can implement very complex business logic in your plugin
It can run synchronously
Cons:
Pro-code (I list it as a con since Model-Driven Apps are a low-code / no-code approach, but there is nothing wrong with pro-code per se)
Developing a new plugin for each entity where you need this logic is kind of overkill in my opinion. I would consider developing something very generic that would only require some sort of configuration when the logic needs to be applied to a new table.

Loading records into Dynamics 365 through ADF

I'm using the Dynamics connector in Azure Data Factory.
TLDR
Does this connector support loading child records which need a parent record key passed in? For example, if I want to create a contact and attach it to a parent account, I upsert a record with a null contactid, a valid parentcustomerid GUID, and parentcustomeridtype set to 1 (or 2), but I get an error.
Long Story
I'm successfully connecting to Dynamics 365 and extracting data (for example, the lead table) into a SQL Server table
To test that I can transfer data the other way, I am simply loading the data back from the lead table into the lead entity in Dynamics.
I'm getting this error:
Failure happened on 'Sink' side. ErrorCode=DynamicsMissingTargetForMultiTargetLookupField,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=,Source=,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the target column for multi-target lookup field: 'ownerid'.
As a test, I removed ownerid from the list of source columns and it loads OK.
This is obviously a foreign key value.
It raises two questions for me:
Specifically with regards to the error message: if I knew which lookup it needed to use, how could I specify which lookup table it should validate against? There are no settings in the ADF connector that would allow me to do this.
This is obviously a foreign key value. If I only had the name (or business key) for this row, how could I easily look up the foreign key value?
How is this normally done through other APIs, e.g. the Web API?
Is there an XRMToolbox addin that would help clarify?
I've also read some posts that imply that you can send pre-connected data in an XML document so perhaps that would help also.
EDIT 1
I realised that the lead.ownertypeid field in my source dataset is NULL (that's what was exported). It's also NULL if I browse it in various XrmToolBox tools. I tried hard-coding it to systemuser (which is what it actually is in the owner table against the actual owner record) but I still get the same error.
I also notice there's a record with the same PK value in the systemuser table.
So the same record is in two tables, but how do I tell the Dynamics connector which one to use? And why does it even care?
EDIT 2
I was getting a similar message for msauto_testdrive, for customerid.
I excluded all records with customerid = null, and still got the same error.
EDIT 3
This link appears to indicate that I need to set customeridtype to 1 (Account) or 2 (Contact). I did so, but still got the same error.
Also I believe I have the same issue as this guy.
Maybe the ADF connector suffers from the same problem.
At the time of writing, @Arun Vinoth was 100% correct. However, shortly afterwards there was a documentation update (in response to a GitHub issue I raised) that explained how to do it.
I'll document how I did it here.
To populate a contact against a parent account, you need the parent account's GUID. Then you prepare a dataset like this:
SELECT
-- a NULL contactid means this is a new record
CAST(NULL as uniqueidentifier) as contactid,
-- the GUID of the parent account
CAST('A7070AE2-D7A6-EA11-A812-000D3A79983B' as uniqueidentifier) parentcustomerid,
-- customer id is an account
'account' [parentcustomerid#EntityReference],
'Joe' as firstname,
'Bloggs' lastname
Now you can select from this dataset and load it into contact, applying the usual automapping approach in ADF: create datasets without schemas, and perform a copy activity without mapping columns.
This is an ADF limitation with respect to CDS polymorphic lookups like Customer and Owner. Upvote this ADF idea.
The workaround is to use two temporary source lookup fields (team and user in the case of Owner; account and contact in the case of Customer), together with a parallel branch in a Microsoft Flow. Read more here; you can also download the Flow sample to use.
First, create two temporary lookup fields on the entity that you wish to import Customer lookup data into, pointing to the Account and Contact entities respectively.
Within your ADF pipeline flow, you will then need to map the GUID values for your Account and Contact fields to the respective lookup fields created above. The simplest way of doing this is to have two separate columns within your source dataset: one containing Account GUIDs to map and the other, Contact (see the sketch after these steps).
Then, finally, you can put together a Microsoft Flow that performs the appropriate mapping from the temporary fields to the Customer lookup field. First, define the trigger point for when your affected entity record is created (in this case, Contact) and add some parallel branches to check for values in either of these two temporary lookup fields.
Then, if either of these conditions is hit, set up an Update record task to perform a single field update whenever the corresponding temporary lookup field has data in it.
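For the source dataset in the ADF mapping step above, a sketch might look like this (the staging table and the temporary field names new_accountlookup and new_contactlookup are assumptions for illustration):

SELECT
CAST(NULL as uniqueidentifier) as contactid,
-- populate exactly one of the two temporary lookup fields per row
src.account_guid as new_accountlookup,
src.contact_guid as new_contactlookup,
src.firstname,
src.lastname
FROM staging_contacts src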

breeze.js insert parent/child with identity

Simple parent/child scenario, like Order and OrderLineItems. I am inserting a new Order; the OrderID is an identity column (SQL Server). I'm also inserting OrderLineItems in the same SaveChanges transaction. I need to get the new OrderID into the OrderLineItems, but I'm not sure how to do it. I have the appropriate FK relationships set up properly. When I save, I get an error that OrderID is a required field in OrderLineItems.
Will I have to split this out into 2 server calls? First to insert the Order, which will return the OrderID. And then another to insert the OrderLineItems?
The Breeze documentation discusses this topic (key generation) at several points including but not limited to: http://www.breezejs.com/documentation/save-changes, http://www.breezejs.com/documentation/extending-entities and http://www.breezejs.com/documentation/add-new-entity.
The basic idea is that, provided your model and metadata are set up properly, Breeze can assign a temporary id in place of the identity column for use in linking your Order and OrderLineItem entities prior to the save. As part of the save process, Breeze updates these temporary keys to their "real" key values, and updates the local cache as well upon successful completion of the save.

Issues with Data Integrity with RDBMS

Does anyone know about cascade events in a relational database system? How they work, how they help, and whether there are any disadvantages? Thanks.
Cascade events are quite simple, really. For example, say you have a User table with attribute and primary key username, and an Email table with attributes username and email address. Now it's quite likely that we might make username in Email a reference (foreign key) to username in User, because we want every user that has an email to also be in our User table. Now think about what would happen if you deleted a user in User. Should you delete all the matching rows in Email? If not, what do you do? Some DBMSs will just throw an error, saying something like "You mustn't do that! References exist and we don't know what to do with them!". This is where cascade events come in. If the DBMS supports cascading events, you might be given the option to specify whether the DBMS actually throws that error, or instead deletes all the matching rows (matching on username in Email), so there are no "dangling" references. This is called a cascade delete.
There are other cascading options too! Another occurs if we try to update username in User to something different. Without cascading options, we would probably throw an error if there are matching rows in Email. But with cascading options, we have the option to automatically update username in Email with the new username. That is called a cascading update.
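As a sketch, both behaviours are declared where the foreign key is defined (assuming MySQL/InnoDB syntax; the tables follow the example above):

CREATE TABLE user (
  username VARCHAR(64) PRIMARY KEY
);

CREATE TABLE email (
  username VARCHAR(64) NOT NULL,
  email_address VARCHAR(255) NOT NULL,
  PRIMARY KEY (username, email_address),
  FOREIGN KEY (username) REFERENCES user(username)
    ON DELETE CASCADE  -- deleting a user also deletes their email rows
    ON UPDATE CASCADE  -- renaming a user updates the matching rows here
);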
These are two major ones, but by no means the only existing "cascading" options that exist in some DBMS's.
If it helps, think of "cascading" modifications as "recursive" modifications; the terms are synonymous here, and that is what is meant by "cascading". Modifications "cascade" down to other tables that use the same attribute.
Think about the advantages and disadvantages of this feature. We can now specify exactly what we want to happen when we want to have changes "cascade" to attributes in different tables. A possible disadvantage of allowing these features is that we now have the opportunity to cause modifications on a bigger scale than we might like (depending on design). Changing username in User may cause changes in a different table Email, even if we don't mean to!
Hope this helps.

refactoring a database and application due to new requirements

My application manages customers' complaints and has already been deployed to production. Each complaint has a code to identify it (for example "late delivery"), a "department" type (which is essentially the department responsible for that kind of complaint), and another "model" code which identifies the route through the department's employees that the complaint dossier has to follow (first to the HR responsible, then to the HR big boss, finally back to customer care). Each dossier has some common info and can have department-specific info; that's why I need the department code.
For example, customer care gets a complaint about the "rudeness" of a call center operator and opens a dossier with code ABC and type "HR" (there could be more HR dossier types). When customer care has filled in all the info, it forwards the dossier to HR (an email is sent to the user configured in the system as the HR responsible). The HR employee fills in his own section and sends it back to customer care.
Until now each complaint code could have only one department and one model; now requirements have changed and I have two problems:
Some complaints are identified by the same code but might be due to different departments. For example, a complaint about employee rudeness could be sent to the department which rules the call centers or to the department which rules logistics.
I could solve this simply by extending the table's primary key to include the department (hoping they'll not decide that the same code for the same department can follow different routes); changing the application code might be a bit painful, but it can be done:
Is extending a primary key to a composite key a problem in Oracle, or does it have side effects on existing records? The actual primary key is not used as a foreign key anywhere and all fields are filled.
This is a rather more difficult problem (at least for me): the marketing department (the rulers) wants a special dossier. They monitor the time departments take to answer complaints and open a new type of dossier if it exceeds the standard time.
For the above example, if HR always needs 30% more time to complete employee rudeness dossiers, marketing can open an "inquiry" dossier about that complaint code directed to HR.
Now, referring to point 1, I could add a new record for each complaint code with the second part of the key being the marketing code, and associate it with a new model. This is going to double the rows of the table (which is already quite large). I see it as very error-prone when inserting new complaint codes.
I know it's very hard to give an opinion without being able to see the schema and the code, but I would appreciate your opinion anyway.
"Does extending primary keys to
composite keys is a problem in Oracle
or have side effects on existing
records? the actual primary key is not
used as foreign key anywere and all
fields are filled."
Oracle allows us to have composite primary keys. They are not a problem from a relational perspective.
The only objection to primary composite keys is the usual one, that they make foreign key relationships and joins more cumbersome. You say you currently don't have foreign keys which reference this table. Nevertheless I would suggest you define a synthetic (surrogate) primary key using an index, and enforce the composite key as a unique constraint. Because you may well have foreign keys in the future: your very predicament shows that your current data model is not correct, or at least not complete.
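A sketch of that suggestion (Oracle 12c+ identity syntax; the table and column names are assumptions based on the description):

CREATE TABLE complaint_route (
  id             NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  complaint_code VARCHAR2(10) NOT NULL,
  dept_code      VARCHAR2(10) NOT NULL,
  model_code     VARCHAR2(10) NOT NULL,
  -- the old composite key survives as a unique constraint
  CONSTRAINT uq_complaint_dept UNIQUE (complaint_code, dept_code)
);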
"i could add a new record for each
complaint code having the second part
of the key being the marketing code"
Smart keys are dumb. Add a separate column for a marketing code if necessary. This would be populated if Marketing opens their own dossier. I don't see why it needs to be associated with the complaint code or form part of any primary key (other than in the Marketing Code lookup table).
I admit I don't fully understand your data model or business logic, so the following might be wrong. However what I think you want is a table DOSSIERS which can have two dossier types:
normal dossier identified by DEPT_CODE and COMPLAINT_CODE
Marketing dossier which I presume would be identified by DEPT_CODE, COMPLAINT_CODE and MARKETING_CODE.
Unique constraints permit NULL columns, so MARKETING_CODE can be optional. This is another advantage of using one instead of a composite primary key.
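A sketch of that DOSSIERS table (again Oracle syntax; the column types are assumptions):

CREATE TABLE dossiers (
  id             NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  dept_code      VARCHAR2(10) NOT NULL,
  complaint_code VARCHAR2(10) NOT NULL,
  marketing_code VARCHAR2(10),  -- NULL for normal dossiers
  CONSTRAINT uq_dossier UNIQUE (dept_code, complaint_code, marketing_code)
);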
"I see it very error prone for
inserting new complaint codes."
Do you mean creating new complaints? Or new complaint types? Creating new complaints shouldn't be a problem: the process for creating Normal Dossiers will offer a choice of COMPLAINT_CODES where MARKETING_CODE is null, whereas the process for creating Marketing Dossiers will offer a choice of COMPLAINT_CODES where MARKETING_CODE is not null.
If you're talking about adding new complaint types then I suppose the question becomes: does there have to be a separate MARKETING_CODE for each regular COMPLAINT_CODE? I suspect not. In which case, instead of a MARKETING_CODE perhaps you need a CODE_TYPE - values NORMAL or MARKETING.
