All,
I am building a Power Automate flow. I am returning an object that has a relationship to another entity.
I want to get some values from the related entity.
I am attempting to use the "Get Record" connector. The returned object contains just the logical entity name (in this case "opportunities"), but Get Record wants an Entity Name that is the Schema Name ("Working Opportunities").
Big Question: What's the secret to using CDS to get information from a related record in another object?
Little Question: How do I get the Schema Name?
The logical name is the same as the schema name except for casing: the schema name capitalizes the first letter of each word (you can see this clearly in custom entities, where the schema name keeps the publisher prefix, e.g. new_EntityName), while the logical name is all lower case (new_entityname).
You can find the details in the XrmToolBox Metadata Browser or in your solution.
For example, for the Opportunity entity the (logical) Name is opportunity and the Schema Name is Opportunity; the Display Name can be anything and can be changed at any time.
Regarding the related entities, you should use List Records (GetItems_V2), and you can filter by the parent record to get the related child records. Read more
Could you please share a screenshot of the flow and the response, to help you with your requirement?
As suggested by Arun, you could use List Records with a filter query that passes the parent record id, which will be available from dynamic content. See the link below.
https://crmkeeper.com/2019/08/31/cds-list-records-filter-query-using-flow/
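For illustration, here is a hypothetical set of values for the List Records action, assuming the related records are opportunities with a parentaccountid lookup back to the parent (substitute your actual lookup field, and take the parent GUID from dynamic content):

    Entity Name: Opportunities
    Filter Query: _parentaccountid_value eq 'PARENT-GUID-FROM-DYNAMIC-CONTENT'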
Please mark my answer as verified if it was helpful.
I'm using the Dynamics connector in Azure Data Factory.
TLDR
Does this connector support loading child records which need a parent record key passed in? For example, if I want to create a contact and attach it to a parent account, I upsert a record with a null contactid and a valid parentcustomerid GUID, and set parentcustomeridtype to 1 (or 2), but I get an error.
Long Story
I'm successfully connecting to Dynamics 365 and extracting data (for example, the lead table) into a SQL Server table.
To test that I can transfer data the other way, I am simply loading the data back from the lead table into the lead entity in Dynamics.
I'm getting this error:
Failure happened on 'Sink' side. ErrorCode=DynamicsMissingTargetForMultiTargetLookupField,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=,Source=,''Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot find the target column for multi-target lookup field: 'ownerid'.
As a test, I removed ownerid from the list of source columns, and it loaded OK.
This is obviously a foreign key value.
It raises two questions for me:
Specifically with regard to the error message: if I knew which lookup it needed to use, how can I specify which lookup table it should validate against? There are no settings in the ADF connector to allow me to do this.
Given that this is a foreign key value: if I only had the name (or business key) for this row, how can I easily look up the foreign key value?
How is this normally done through other APIs, e.g. the Web API?
Is there an XrmToolBox add-in that would help clarify this?
I've also read some posts that imply that you can send pre-connected data in an XML document so perhaps that would help also.
EDIT 1
I realised that the lead.ownertypeid field in my source dataset is NULL (that's what was exported). It's also NULL if I browse it in various XrmToolBox tools. I tried hard-coding it to systemuser (which is what it actually is in the owner table against the actual owner record), but I still get the same error.
I also notice there's a record with the same PK value in the systemuser table.
So the same record is in two tables, but how do I tell the dynamics connector which one to use? and why does it even care?
EDIT 2
I was getting a similar message for msauto_testdrive for customerid.
I excluded all records with customerid=null, and got the same error.
EDIT 3
This link appears to indicate that I need to set customeridtype to 1 (Account) or 2 (Contact). I did so, but still got the same error.
Also I believe I have the same issue as this guy.
Maybe the ADF connector suffers from the same problem.
At the time of writing, @Arun Vinoth was 100% correct. However, shortly afterwards there was a documentation update (in response to a GitHub issue I raised) that explained how to do it.
I'll document how I did it here.
To populate a contact against a parent account, you need the parent account's GUID. Then you prepare a dataset like this:
SELECT
    -- a NULL contactid means this is a new record
    CAST(NULL AS uniqueidentifier) AS contactid,
    -- the GUID of the parent account
    CAST('A7070AE2-D7A6-EA11-A812-000D3A79983B' AS uniqueidentifier) AS parentcustomerid,
    -- the parent customer is an account (as opposed to a contact)
    'account' AS [parentcustomerid#EntityReference],
    'Joe' AS firstname,
    'Bloggs' AS lastname
Now you can select from this dataset and load into contact, applying the usual automapping approach in ADF: create the datasets without schemas, and perform the copy activity without mapping columns.
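Presumably the same pattern applies to the ownerid lookup from the original question; this is an assumption extrapolated from the parentcustomerid example above (the GUID below is a placeholder):

SELECT
    CAST(NULL AS uniqueidentifier) AS contactid,
    -- the owning user's GUID (placeholder value)
    CAST('00000000-0000-0000-0000-000000000000' AS uniqueidentifier) AS ownerid,
    -- tell ADF which target entity this multi-target lookup points at
    'systemuser' AS [ownerid#EntityReference],
    'Joe' AS firstname,
    'Bloggs' AS lastname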
This is an ADF limitation with respect to CDS polymorphic lookups like Customer and Owner. Upvote this ADF idea.
The workaround is to use two temporary source lookup fields (team and user in the case of Owner; account and contact in the case of Customer) and a parallel branch in a Microsoft Flow to solve this issue. Read more; you can also download the Flow sample to use.
First, create two temporary lookup fields on the entity that you wish to import Customer lookup data into, targeting the Account and Contact entities respectively.
Within your ADF pipeline flow, you will then need to map the GUID values for your Account and Contact fields to the respective lookup fields created above. The simplest way of doing this is to have two separate columns within your source dataset: one containing the Account GUIDs to map and the other, the Contact GUIDs.
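As a sketch, the source dataset might look like this (the staging table and the new_ field names are hypothetical; the two GUID columns map to the two temporary lookup fields):

SELECT
    CAST(NULL AS uniqueidentifier) AS contactid,
    src.account_guid AS new_accountlookup,   -- temporary Account lookup field
    src.contact_guid AS new_contactlookup,   -- temporary Contact lookup field
    src.firstname,
    src.lastname
FROM staging_customer_import src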
Then, finally, you can put together a Microsoft Flow that performs the appropriate mapping from the temporary fields to the Customer lookup field. First, define the trigger point for when your affected entity record is created (in this case, Contact) and add parallel branches to check for values in either of these two temporary lookup fields.
Then, if either of these conditions is hit, set up an Update record task to perform a single field update; for example, setting the Customer lookup from the temporary Account lookup field when that field has data within it.
I have three database views that are mapped in Hibernate as entities.
The entities are in a parent-child relationship (1 parent (A), 2 children(B & C)).
One of the children views (B) uses Oracle's dbms_utility.get_hash_value() to calculate its ID.
This is because it does a UNION over several tables that use different ID sequences and thus the IDs from there may not be unique.
I now have the very puzzling effect that a simple entityManager.find(B.class, id) cannot find the appropriate row.
When I look at the children through a loaded parent (A) entity, I can see that the ID shown in B is completely different from the one in the database. If I use this ID with entityManager.find(B.class, hibernateId), Hibernate finds the appropriate entity.
The database, on the other hand, only returns a value when using the ID shown in the ID column there (and not with the ID Hibernate shows).
Child entity C does not use the hash function and does not show this peculiar behaviour - which means the hash must be responsible.
Does anyone have an idea why?
We found the reason:
Child view B used all of its content-containing columns, concatenated into a string, as input to the hash function.
This included date fields, which were not explicitly formatted when creating the string.
So, when Hibernate selected from the view, it evidently used a different date format than SQL Developer did, and thus produced completely different (but consistent) IDs.
Explicitly formatting the used date fields removed the problem.
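As a sketch of the fix (the view, table and column names here are hypothetical), every date column is now formatted explicitly before it goes into the concatenated hash input, so the hash no longer depends on the session's NLS_DATE_FORMAT:

create or replace view child_b_view as
select dbms_utility.get_hash_value(
           t.code || '|' || to_char(t.created_on, 'YYYY-MM-DD HH24:MI:SS'),
           1,            -- base value for the hash
           power(2, 30)  -- size of the hash table
       ) as id,
       t.code,
       t.created_on
from some_union_of_tables t;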
In Dynamics CRM 2016 (on-prem), I have a custom child entity Parties that has a lookup of type Order. Currently it is working properly and showing the order name, but I want it to show the Order ID.
I tried to resolve this by creating my own view, adding only the Order ID field, and making it the default view for the lookup, but it doesn't meet my requirements.
This is controlled by the lookup view for Orders. The first three fields in the lookup view are shown when selecting records from the lookup field.
You cannot change the position of (or remove) the Name field from the lookup view. That still leaves you with two additional fields to be shown, one of which could be the Order ID.
I know of no supported ways around this (except creating a workflow to copy the Order ID into the Name field, and adding a custom name field instead, if you really insist).
Changing what appears on the form after you make a selection in a lookup field is not possible, unfortunately. The primary attribute is always what gets displayed after a user specifies a lookup value (which, for the Order entity, is the Name attribute). As Henrik's answer mentions, people commonly get around this by copying the value they want to see into the primary attribute using a workflow or a plugin.
I am creating a laboratory database which analyzes a variety of samples from a variety of locations. Some locations want their own reference number (or other attributes) kept with the sample.
How should I represent the columns which only apply to a subset of my samples?
Option 1:
Create a separate table for each unique set of attributes?
SAMPLE_BOILER: sample_id (FK), tank_number, boiler_temp, lot_number
SAMPLE_ACID: sample_id (FK), vial_number
This option seems too tedious, especially as the system grows.
Option 1a: Class table inheritance (link): a tree with the common fields in an internal node/table.
Option 1b: Concrete table inheritance (link): a tree with the common fields in the leaf nodes/tables. (A sketch of both follows below.)
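For illustration, here is roughly what 1a and 1b look like using the sample columns from above (the DDL is only a sketch; column types are assumptions):

-- 1a, class table inheritance: common fields live in a parent table;
-- each subtype table holds only its own columns and shares the primary key
create table sample_base (
    sample_id    integer primary key,
    name         varchar2(120 char),
    date_created date
);

create table sample_boiler (
    sample_id   integer primary key references sample_base,
    tank_number integer,
    boiler_temp number,
    lot_number  varchar2(30 char)
);

-- 1b, concrete table inheritance: each subtype table repeats the common fields
create table sample_acid (
    sample_id    integer primary key,
    name         varchar2(120 char),
    date_created date,
    vial_number  integer
);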
Option 2: Put every attribute which applies to any sample into the SAMPLE table.
Most columns of each entry would most likely be NULL; however, all of the fields are stored together.
Option 3: Create _VALUE_ tables for each Oracle data type used.
This option is far more complex. Getting all of the attributes for a sample requires accessing all of the tables below. However, the system can expand dynamically without separate tables for each new sample type.
SAMPLE:
    sample_id*
    sample_template_id (FK)

SAMPLE_TEMPLATE:
    sample_template_id*
    version*
    status
    date_created
    name

SAMPLE_ATTR_OF:
    sample_template_id* (FK)
    sample_attribute_id* (FK)

SAMPLE_ATTRIBUTE:
    sample_attribute_id*
    name
    description

SAMPLE_NUMBER:
    sample_id* (FK)
    sample_attribute_id (FK)
    value

SAMPLE_DATE:
    sample_id* (FK)
    sample_attribute_id (FK)
    value
Option 4: (Add your own option)
To help with Googling, your third option looks a little like the Entity-Attribute-Value pattern, which has been discussed on Stack Overflow before, although often critically.
As others have suggested, if at all possible (e.g. once the system is up and running, few new attributes will appear), you should use your relational database in a conventional manner, with tables as types and columns as attributes - your option 1. The initial setup pain will be worth it later as your database gets to work the way it was designed to.
Another thing to consider: are you tied to Oracle? If not, there are non-relational databases out there like CouchDB that aren't constrained by up-front schemas in the same way as relational databases are.
Edit: you've asked about handling new attributes under option 1 (now 1a and 1b in the question)...
If option 1 is a suitable solution, then there are sufficiently few new attributes that the overhead of altering the database schema to accommodate them is acceptable. You'll be writing database scripts to alter tables and add columns anyway, so the provision of a default value can be handled easily in these scripts.
Of the two option 1 variants (1a, 1b), my personal preference would be concrete table inheritance (1b):
It's the simplest thing that works;
It requires fewer joins for any given query;
Updates are simpler as you only write to one table (no FK relationship to maintain).
That said, either of these first options is a better solution than the others, and there's nothing wrong with the class table inheritance method if that's what you'd prefer.
It all comes down to how often genuinely new attributes will appear.
If the answer is "rarely" then the occasional schema update can cope.
If the answer is "a lot" then the relational DB model (which has fixed schemas baked-in) isn't the best tool for the job, so solutions that incorporate it (entity-attribute-value, XML columns and so on) will always seem a little laboured.
Good luck, and let us know how you solve this problem - it's a common issue that people run into.
Option 1, except that it's not a separate table for each set of attributes: create a separate table for each sample source.
That is, from your examples: samples from a boiler will have a tank number, boiler temp and lot number; acid samples have a vial number.
You say this is tedious, but I suggest that the work you put into gathering and encoding the meaning of the data now will pay huge dividends later - you'll save in the long term because your reports will be easier to write, understand and maintain. Those guys from the boiler room will ask "we need to know the total of X per tank, grouped by this set of boiler temperature ranges" and you'll say "no prob, give me half an hour", because you've done the hard yards already.
Option 2 would be my fall-back option if Option 1 turns out to be overkill. You'll still want to analyse what fields are needed and what their datatypes and constraints are.
Option 4 is to use a combination of options 1 and 2. You may find some attributes are shared among a lot of sample types, and it might make sense for these attributes to live in the main sample table; whereas other attributes will be very specific to certain sample types.
You should really go with Option 1. Although it is more tedious to set up, Options 2 and 3 will bite you back when you try to query your data. The queries will become more complex.
In fact, the most important part of storing the data, is querying it. You haven't mentioned how you are planning to use the data, and this is a big factor in the database design.
As far as I can see, the first option will be most easy to query. If you plan on using reporting tools or an ORM, they will prefer it as well, so you are keeping your options open.
In fact, if you find building the tables tedious, try using an ORM from the start. Good ORMs will help you with creating the tables from the get-go.
I would base your decision on how you usually see the data change. For instance, if you get 5-6 new attributes per day, you're never going to be able to keep up adding new columns. In this case you should create columns for the 'standard' attributes and add a key/value layout similar to your 'Option 3'.
If you don't expect to see this, I'd go with Option 1 for now, and modify your design to 'Option 3' only if you get to the point that it is turning into too much work. It could end up that you have 25 attributes added in the first few weeks and then nothing for several months. In which case you'll be glad you didn't do the extra work.
As for Option 2, I generally advise against this, as NULL in a relational database means the value is 'unknown', not that it 'doesn't apply' to a specific record. Though I have disagreed on this in the past with people I generally respect, so I wouldn't start any wars over it.
Whatever you do, option 3 is horrible: every query will have to join the data just to reconstruct a SAMPLE.
It sounds like you have some generic SAMPLE fields which need to be joined with more specific data for the type of sample. Have you considered some user-defined fields?
Example:
SAMPLE_BASE: sample_id(PK), version, status, date_create, name, userdata1, userdata2, userdata3
SAMPLE_BOILER: sample_id (FK), tank_number, boiler_temp, lot_number
This might be a dumb question, but what do you need to do with the attribute values? If you only need to display the data, then just store them in one field, perhaps in XML or some serialised format.
You could always use a template table to define a sample 'type' and the available fields you display for the purposes of a data entry form.
If you need to filter on them, the only efficient model is option 2. As everyone else is saying, the entity-attribute-value style of option 3 is somewhat mental and no real fun to work with. I've tried it myself in the past, and once implemented I wished I hadn't bothered.
Try to design your database around how your users need to interact with it (and thus how you need to query it), rather than just modelling the data.
If the set of sample attributes was relatively static then the pragmatic solution that would make your life easier in the long run would be option #2 - these are all attributes of a SAMPLE so they should all be in the same table.
Ok - you could put together a nice object hierarchy of base attributes with various extensions, but it would be more trouble than it's worth. Keep it simple. You could always put together a few views over subsets of the sample attributes.
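For example, with everything on the SAMPLE table (option #2), a hypothetical per-type view might look like this (column names are taken from the question; filtering on a mandatory type-specific column is just one way to pick out boiler samples):

create or replace view boiler_sample as
select sample_id, name, date_created, tank_number, boiler_temp, lot_number
from sample
where tank_number is not null;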
I would only go for a variant of your option #3 if the list of sample attributes was very dynamic and you needed your users to be able to create their own fields.
In terms of implementing dynamic user-defined fields, you might first like to read through Tom Kyte's comments on this question. Now, Tom can be pretty insistent in his views, but I take from his comments that you have to be very sure that you really need the flexibility for your users to add fields on the fly before you go about doing it. If you really need to do it, then don't create a table for each data type - that's going too far - just store everything in a varchar2 in a standard way and flag each attribute with an appropriate data type.
create table sample (
    sample_id integer,
    name varchar2(120 char),
    constraint pk_sample primary key (sample_id)
);

create table attribute (
    attribute_id integer,
    name varchar2(120 char) not null,
    data_type varchar2(30 char) not null,
    constraint pk_attribute primary key (attribute_id)
);

create table sample_attribute (
    sample_id integer,
    attribute_id integer,
    value varchar2(4000 char),
    constraint pk_sample_attribute primary key (sample_id, attribute_id)
);
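And to read a sample back in row form you have to pivot it yourself, something like this (the attribute names here are hypothetical):

select s.sample_id,
       s.name,
       max(case when a.name = 'tank_number' then sa.value end) as tank_number,
       max(case when a.name = 'boiler_temp' then sa.value end) as boiler_temp
from sample s
join sample_attribute sa on sa.sample_id = s.sample_id
join attribute a on a.attribute_id = sa.attribute_id
group by s.sample_id, s.name;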
Now... that just looks evil doesn't it? Do you really want to go there?
I work on both a commercial and a home-made system where users have the ability to create their own fields/controls dynamically. This is a simplified version of how it works.
Tables:
Pages
Controls
Values
A page is just a container for one or more controls. It can be given a name.
Controls are linked to pages and represents user input controls.
A control contains what datatype it is (int, string etc) and how it should be represented to the user (textbox, dropdown, checkboxes etc).
Values are the actual data that the users have typed into the controls; a value contains one column for every datatype it can represent (int, string, etc.), and depending on the control type, the relevant column is set with the user input.
There is an additional column in Values which specifies which group the value belong to.
Each time a user fills in a form of controls and clicks save, the values typed into the controls are saved into the same group, so that we know they belong together (the group id is an incremental counter).
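A minimal sketch of what those three tables might look like (table names and types are illustrative, not the actual systems):

create table pages (
    page_id integer primary key,
    name    varchar2(120 char)
);

create table controls (
    control_id   integer primary key,
    page_id      integer references pages,
    data_type    varchar2(30 char),   -- int, string, ...
    control_type varchar2(30 char)    -- textbox, dropdown, checkboxes, ...
);

create table control_values (
    value_id     integer primary key,
    control_id   integer references controls,
    group_id     integer,                 -- values saved together share a group
    int_value    integer,
    string_value varchar2(4000 char)
);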
CodeSpeaker,
I like your answer; it's pointing me in the right direction for a similar problem.
But how would you handle drop-down list values?
I am thinking of a Lookup table of values so that many lookups link to one UserDefinedField.
But I also have another problem to add to the mix: each field must have multiple linked languages, so each value must link to the equivalent value for multiple languages.
Maybe I'm thinking too hard about this as I've got about 6 tables so far.