Dynamics365 web api update vs upsert - dynamics-crm

I'm quite confused. I understand the conceptual difference between the two, but I can't see any difference in the actual implementation here.
Here is an excerpt from the docs:
Basic update
Update operations use the HTTP PATCH verb. Pass a JSON object
containing the properties you want to update to the URI that
represents the entity. A response with a status of 204 will be
returned if the update is successful.
This example updates an existing account record with the accountid
value of 00000000-0000-0000-0000-000000000001.
PATCH [Organization URI]/api/data/v9.0/accounts(00000000-0000-0000-0000-000000000001) HTTP/1.1
Content-Type: application/json
OData-MaxVersion: 4.0
OData-Version: 4.0
{
  "name": "Updated Sample Account ",
  "creditonhold": true,
  "address1_latitude": 47.639583,
  "description": "This is the updated description of the sample account",
  "revenue": 6000000,
  "accountcategorycode": 2
}
Upsert
An upsert operation is exactly like an update. It uses a PATCH
request and uses a URI to reference a specific entity. The difference
is that if the entity doesn’t exist it will be created. If it already
exists, it will be updated. Normally when creating a new entity you
will let the system assign a unique identifier. This is a best
practice. But if you need to create a record with a specific id value,
an upsert operation provides a way to do this. This can be valuable in
situations where you are synchronizing data between different systems.
Sometimes there are situations where you want to perform an upsert,
but you want to prevent one of the potential default actions: either
create or update. You can accomplish this through the addition of
If-Match or If-None-Match headers. For more information, see Limit
upsert operations.
So in reality the Basic update described above is effectively an upsert, and to get a true basic update (update if the given account exists, 404 otherwise) I need to add the If-Match: * header to the PATCH request.
Did I understand that correctly?

I have the same understanding here as you do. In practice, I've found that a PATCH request without If-Match: * will do an insert if the record doesn't exist. The puzzling piece, however, is that when the upsert succeeds in inserting a record, it returns a 404 error. When I've included If-Match: *, I've received a 400 error when the update failed.
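For reference, here is a minimal sketch of how those headers might be sent from code. The org URL, account id and token handling below are placeholders for illustration, not values from the docs excerpt above.

// Sketch: a PATCH that behaves as a plain update (no create) thanks to If-Match: *.
const orgUrl = "https://yourorg.api.crm.dynamics.com";            // placeholder
const accountId = "00000000-0000-0000-0000-000000000001";
const accessToken = "<access token acquired via your auth flow>"; // placeholder

async function updateAccount(): Promise<void> {
  const response = await fetch(`${orgUrl}/api/data/v9.0/accounts(${accountId})`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0",
      "If-Match": "*",          // only update: fail if the record does not exist
      // "If-None-Match": "*",  // the opposite restriction: only create, never update
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({ name: "Updated Sample Account" }),
  });

  if (response.status === 204) {
    console.log("Record updated");
  } else {
    // With If-Match: * a missing record surfaces as an error instead of a silent create.
    console.log(`Update failed with status ${response.status}`);
  }
}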

Related

Does GraphQL ever redundantly visit fields during execution?

I was reading this article and it used the following query:
{
  getAuthor(id: 5) {
    name
    posts {
      title
      author {
        name # this will be the same as the name above
      }
    }
  }
}
Which was parsed and turned into an AST like the one below:
Clearly it is bringing back redundant information (the Author's name is asked for twice), so I was wondering how GraphQL handles that. Does it redundantly fetch that information? Is the diagram a proper depiction of the actual AST?
Any insight into the query parsing and execution process relevant to this would be appreciated, thanks.
Edit: I know this may vary depending on the actual implementation of the GraphQL server, but I was wondering what the standard / best practice is.
Yes, GraphQL may fetch the same information multiple times in this scenario. GraphQL does not memoize the resolver function, so even if it is called with the same arguments and the same parent value, it will still run again.
This is a fairly common problem when working with databases in GraphQL. The most common solution is to utilize DataLoader, which not only batches your database requests, but also provides a cache for those requests for the duration of the GraphQL request. This way, even if a particular record is requested multiple times, it will only be fetched from the database once.
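A minimal sketch of that DataLoader setup in TypeScript, assuming a hypothetical db.getAuthorsByIds batch query and an Author shape (both stand-ins for your own data layer):

import DataLoader from "dataloader";

interface Author {
  id: number;
  name: string;
}

// Placeholder for your own data access layer.
declare const db: { getAuthorsByIds(ids: number[]): Promise<Author[]> };

// The loader batches lookups made in the same tick and caches the results
// for the duration of the request.
const authorLoader = new DataLoader<number, Author>(async (ids) => {
  const rows = await db.getAuthorsByIds([...ids]);
  // DataLoader expects results in the same order as the requested keys.
  return ids.map(
    (id) => rows.find((row) => row.id === id) ?? new Error(`Author ${id} not found`)
  );
});

// Both the root field and the nested Post.author field go through the loader,
// so the author requested twice in the example query is only fetched once.
const resolvers = {
  Query: {
    getAuthor: (_parent: unknown, args: { id: number }) => authorLoader.load(args.id),
  },
  Post: {
    author: (post: { authorId: number }) => authorLoader.load(post.authorId),
  },
};

In a real server the loader is usually created per request (for example in the context factory) so its cache is scoped to a single GraphQL request rather than shared between users.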
The alternative (albeit more complicated) approach is to compose a single database query based on the requested fields that executes at the root level. For example, our resolver for getAuthor could construct a single query that would return the author, their posts and each post's author. With this approach, we can skip writing resolvers for the posts field on the Author type or the author field on the Post type and just utilize the default resolver behavior. However, in order to do this and avoid overfetching, we have to parse the GraphQL request inside the getAuthor resolver in order to determine which fields were requested and should therefore be included in our database query.
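Here is a rough sketch of that field inspection, using the GraphQLResolveInfo object that graphql-js passes to every resolver. The db.fetchAuthor call is a hypothetical stand-in for the composed database query.

import { GraphQLResolveInfo, FieldNode, Kind } from "graphql";

// Placeholder for a data layer that can build one joined query at the root.
declare const db: {
  fetchAuthor(id: number, opts: { includePosts: boolean }): Promise<unknown>;
};

function getAuthorResolver(
  _parent: unknown,
  args: { id: number },
  _context: unknown,
  info: GraphQLResolveInfo
) {
  // Look at which sub-fields of getAuthor were actually requested.
  const selections = info.fieldNodes[0].selectionSet?.selections ?? [];
  const requested = selections
    .filter((sel): sel is FieldNode => sel.kind === Kind.FIELD)
    .map((sel) => sel.name.value); // e.g. ["name", "posts"]

  // Join in posts (and their authors) only when the query asked for them.
  return db.fetchAuthor(args.id, { includePosts: requested.includes("posts") });
}

Note that this only inspects one level of the selection set and ignores fragments; a real implementation would also need to walk fragment spreads and deeper selections.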

Cannot retrieve selected lookups using the OData endpoint

I'm unable to retrieve lookup fields when filtering an OData request.
I used the following request:
https://mycrm.api.crm4.dynamics.com/api/data/v9.1/contacts(guid)?$select=contactid,ownerid,createdby,new_expirefin,new_testcumul_stat
This request retrieves contactid, new_expirefin, and new_testcumul_stat, but no trace of ownerid and createdby.
On the other hand, this request:
https://mycrm.api.crm4.dynamics.com/api/data/v9.1/contacts(guid)
returns all fields, including those missing from the other request. Lookups are returned as GUIDs.
Both requests use the
Prefer: odata.include-annotations="*"
header. Given that I cannot know in advance which columns are lookups (I'm working on a generic library), how can I retrieve those lookups?
Using the format _lookupName_value allows you to retrieve the lookups:
https://myOrg.api.crm.dynamics.com/api/data/v9.1/contacts(guid)?$select=contactid,fullname,_ownerid_value,_createdby_value
That, of course, leaves the problem of knowing which fields are lookups and thus need this formatting.
This can help:
https://myOrg.api.crm.dynamics.com/api/data/v9.1/EntityDefinitions(LogicalName='contact')?$select=LogicalName&$expand=ManyToOneRelationships($select=ReferencingAttribute,ReferencedEntity)
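Putting those two requests together, here is a sketch of how a generic library could discover the lookup columns from the metadata and then build the _name_value $select automatically. The base URL, auth headers and chosen columns are placeholders.

const baseUrl = "https://myOrg.api.crm.dynamics.com/api/data/v9.1"; // placeholder

async function getContactWithLookups(contactId: string, headers: HeadersInit) {
  // 1. Ask the metadata which attributes on contact are lookups.
  const metaRes = await fetch(
    `${baseUrl}/EntityDefinitions(LogicalName='contact')` +
      `?$select=LogicalName&$expand=ManyToOneRelationships($select=ReferencingAttribute,ReferencedEntity)`,
    { headers }
  );
  const meta = await metaRes.json();
  const lookupColumns: string[] = meta.ManyToOneRelationships.map(
    (rel: { ReferencingAttribute: string }) => rel.ReferencingAttribute
  );

  // 2. Build the $select: plain columns as-is, lookup columns as _name_value.
  const wanted = ["contactid", "fullname", "ownerid", "createdby"]; // example columns
  const select = wanted
    .map((col) => (lookupColumns.includes(col) ? `_${col}_value` : col))
    .join(",");

  const res = await fetch(`${baseUrl}/contacts(${contactId})?$select=${select}`, { headers });
  return res.json();
}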

WebAPI - odata service adding ForeignKey

I am building my model using ODataModelBuilder. I am trying to create a navigation property; however, in the metadata I don't see any foreign key indication. In my solution I am not using EF, so there is no ForeignKey attribute. Is it possible to add it in code?
As you clarified in your comment, the reason you want to add foreign key information is because your client application is not including related entities when you query the main entity. I don't think foreign keys are the problem here.
As an example, I'll use two entity types: Customer and Order. Every Customer has some number of associated Orders, so I have a navigation property on Customer called Orders that points to a collection of Orders. If I issue a GET request to /MyService.svc/Customers(1), the server will respond with all of the Customer's information as well as URLs that point to the related Order entities*. I won't, by default, get the data of each related Order within the same payload.
If you want a request to Customers(1) to include all of the data of its associated Orders, you would add the $expand query option to the request URI: /MyService.svc/Customers(1)?$expand=Orders. Using the WCF Data Services client (DataServiceContext), you can do this with .Expand():
DataServiceQuery<Customer> query = context.Customers.Expand("Orders");
However, WebAPI OData doesn't currently support $expand (the latest nightly builds do though, so this will change soon).
The other approach would be to make a separate request to fill in the missing Order data. You can use the LoadProperty() method to do this:
context.LoadProperty(customer, "Orders");
The LoadProperty approach should work with WebAPI as it stands today.
I know this doesn't answer your original question, but I hope it addresses your intent.
*In JSON, which is the default format for WebAPI OData services, no links will show up on the wire, but they are still there "in spirit". The client is expected to be able to compute them on its own, which the WCF Data Services Client does.

Why can't I trust a client-generated GUID? Does treating the PK as a composite of client-GUID and a server-GUID solve anything?

I'm building off of a previous discussion I had with Jon Skeet.
The gist of my scenario is as follows:
Client application has the ability to create new 'PlaylistItem' objects which need to be persisted in a database.
Use case requires the PlaylistItem to be created in such a way that the client does not have to wait on a response from the server before displaying the PlaylistItem.
Client generates a UUID for the PlaylistItem, shows the PlaylistItem in the client, and then issues a save command to the server.
At this point, I understand that it would be bad practice to use the UUID generated by the client as the object's PK in my database. The reason for this is that a malicious user could modify the generated UUID and force PK collisions on my DB.
To mitigate any damages which would be incurred from forcing a PK collision on PlaylistItem, I chose to define the PK as a composite of two IDs - the client-generated UUID and a server-generated GUID. The server-generated GUID is the PlaylistItem's Playlist's ID.
Now, I have been using this solution for a while, but I don't understand why, or believe that, my solution is any better than simply trusting the client ID. If the user is able to force a PK collision with another user's PlaylistItem objects, then I think I should assume they could also provide that user's PlaylistId. They could still force collisions.
So... yeah. What's the proper way of doing something like this? Allow the client to create a UUID and have the server give a thumbs up/down on whether it saved successfully? If a collision is found, revert the client changes and notify it of the detected collision?
You can trust a client-generated UUID or similar globally unique identifier on the server. Just do it sensibly.
Most of your tables/collections will also hold a userId or be able to associate themselves with a userId through a FK.
If you're doing an insert and a malicious user uses an existing key then the insert will fail because the record/document already exists.
If you're doing an update then you should validate that the logged in user owns that record or is authorized (e.g. admin user) to update it. If pure ownership is being enforced (i.e. no admin user scenario) then your where clause in locating the record/document would include both the Id and the userId. Now technically the userId is redundant in the where clause because the Id will uniquely find one record/document. However adding the userId makes sure the record belongs to the user that's doing the update and not the malicious user.
I'm assuming that there's an encrypted token or session of some sort that the server is decrypting to ascertain the userId and that this is not supplied by the client otherwise that's obviously not safe.
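As an illustration of that where clause, here is a minimal sketch using node-postgres and a hypothetical playlist_items table (table and column names are assumptions):

import { Pool } from "pg";

const pool = new Pool();

async function updatePlaylistItem(
  itemId: string, // client-generated UUID
  userId: string, // taken from the decrypted session/token, never from the request body
  title: string
): Promise<boolean> {
  const result = await pool.query(
    `UPDATE playlist_items
        SET title = $1
      WHERE id = $2
        AND user_id = $3`, // redundant for uniqueness, essential for ownership
    [title, itemId, userId]
  );
  // Zero rows affected means the item either doesn't exist or belongs to
  // someone else; treat both cases the same and report failure.
  return (result.rowCount ?? 0) > 0;
}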
A nice solution would be the following, quoting Sam Newman's "Building Microservices":
The calling system would POST a BatchRequest, perhaps passing in a
location where a file can be placed with all the data. The Customer
service would return a HTTP 202 response code, indicating that the
request was accepted, but has not yet been processed. The calling
system could then poll the resource waiting until it retrieves a 201
Created indicating that the request has been fulfilled
So in your case, you could POST to the server but immediately get a response like "I will save the PlaylistItem and I promise its Id will be this one". The client (and user) can then continue while the server (maybe not even the API itself, but some background processor that got a message from the API) takes its time to process, validate and do other, possibly heavy, logic until it saves the entity. As previously stated, the API can provide a GET endpoint for the status of that request, and the client can poll it and act accordingly in case of an error.
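A small sketch of that accept-then-poll flow from the client's side; the endpoints and status conventions here are illustrative, not prescribed by any particular API.

async function savePlaylistItem(item: { id: string; title: string }) {
  const res = await fetch("/api/playlist-items", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(item),
  });

  if (res.status !== 202) throw new Error(`Unexpected status ${res.status}`);
  const statusUrl = res.headers.get("Location"); // where to poll for progress
  if (!statusUrl) throw new Error("No status URL returned");

  // Poll until the background processor reports success or failure.
  for (let attempt = 0; attempt < 30; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const poll = await fetch(statusUrl);
    if (poll.status === 201) return poll.json();           // created, all good
    if (poll.status >= 400) throw new Error("Save failed"); // e.g. collision detected
    // anything else -> still processing, keep polling
  }
  throw new Error("Timed out waiting for the item to be saved");
}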

JSON Results - Storing ID and/or Reference

I'm trying to build a simple reviews site for a very specific search parameter, which I can pull information back from Google Places API. I understand I cannot store any information other than what Google says I can, and it sounds like I can only store the "reference" parameter and the "id" parameter.
Upon creation of a review for a place returned from Google, I need to store some identifier so that when someone else searches Google Places through my site, I can do an AJAX call to my DB and pull all reviews for that Place.
Ultimately, my question is, which key should I store? Or both?
As per the documentation:
id contains a unique stable identifier denoting this place. This
identifier may not be used to retrieve information about this place,
but is guaranteed to be valid across sessions. It can be used to
consolidate data about this Place, and to verify the identity of a
Place across separate searches.
reference contains a unique token that you can use to retrieve
additional information about this place in a Place Details request.
You can store this token and use it at any time in future to refresh
cached data about this Place, but the same token is not guaranteed to
be returned for any given Place across different searches.
It would make sense to store both: reference to retrieve additional information about the place from Google Places, and id to group your place reviews in your DB.
As of June 24, 2014 the id and reference fields are deprecated. placeId (for requests) and place_id (in responses) should be used instead.
The Places API currently returns a place_id in all responses, and accepts a placeid in the Place Details and Place Delete requests. Soon after June 24, 2015, the API will stop returning the id and reference fields in responses. Some time later, the API will no longer accept the reference in requests. We recommend that you update your code to use the new place ID instead of id and reference as soon as possible.
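Given that deprecation, a sketch of the updated approach: key your stored reviews by place_id and use it to refresh place details on demand. The API key handling and the review shape are assumptions.

const PLACES_KEY = "<your API key>"; // placeholder

// Refresh cached details for a place using its place_id.
async function getPlaceDetails(placeId: string) {
  const url =
    "https://maps.googleapis.com/maps/api/place/details/json" +
    `?place_id=${encodeURIComponent(placeId)}&key=${PLACES_KEY}`;
  const res = await fetch(url);
  const data = await res.json();
  return data.result; // name, address, etc. to display alongside your own reviews
}

// Reviews in your own database are keyed by the same place_id, so a search
// result can be matched to its stored reviews with a single lookup.
interface Review {
  placeId: string; // Google place_id
  rating: number;
  text: string;
}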
