I'm building a microservice system with multiple disconnected components, and I'm currently trying to work out how to determine which fields on an object should be updated based on the protobuf data provided.
The flow is this:
The client sends a JSON request to an API.
The API translates the JSON data into a protobuf struct, which is then sent along to the microservice responsible for handling it.
The microservice receives the data from the API and performs the appropriate action on it; in this case, I'm trying to change a single value in a MySQL table, such as a client's email address.
Now, the problem I have is that since protobuf (understandably) doesn't allow pointers, the protobuf object will contain zero-values for everything not provided. This means that if a customer wants to update their email address, I can't know if they also set IncludeInMailLists to false - or if it was simply not provided (having its zero-value) and shouldn't change.
The question is: how will I - from the protobuf object - know if a value is expressly set to 0, or just not provided?
My current solution is pretty much having a special UpdateCustomer object which also has an array of Fields specifying which fields the microservice should care about, but it feels like a bad solution.
Someone must have solved this better already. How should I implement it?
Protobuf field masks are one way:
https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.FieldMask
https://github.com/golang/protobuf/issues/225
But if you are using gRPC then there's a (sort of) built-in way.
gRPC wrappers
Since proto3 (protobuf v3) there has been no distinction between a primitive that is not set and a primitive that has been set to the "zero value" (false, 0, "", etc.).
Instead you can use objects, or in protobuf terms a "message", since message fields can be nil/null. You've not mentioned what language you are working in, but hopefully these examples make sense.
Given an RPC service such as:
import "google/protobuf/wrappers.proto";
service Users {
rpc UpdateUser(UpdateUserRequest) returns (UpdateUserResponse)
}
message UpdateUserRequest {
int32 user_id = 1;
google.protobuf.StringValue email = 2;
}
message UpdateUserResponse {}
Note the import "google/protobuf/wrappers.proto"; is important.
It gives you access to the Google protobuf wrapper types (source code here). Because these are messages rather than primitives, an unset field can be detected.
gRPC-generated code in Java gives you methods such as .hasEmail(), which returns true if the value is present. The getter on an unset value will still return the zero value. I think the golang version uses pointers that you can test for nil instead of an explicit hasX() method.
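In Go, the generated field for a google.protobuf.StringValue is a *wrapperspb.StringValue pointer, so a nil check distinguishes "not provided" from "explicitly set to empty". A minimal sketch, assuming the UpdateUserRequest above; the updateEmail helper and the sample values are purely illustrative:

package main

import (
    "fmt"

    "google.golang.org/protobuf/types/known/wrapperspb"
)

// updateEmail is a hypothetical server-side helper. The email argument maps
// to the google.protobuf.StringValue field from the request: nil means the
// client did not set it, a non-nil value holds whatever was sent (even "").
func updateEmail(current string, email *wrapperspb.StringValue) string {
    if email == nil {
        return current // field not provided: keep the stored value
    }
    return email.GetValue() // field provided: may legitimately be empty
}

func main() {
    fmt.Println(updateEmail("old@example.com", nil))                           // old@example.com
    fmt.Println(updateEmail("old@example.com", wrapperspb.String("new@x.io"))) // new@x.io
    fmt.Println(updateEmail("old@example.com", wrapperspb.String("")))         // explicitly cleared
}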
More info / discussion in this github issue
Related
I am developing an API in a microservice for the Invoice entity. It takes as input a list of Purchase Order Item (PO Item) identifiers; for example, PO# + productIdentifier together can be used to identify a POItem uniquely. The response of the API is the invoiced quantity of each PO Item.
Input Model -
input GetInvoicedQuantityForPOItemsRequest {
poItemIdentifierList : POItemIdentifierList
}
Structures
list POItemIdentifierList {
member : POItemIdentifier
}
structure POItemIdentifier {
purchaseOrderNumber : String,
productIdentifier : Long
}
Invoiced Quantity of a POItem = SUM of Quantity of Invoice Items created from that PO Item.
Note : A single PO can be used to create multiple Invoices. An Invoice can be created from multiple POs.
I am quite new to REST, and so far we have been using RPC endpoints in our legacy service. But now I am building a new service where I am defining endpoints in REST format (for example, CreateInvoice has been changed to POST /invoice), and I need some suggestions from the Stack Overflow community on the right approach for defining the REST endpoint of this API, or whether we should keep it in RPC format.
RPC endpoint for this API in legacy system : POST /getInvoicedQuantityForPOItems
Our first attempt at REST for this is: POST /invoice/items/invoicedQuantityForPOItems. But this URI does not look like a noun; it is a verb.
this URI does not look like a noun; it is a verb.
REST doesn't care what spelling conventions you use for your resource identifiers.
Example: this URI works exactly the same way that every other URI on the web works, even though "it looks like a verb"
https://www.merriam-webster.com/dictionary/post
The explanation is that, in HTTP, the semantics of the request are not determined by parsing the identifier, but instead by parsing the method token (GET, POST, PUT, etc).
So the machines just don't care about the spelling of the identifier (besides purely mechanical concerns, like making sure it satisfies the RFC 3986 production rules).
URIs are identifiers of resources. Resources are generalizations of documents. Therefore, human beings are likely to be happier if your identifier looks like the name of a document rather than the name of an action.
Where it gets tricky: HTTP is an application protocol whose application domain is the transfer of files over a network. The methods in HTTP are about retrieving documents and metadata (GET/HEAD) or are about modifying documents (PATCH/POST/PUT). The notion of a function, or a parameterized query, doesn't really exist in HTTP.
The usual compromise is to make the parameters part of the identifier for a document, then use a GET request to fetch the current representation of that document. On the server, you parse the identifier to obtain the arguments you need to generate the current representation of the document.
So the identifier for this might look something like
/invoicedQuantityForPOItems?purchaseOrder=12345&productIdentifiers=567,890
An application/x-www-form-urlencoded representation of key value pairs embedded in the query part of the URI is a common spelling convention on the web, primarily because that's how HTML forms work with GET actions. Other identifier conventions can certainly work, though you'll probably be happier in the long term if you stick to a convention that is easily described by a URI template.
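If it helps, here is a minimal Go sketch of the server side for the identifier shown above; the handler name and the plain-text response are hypothetical, the point is just that the arguments are parsed out of the query part of the URI:

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// Handles GET /invoicedQuantityForPOItems?purchaseOrder=...&productIdentifiers=...
func invoicedQuantityForPOItems(w http.ResponseWriter, r *http.Request) {
    q := r.URL.Query()
    purchaseOrder := q.Get("purchaseOrder")
    productIdentifiers := strings.Split(q.Get("productIdentifiers"), ",")

    // Look up the invoiced quantity for each PO item here, then render the
    // current representation of the "document" this URI identifies.
    fmt.Fprintf(w, "PO %s, items %v\n", purchaseOrder, productIdentifiers)
}

func main() {
    http.HandleFunc("/invoicedQuantityForPOItems", invoicedQuantityForPOItems)
    http.ListenAndServe(":8080", nil)
}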
I am pondering schema design with regard to explicit vs. implicit relations, and when to choose one over the other.
for example:
in an imaginary schema with 2 custom types, Author and Post, each with several properties, a Post type can reference an Author in 1 of 2 ways:
explicit: having an Author-type property
implicit: having a scalar value (such as an ID) that indirectly points to the author
When designing a schema, what should be my compass in this kind of decision making?
thanks in advance
There's absolutely no value to the client in only returning the ID of a related resource when you could just expose a field that would return the entire resource. Exposing only the ID will mean the client will have to make subsequent requests to your service to fetch the related resources, instead of being able to fetch the entire data graph in one request.
In the context of other services, like a REST API, it might make sense to only return the ID or URL of a related resource. This is because in those cases, the payload is of a fixed size, so returning every related resource by default can quickly and unnecessarily bloat a response. In GraphQL, however, request payloads are client-driven so this is not a concern -- the client will always get exactly what it asks for. If the client needs only the author's ID, they may still fetch just that field through the author field -- while allowing a more complete Author object to be fetched in other requests or by other clients.
I have an app written with reason-react using apollo-client. I have defined some fragments on the frontend to basically reuse some field definitions. I'm setting up automated tests for a component that uses fragments, but I keep getting this warning saying I need to use the IntrospectionFragmentMatcher.
'You are using the simple (heuristic) fragment matcher, but your queries contain union or interface types. Apollo Client will not be able to accurately map fragments. To make this error go away, use the `IntrospectionFragmentMatcher` as described in the docs: https://www.apollographql.com/docs/react/advanced/fragments.html#fragment-matcher'
I've tried setting up the fragment matcher according to the docs. The codegen result returns no types:
{
"__schema": {
"types": []
}
}
When I queried my server and looked at the manual method recommended by apollo-client, I noticed it would also return no types.
Another strange thing is that when I don't use the fragment matcher, I get the mocked response back, just with the warnings from Apollo. If I do use it, then the mocked response doesn't return correctly.
Why would I query the GraphQL API for fragments defined in my frontend code? And why do I only receive these errors when running the tests with mock data, but not when running my actual application?
As the error states, the default (heuristic) fragment matcher does not work on union or interface types. You will need to use Apollo's IntrospectionFragmentMatcher. It works by asking the server (introspecting) for information about your schema types, and then providing that information to the cache so that it can match the fields accurately. It's not querying the server for information about the fragments you are defining on the front end; it's asking for data about the GraphQL schema that must be defined on your back end so that it can properly relate the two. There is an example in the documentation, and more information here.
As for why your server is not returning any types, that is a separate issue that would require more info to debug. If you're using Apollo Server, double-check your schema to make sure all the necessary types are defined properly and that you are passing them into the server when it's initialized.
We often have use cases where we only want to update a subset of the fields on a resource. So if we have a resource Person:
type Person struct {
    Age        int
    Name       string
    Otherfield string
}
Say the calling client only wants to update the Age field. How would an endpoint be normally set up to handle this?
I believe this should be done with a PATCH request, with only the fields being set as part of the payload, ie:
{
    "Age": 21
}
However, this won't work with proto3 because as far as I know there are no null fields, only default values. This won't work in many cases where the default value is valid.
Looking at Google's own protobuf files (e.g. here), they use FieldMask for partial updates.
A FieldMask object is passed along with the request, and has the form (in JSON):
{
mask: "Person.Age"
}
This allows the client to tell the server which fields they wish to update, without counting on the partial message itself to figure this out.
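As a rough sketch of how a server might apply such a mask in Go; the lowercase path names and the applyUpdate helper are illustrative assumptions, not part of the FieldMask API:

package main

import (
    "fmt"

    "google.golang.org/protobuf/types/known/fieldmaskpb"
)

// Person mirrors the resource from the question; in practice it would be
// generated from a .proto definition.
type Person struct {
    Age        int
    Name       string
    Otherfield string
}

// applyUpdate copies only the fields named in the mask from src onto dst,
// so zero values outside the mask never overwrite stored data.
func applyUpdate(dst, src *Person, mask *fieldmaskpb.FieldMask) {
    for _, path := range mask.GetPaths() {
        switch path {
        case "age":
            dst.Age = src.Age
        case "name":
            dst.Name = src.Name
        case "otherfield":
            dst.Otherfield = src.Otherfield
        }
    }
}

func main() {
    stored := &Person{Age: 20, Name: "Alice", Otherfield: "keep me"}
    patch := &Person{Age: 21} // Name and Otherfield are zero values
    mask := &fieldmaskpb.FieldMask{Paths: []string{"age"}}

    applyUpdate(stored, patch, mask)
    fmt.Printf("%+v\n", stored) // {Age:21 Name:Alice Otherfield:keep me}
}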
I think this adds unnecessary complexity on (each!) client, but we couldn't find any other way to achieve partial updates with proto3.
You can see full documentation of FieldMask here.
Note that it can also be used to filter out responses if the client doesn't need the entire object.
I'm trying to understand the plug-in sample from here.
There's this condition:
// The InputParameters collection contains all the data passed in the message request.
if (context.InputParameters.Contains("Target") &&
context.InputParameters["Target"] is Entity)
Speaking generally, not just with regard to this sample, on what prior knowledge should I base my decision to access a specific property? How could I have known to test whether the InputParameters contains a "Target" key (I assume I'm not supposed to guess it)?
And on what basis could I have known to ask whether the "Target" mapped value is of Entity type, and not some other type?
I found this post from 2 years ago, and I've found this webpage, saying (emphasis is mine):
Within a plugin, the values in context.InputParameters and context.OutputParameters depend on the message and the stage that you register the plugin on. For example, "Target" is present in InputParameters for the Create and Update messages, but not the SetState message. Also, OutputParameters only exist in a Post stage, and not in a Pre stage. There is no single source of documentation that provides the complete set of InputParameters and OutputParameters by message and stage.
From my searching, a single source still doesn't exist, but maybe the possible values can be found using the Dynamics Online platform, somewhere deep down in the Settings menu? Any source would be great.
I know this is an "old" question that has already been answered, but I think this can be helpful. I've built a small web page that contains all the messages with all the Input/Output parameters. You can access it from here:
The best practice for doing this is to use a strongly typed approach. If, for example, you want to know which properties are available on a CreateRequest, you would do:
var createReq = new CreateRequest() { Parameters = context.InputParameters };
Entity target = createReq.Target; // Target has type Entity
Take a look at the full blog post explaining this approach: Tip: Proper handling of Plugin InputParameters
Original answer:
It depends on which request we are talking about. See Understand the data context passed to a plug-in on MSDN.
As an example, take a look at CreateRequest. One property of CreateRequest is named Target, which is of type Entity. This is the entity currently being operated upon by the platform. To access the data of the entity you would use the name “Target” as the key in the input parameter collection. You also need to cast the returned instance.
Note that not all requests contain a Target property that is of type Entity, so you have to look at each request or response. For example, DeleteRequest has a Target property, but its type is EntityReference.
In summary: Look at the actual request, e.g the CreateRequest.
In 2011 someone actually generated typed properties based on the message type. Kind of neat: https://xrmpalmer.wordpress.com/2013/05/27/crm2011-plugin-inputparameter-and-outputparameter-helper/
It will show you what parameters are possible per message.