Allow multiple provider states with parameters (Golang)

As our team (namely myself and two other developers) has spiked on Pact over the past week or so, one area of concern is the inability to associate parameters with provider states. Without this key feature (which is slated for the version 3 release), we likely will not get buy-in from each of our respective service sub-teams.
@MattFellows - Any projections on when version 3 might be available for Go? Any chance we can get this feature earlier?
Allow multiple provider states with parameters
In previous versions, provider states are defined as a descriptive string. There is no way to infer the data required for the state without encoding the values into the description.
{
  "providerState": "an alligator with the given name Mary exists and the user Fred is logged in"
}
The change would be:
{
  "providerStates": [
    {
      "name": "an alligator with the given name exists",
      "params": { "name": "Mary" }
    },
    {
      "name": "the user is logged in",
      "params": { "username": "Fred" }
    }
  ]
}

You are correct in that it won't be available until version 3.
You can still achieve what you are after, however. The state itself is just a handle for the Consumer to some set of data on the Provider - that can be a one-to-one or one-to-many mapping - it's completely up to you.
Typically the Provider is notified of the state during verification; it will then set up a test data fixture (often seeding a database) that establishes the 'state' of the entire system based on that reference, which allows the Consumer test to run.
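To make that concrete, here is a rough sketch of the idea in plain TypeScript (the state string and fixture shape are illustrative, not pact-go API):

```typescript
// Each provider state description is an opaque handle; the provider maps
// it to a seeding function that builds (or inserts) the required fixture.
type Fixture = Record<string, unknown>;

const stateHandlers: Record<string, () => Fixture> = {
  "an alligator with the given name Mary exists and the user Fred is logged in":
    () => ({
      alligators: [{ name: "Mary" }], // would normally be a DB insert
      session: { user: "Fred" },
    }),
};

function setupState(state: string): Fixture {
  const handler = stateHandlers[state];
  if (!handler) throw new Error(`no handler for provider state: ${state}`);
  return handler();
}
```

Encoding the parameters in the description string is exactly the v2-era workaround described above; v3's params field merely makes the mapping explicit.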
Whilst the ability to pass through parameters and multiple states is nice, it's somewhat of an advanced feature, and I very much doubt it will be the first problem you run into as a team. I've never needed it myself.
For a crude but effective example of this, take a look at the gin code in the examples folder of the project.

Related

Apollo Gateway: duplicate / identical types across subgraphs returns null results?

My team and I are trying to move away from our monolithic Apollo Server API pattern toward GQL micro services with Apollo Federation.
Before we get into any code, some preface: my team has used GraphQL Codegen for our monolith to make a single master schema by finding and combining all the various type defs scattered across any number of .graphql files. The resulting output is a src/generated/types.ts file, which several of our resolvers and utility functions import the generated types from. This of course won't work if we're looking to deploy our micro services in isolation.
So, moving towards using a gateway, and with the current goal of being able to continue using GraphQL Codegen to generate and import types, we've simply got some type defs duplicated for now in order to do local type generation. Any optimizations and deduping will occur when we get the time, or by necessity if that isn't something we can have as tech debt. 😬
Minus some redacted information for security purposes, this same file is duplicated across all subgraphs which the gateway consumes.
users.graphql
extend type Query {
  self: User
}
type User implements IDocument & ICreated & IUpdated & IDisplayName {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
  "ISO date time string for the time this resource was created"
  createdAt: String
  "Unique identifier for the user that created this resource"
  createdBy: ID
  "ISO date time string for the time this resource was last updated"
  updatedAt: String
  "Unique identifier for the user that last updated this resource"
  updatedBy: ID
  "A preformatted name safe to display in any HTML context"
  displayName: String
  "Email addresses"
  email: String
  "Determines if a user is a service account supporting applications"
  isServiceAccount: Boolean
}
extend type Mutation {
  user: UserMutation
}
type UserMutation {
  save(user: UserInput): User
}
input UserInput {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
  "A preformatted name safe to display in any HTML context"
  displayName: String
  "Email addresses"
  email: String
}
GraphQL Codegen generates types as expected, and the service compiles just fine. What's more, the Gateway also seems to have no problems (i.e. it compiles and runs) stitching together several subgraphs containing the duplicate types.
However, when I attempt to execute the following query in GraphQL Playground:
query Self {
  self {
    _id
    _key
    displayName
    email
  }
}
it just returns:
{
  "data": {
    "self": null
  }
}
If I change the Gateway's supergraphSdl to only grab just one micro service, thus avoiding type duplication, I get results as expected:
{
  "data": {
    "self": {
      "_id": "users/c3a6062f-b070-4e39-8b2a-9d1354e9dccb",
      "_key": "c3a6062f-b070-4e39-8b2a-9d1354e9dccb",
      "displayName": "redacted",
      "email": "redacted"
    }
  }
}
(Here's the resolver if it matters. I've debugged the dickens out of it and have come to the conclusion that it works fine.)
const query: QueryResolvers = {
  self: (_, __, context) => context.user,
};
I'm still pretty new to Federation, so I apologize if there's an obvious answer. But given what I've related, is there any way to
allow type duplication across the several subgraphs to be stitched together, and is there a way to
still keep isolated, generated types for each service?
I've looked at various possible ways to resolve this issue, exploring the extends keyword and wondering whether extending the User type def in all but one of the services would leave just one "master" User type def: either that didn't work or I did something wrong. I've only got the vaguest idea of what's going on, and I'm guessing the Gateway is confused about which type, and from which service, it's supposed to use in order to return a response.
Here are various relevant packages and versions which might help solve the issue:
Gateway
"@apollo/gateway": "^2.0.1",
"graphql": "^16.4.0"
GQL Micro services
"graphql": "^16.4.0",
"@graphql-codegen/cli": "^2.6.2",
"@apollo/federation": "^0.36.1"
Any help is immensely appreciated, even if it means what I want isn't possible! If more information and / or code is required I will be happy to give it.
I didn't solve the question of allowing duplication, but ultimately I didn't want to have to do it anyway, and since I don't think I even can now, that's fine.
Instead, I was able to find a much more elegant solution!
We just have to point GraphQL Codegen's schema field in the various codegen.yml files to whichever .graphql sources are required. That way, we get the types we need to use, and we prevent usurpation of single sources of truth by not redeclaring type defs. So, happy ending. 🥳
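In case it helps anyone, here is a rough sketch of what one service's codegen.yml could look like under this approach (the paths, file names, and plugin list are assumptions for illustration, not taken from our actual setup):

```yaml
# codegen.yml for one subgraph (hypothetical paths)
schema:
  - ../shared/users.graphql   # shared single-source-of-truth type defs
  - ./src/**/*.graphql        # service-local type defs
generates:
  src/generated/types.ts:
    plugins:
      - typescript
      - typescript-resolvers
```

Each service keeps its own generated types.ts while the shared .graphql files remain the single source of truth.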
Great discussion y'all

Graphql type with id property that can have different values for same id

I was wondering if an object type that has an id property has to have the same content given the same id. At the moment the same id can have different content.
The following query:
const query = gql`
  query products($priceSelector: PriceSelectorInput!) {
    productProjectionSearch(priceSelector: $priceSelector) {
      total
      results {
        masterVariant {
          # If you do the following it will work
          # anythingButId: id
          id
          scopedPrice {
            country
          }
        }
      }
    }
  }
`;
If the PriceSelectorInput is {currency: "USD", country: "US"} then the result is:
{
  "productProjectionSearch": {
    "total": 2702,
    "results": [
      {
        "name": "Sweater Pinko white",
        "masterVariant": {
          "id": 1,
          "scopedPrice": {
            "country": "US",
            "__typename": "ScopedPrice"
          },
          "__typename": "ProductSearchVariant"
        },
        "__typename": "ProductProjection"
      }
    ],
    "__typename": "ProductProjectionSearchResult"
  }
}
If the PriceSelectorInput is {currency: "EUR", country: "DE"} then the result is:
{
  "productProjectionSearch": {
    "total": 2702,
    "results": [
      {
        "name": "Sweater Pinko white",
        "masterVariant": {
          "id": 1,
          "scopedPrice": {
            "country": "DE",
            "__typename": "ScopedPrice"
          },
          "__typename": "ProductSearchVariant"
        },
        "__typename": "ProductProjection"
      }
    ],
    "__typename": "ProductProjectionSearchResult"
  }
}
My issue is that masterVariant of type ProductSearchVariant has an id of 1 in both cases, but different values for scopedPrice. This breaks Apollo cache's defaultDataIdFromObject function, as demonstrated in this repo. My question is: is this a bug in Apollo, or would this be a violation of the GraphQL standard in the type definition of ProductSearchVariant?
TLDR
No, it does not break the spec. The spec mandates absolutely nothing with regard to caching.
Literature for people that may be interested
From the end of the overview section
Because of these principles [... one] can quickly become productive without reading extensive documentation and with little or no formal training. To enable that experience, there must be those that build those servers and tools.
The following formal specification serves as a reference for those builders. It describes the language and its grammar, the type system and the introspection system used to query it, and the execution and validation engines with the algorithms to power them. The goal of this specification is to provide a foundation and framework for an ecosystem of GraphQL tools, client libraries, and server implementations -- spanning both organizations and platforms -- that has yet to be built. We look forward to working with the community in order to do that.
As we just saw, the spec says nothing about caching or implementation details; that's left to the community. The rest of the paper gives details on how the type system, the language, and requests and responses should be handled.
Also note that the document does not mention which underlying protocol is used (although commonly it's HTTP). You could effectively run GraphQL communication over a USB device or over infrared light.
We hosted an interesting talk at our tech conferences which you might find interesting. Here's a link:
GraphQL Anywhere - Our Journey With GraphQL Mesh & Schema Stitching • Uri Goldshtein • GOTO 2021
If we "Ctrl+F" our way through the document looking for terms such as "cache" or "ID", we find the following section, which I think helps us reach a conclusion here:
ID
The ID scalar type represents a unique identifier, often used to refetch an object or as the key for a cache. The ID type is serialized in the same way as a String; however, it is not intended to be human‐readable. While it is often numeric, it should always serialize as a String.
Result Coercion
GraphQL is agnostic to ID format, and serializes to string to ensure consistency across many formats ID could represent, from small auto‐increment numbers, to large 128‐bit random numbers, to base64 encoded values, or string values of a format like GUID.
GraphQL servers should coerce as appropriate given the ID formats they expect. When coercion is not possible they must raise a field error.
Input Coercion
When expected as an input type, any string (such as "4") or integer (such as 4) input value should be coerced to ID as appropriate for the ID formats a given GraphQL server expects. Any other input value, including float input values (such as 4.0), must raise a query error indicating an incorrect type.
It mentions that such a field is commonly used as a cache key (and that's the default cache key for the Apollo family of GraphQL implementations), but it doesn't tell us anything about consistency of the returned data.
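The quoted coercion rules can be illustrated with a small sketch (plain TypeScript; a real server would plug this into its scalar handling). Note that in JavaScript 4.0 and 4 are the same number, so only non-integral floats can be rejected here:

```typescript
// Input coercion for ID per the quoted spec text: strings pass through,
// integers are serialized as strings, other values raise an error.
function coerceIdInput(value: unknown): string {
  if (typeof value === "string") return value;
  if (typeof value === "number" && Number.isInteger(value)) return String(value);
  throw new TypeError(`cannot coerce ${JSON.stringify(value)} to ID`);
}
```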
Here's the link for the full specification document for GraphQL
Warning! Opinionated - My take on ID's
Of course, nothing I am about to say has anything to do with the GraphQL specification.
Sometimes an ID is not enough information to decide whether to cache something. Let's think about user searches:
If I have a FavouriteSearch entity that has an ID in my database and a field called textSearch, I'd commonly like to expose a property results: [Result!]! in my GraphQL schema referencing all the results that this specific text search yielded.
These results are very likely to differ between the moment I make the search and five minutes later when I revisit my favourite search (think of a text search on a platform such as TikTok, where users may massively upload content).
So based on this definition of the FavouriteSearch entity, it makes sense that the caching behavior is rather unexpected.
If we think about the problem from a different angle, we might want a SearchResults entity, which could have an ID and a timestamp, plus a join table referencing all the posts related to the initial text search; in that case it would make sense to return consistent content for the results property in our GraphQL schema.
The thing is, it depends on how we define our entities, and that is ultimately not related to the GraphQL spec.
A solution for your problem
You can specify how Apollo generates the key to be used later as the cache key, as @Matt already pointed out in the comments. You may want to tap into that and override the behavior for entities whose __typename equals your masterVariant property's type, and return NO_KEY (or similar) for all of them, in order to avoid your ApolloClient caching those specific fields.
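As a sketch of that override, here is a plain TypeScript function mimicking the shape of Apollo's defaultDataIdFromObject (not the real import):

```typescript
interface StoreObject {
  __typename?: string;
  id?: string | number;
  _id?: string | number;
}

// Returning undefined tells the cache not to normalize the object;
// it is then stored inline under its parent instead of by ID.
function dataIdFromObject(obj: StoreObject): string | undefined {
  if (obj.__typename === "ProductSearchVariant") return undefined;
  const id = obj.id ?? obj._id;
  return id != null ? `${obj.__typename}:${id}` : undefined;
}
```

In Apollo Client 3, the equivalent per-type switch is typePolicies: { ProductSearchVariant: { keyFields: false } } on the InMemoryCache.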
I hope this was helpful!

Best practice for custom REST actions with API Platform

This top-ranking Stackoverflow answer from 10 years ago suggests using POST /users/:user_id/reset_password to initiate a password reset.
I know API Platform recommends against using custom operations. The docs page for the Symfony Messenger integration uses a ResetPasswordRequest entity (with a username field). That makes sense to me.
Say I have a User and Notification entity and maybe a joined UserNotification (with a hasRead) entity as well. I want to expose an endpoint on my API to mark all the notifications older than a month as read. So I might make a ClearOldNotification entity, again with a username field.
Another example might be I want a report showing Customers that haven't been contacted due to some criteria. So I want to join the tables in the server and return a custom JSON data object. Again I could make a CustomerNoContact entity.
The issue I see is that I now have a distinction between pure entities, like a User or Product, as opposed to these service type entities.
Is this method of making individual entities classes for actions the recommended, best practice for Symfony and API Platform?
Should I be name-spacing (or something) these entities differently within my app to distinguish them?
I could imagine on a really large and complex application you could have hundreds of these service entities, compared to the pure entities. Is that expected or desired?
Can anyone recommend some good resources on this pattern?
You're asking for a best practice for two different use cases. Let's break it down:
ClearOldNotification
I think you've already found the solution: using Messenger. As you've read, there is an example in the documentation for this use case:
#[ApiResource(collectionOperations: [
    "post", "get", "delete",
    "reset_password" => [
        "status" => 202,
        "messenger" => "input",
        "input" => ResetPasswordRequest::class,
        "output" => false,
        "method" => "POST",
        "path" => "/users/reset_password",
    ],
])]
final class User
{
}
The ResetPasswordRequest class is a Data Transfer Object (DTO). In your ResetPasswordRequestHandler you should inject the service that is responsible for resetting the password and sending an email.
CustomerNoContact
This could be a Custom (Doctrine ORM) Filter.

Correct way to handle entities anytime in middle of conversation

I have started working with LUIS and the Bot Framework recently, after having had some experience with API.AI / Google Home development.
In the sample below (taken from https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-dialog-waterfall), a step-by-step interaction with a user is exemplified: first it asks for a date, then a number, then a name for the reservation, and so on.
var bot = new builder.UniversalBot(connector, [
    function (session) {
        session.send("Welcome to the dinner reservation.");
        builder.Prompts.time(session, "Please provide a reservation date and time (e.g.: June 6th at 5pm)");
    },
    function (session, results) {
        session.dialogData.reservationDate = builder.EntityRecognizer.resolveTime([results.response]);
        builder.Prompts.text(session, "How many people are in your party?");
    },
    function (session, results) {
        session.dialogData.partySize = results.response;
        builder.Prompts.text(session, "Whose name will this reservation be under?");
    },
    function (session, results) {
        session.dialogData.reservationName = results.response;
        // Process request and display reservation details
        session.send("Reservation confirmed. Reservation details: <br/>Date/Time: %s <br/>Party size: %s <br/>Reservation name: %s",
            session.dialogData.reservationDate, session.dialogData.partySize, session.dialogData.reservationName);
        session.endDialog();
    }]);
In my code I have a similar multi-parameter dialog, but I want to allow the user to answer with multiple pieces of information at once in any of the responses. For example, after providing the reservation date the user could say "a reservation for Robert for 10 people", so both the party size and the reservation name are given at the same time.
To identify these entities in the text, I suppose I have to call LUIS and get the entities resolved from the session context. I notice that the bot object has a recognize method that I think can work for that.
My question is how to organize the structure of the code and the LUIS utterances and entities. Right now I have an intent with some entities and several sample utterances, but if I send this 'partial' user sentence I think it will not be mapped to the same intent and may not identify the entities from such a small sentence.
How should I handle this? Do I need to provide samples for the intent with these partial sentences, which may contain only some of the entities?
Thanks
Yes, you should provide samples for all the utterances that you want your intent to recognize. Not a million samples, just enough to get everything trained.
Then, the next problem you might want to solve is asking for the information for those entities missing from the utterance. You can do that manually, or you could go one step further and explore the LUIS Action Binding library.
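A language-agnostic way to think about the manual approach is slot filling: merge whatever entities LUIS recognized into the collected state, then prompt only for what is still missing. A minimal sketch (plain TypeScript, hypothetical slot names; a real bot would call LUIS to produce the recognized entities):

```typescript
interface Reservation {
  date?: string;
  partySize?: number;
  name?: string;
}

// Merge newly recognized entities into the collected state, skipping empties.
function mergeEntities(current: Reservation, recognized: Partial<Reservation>): Reservation {
  const filled = Object.fromEntries(
    Object.entries(recognized).filter(([, v]) => v != null),
  );
  return { ...current, ...filled };
}

// Ask only for the first missing slot; null means everything is collected.
function nextPrompt(r: Reservation): string | null {
  if (!r.date) return "Please provide a reservation date and time";
  if (!r.partySize) return "How many people are in your party?";
  if (!r.name) return "Whose name will this reservation be under?";
  return null;
}
```

This way "a reservation for Robert for 10 people" fills two slots at once and the dialog skips straight to confirmation.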

Spring Data MongoDB - Embedded Document as Reference in Other Document

I'd like to know if it's possible (or even correct) to use embedded documents as reference in other documents.
I know I can move the embedded document to its own collection, but the main goal is to have the performance benefit of embedded documents while also avoiding duplication.
For example:
User
{
_id: ObjectId("4fed0591d17011868cf9c982"),
_class: "User"
...
addresses: [ {
_id: ObjectId("87KJbk87gjgjjygREewakj86"),
_class: "Address",
...
} ]
}
Order
{
  _id: ObjectId("gdh60591d123487658cf9c982"),
  _class: "Order",
  ...
  address: ObjectId("87KJbk87gjgjjygREewakj86")
}
Your case reminds me of the typical relational approach, which I was a victim of too when starting with document-oriented DBs. All of your entities in the example are referenced; there is no redundancy anymore.
You should get used to the idea of letting go of normalization and starting to duplicate data. In many cases it is hard to determine which data should be referenced and which should be embedded. Your case tends to be quite clear, though.
Without knowing your entire domain model: the address seems to be a perfect candidate for a value object. Do not maintain an Address collection; embed it within the user object. In Order, you could make a reference to the user, which implicitly gives you the address object and might make sense, since an order is made by a user.
But... I recommend that you embed the address entirely in the Order. First, it is faster, since you don't need to resolve a reference. Second, the address in shipped orders should never change! Consider orders from last year: if you only held a reference to the address, you would lose the information about where they were shipped once the user changes his address.
Suggestion: always take a snapshot of the address and embed it in the Order. Save the MongoDB ID of the user as a regular string (no @DBRef) within the Order. If a user changes his address, you can query for all non-shipped orders of that user and amend the address.
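The snapshot idea in a few lines (plain TypeScript with hypothetical field names; in Spring Data these would be the mapped document classes):

```typescript
interface Address { street: string; city: string; }
interface User { id: string; address: Address; }
interface Order { id: string; userId: string; shipTo: Address; }

// Copy the user's current address into the order at creation time,
// so later address changes do not rewrite order history. The user is
// referenced by a plain string ID, not a DBRef.
function createOrder(user: User, orderId: string): Order {
  return { id: orderId, userId: user.id, shipTo: { ...user.address } };
}
```

Because shipTo is a copy, changing the user's address afterwards leaves already-created orders untouched.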
Since you asked if this is even correct, I would say, gently, "No." At least not typically.
But if you did want to insist on using an embedded address from user:
You can reference the user's embedded address in the Order object, just not the way you might think! If you store the id of the user in the order (it should already be there if Order belongs_to User), then you merely use user.address instead of copying the address instance as you have done.
ALTERNATIVE
I hope to illustrate a better approach to modeling the domain...
A more common approach is to instantiate a new order object, using the user's address as the default "ship to" address for the order, yet allow the user to override the shipping address if desired. In theory, each order could have a different "ship to" address.
Just because two classes have an address, does not mean they are necessarily the same address.
COMMENTARY
Orders are more of a historical document than one that changes. Therefore, orders are generally immutable once placed, yet your model allows the address to change every time the user changes their address. That change ripples into the orders and would be incorrect insofar as normal order business logic goes.
Assume your address last year was in Spain and you had Order #1 show Spain when you ran a report of Orders last year. Imagine if your address this year is now Portugal and Order #1 now shows Portugal in the same report. That would be factually incorrect.
BTW: @Matt gave you the tip that, from a "problem domain" perspective, you likely do not want to model it as you have. I am merely elaborating on that...
Since I got no answer, I will post how I did it here. If you have a better solution, I am happy to hear it.
It looks like there's no way to create/reference a collection inside another collection, so I had to extract the addresses from the user collection into their own collection and create references in the User and Order collections, as mentioned here. I was expecting something more flexible, but couldn't find it.
User
{
  _id: ObjectId("4fed0591d17011868cf9c982"),
  _class: "User",
  ...
  addresses: [ {
    "$ref": "addresses",
    "$id": ObjectId("87KJbk87gjgjjygREewakj86")
  } ]
}
Address
{
  _id: ObjectId("87KJbk87gjgjjygREewakj86"),
  ...
}
Order
{
  _id: ObjectId("gdh60591d123487658cf9c9867"),
  _class: "Order",
  ...
  address: {
    "$ref": "addresses",
    "$id": ObjectId("87KJbk87gjgjjygREewakj86")
  }
}
