I am trying to have some "generic" info in my object.
Basically, what I want to do is have two different kinds of objects, let's say image and document.
Both have different fields except for the ID.
I was wondering what the best approach would be for defining my datamodel.prisma so that when I use my GraphQL model (in Go) I can use a generic interface like data.
Is it even possible? If not, what solution would be best?
I know that GraphQL has interfaces, but I'm not sure how to define one in Prisma.
Ideas?
Interface and union types are not currently supported in Prisma (see feature requests here and here).
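For reference, here is a rough sketch (in TypeScript with graphql-tools, to keep the illustration short; the idea carries over to Go) of what the interface looks like at the GraphQL layer. The field names url and title are made up for illustration, and in datamodel.prisma the two concrete types would still have to be modeled separately:

```typescript
import { makeExecutableSchema } from 'graphql-tools';

// The interface exists only in the GraphQL schema; Prisma's datamodel
// cannot express it, so Image and Document stay separate Prisma types.
const typeDefs = `
  interface Node {
    id: ID!
  }

  type Image implements Node {
    id: ID!
    url: String!
  }

  type Document implements Node {
    id: ID!
    title: String!
  }
`;

const resolvers = {
  Node: {
    // Tell GraphQL which concrete type a given value is.
    __resolveType(obj: { url?: string }) {
      return obj.url !== undefined ? 'Image' : 'Document';
    },
  },
};

export const schema = makeExecutableSchema({ typeDefs, resolvers });
```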
I am new to NestJS and GraphQL, and I am learning by going over some tutorials. There appears to be an inconsistency in the usage of the terminology "model" vs. "entity". The NestJS schematics resource generator for GraphQL code first produces entities, yet the examples shown on their website use models.
produces entities:
nx generate @nestjs/schematics:resource generated --language=ts --type=graphql-code-first
uses models, with no mention of entities in the code-first approach:
https://docs.nestjs.com/graphql/resolvers
Which terminology is most appropriate?
Thank you,
Michael
Both are generally correct. It comes down to naming preferences.
I view entities as database entities, or database table maps: they map your database data to a class representation that your code will understand. Models can also be used for this, which I believe is the term that Sequelize and Mongoose prefer.
Models, as described in the docs you linked, are generally your DTOs, your schema objects that you expect the API to accept and respond with.
You'll notice that the generator also generates two @InputType() files, which will be more closely tied to your incoming DTO, while the entity.ts will be closer to your response DTO.
So, both are correct, and it comes down to naming preference.
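To make the distinction concrete, here is a minimal code-first sketch of the two roles described above (the Post and CreatePostInput names are illustrative, not what the generator emits):

```typescript
import { ObjectType, InputType, Field, Int } from '@nestjs/graphql';

// "Model" / "entity": the shape the API responds with (response DTO).
@ObjectType()
export class Post {
  @Field(() => Int)
  id: number;

  @Field()
  title: string;
}

// Input type: the shape the API accepts (incoming DTO).
@InputType()
export class CreatePostInput {
  @Field()
  title: string;
}
```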
Is it possible to mark fields as read-only in a .proto file such that when the code is generated, these fields do not have setters?
Ultimately, I think the answer here will be "no". There's a good basic guidance rule that applies to DTOs:
DTOs should generally be as simple as possible to convey the data for serialization in a manner well-suited to the specific serializer.
if that basic model is sufficient for you to work with above that layer, then fine
but if not: do not fight the serializer; instead, create a separate domain model above the DTO layer, and simply map between the two models before serialization or after deserialization
Or, put another way: the fact that the generator doesn't want to expose read-only members is irrelevant, because if you need something exotic, you shouldn't be using the generated type outside of the code that directly touches serialization. So, in your domain type that mirrors the DTO, make it read-only there.
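Here is a small sketch of that mapping idea, in TypeScript for brevity since the principle is language-agnostic (UserDto and User are hypothetical names; the DTO stands in for the generated, fully-settable type):

```typescript
// What the code generator gives you: everything readable and writable.
interface UserDto {
  id: number;
  name: string;
}

// The domain type you actually pass around: read-only by construction.
class User {
  constructor(
    readonly id: number,
    readonly name: string,
  ) {}

  // Map at the serialization boundary, in both directions.
  static fromDto(dto: UserDto): User {
    return new User(dto.id, dto.name);
  }

  toDto(): UserDto {
    return { id: this.id, name: this.name };
  }
}
```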
As for why read-only fields aren't usually a thing in serialization tools: you presumably want to be able to give them a value. Serialization tools usually want to be able to write everything they can read, and read everything they can write.
Minor note for completeness since you mention C#: if you are using a code-first approach with protobuf-net, it'll work fine with {get;}-only auto-props, and with {get;}-only manual props if all public members trivially map to an obvious constructor.
I'm not quite sure what wording I should be searching for on this.
I have a GraphQL schema which wraps a group of services using graphql-link-schema to perform the data resolution on the client side. The schema is intended to be built against a separate reference schema. How can I programmatically validate that my implementation matches the reference?
For bonus points- is it possible to determine whether a schema is a superset of another?
Thanks in advance (:
It's an interesting use case, but it's a bit unclear how validation like that would work. What causes validation to fail? Any differences between the two schemas? Extra types? Extra fields on existing types? Differences in return types? Differences in arguments or argument types?
Depending on your answers to the above questions, though, you may be able to cobble together your own validation function using the utility functions available here. Besides the main findBreakingChanges function (a usage sketch follows this list), some of the utility functions available in that module:
findRemovedTypes
findTypesThatChangedKind
findFieldsThatChangedTypeOnObjectOrInterfaceTypes
findFieldsThatChangedTypeOnInputObjectTypes
findTypesRemovedFromUnions
findValuesRemovedFromEnums
findArgChanges
findInterfacesRemovedFromObjectTypes
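For instance, here is a rough sketch of a validation check built on findBreakingChanges, assuming both schemas are available as SDL strings (the types shown are made up). Passing the reference schema first reports everything the implementation removes or narrows:

```typescript
import { buildSchema, findBreakingChanges } from 'graphql';

const reference = buildSchema(`
  type User {
    id: ID!
    name: String
  }
  type Query {
    user(id: ID!): User
  }
`);

// This implementation drops User.name, which should fail validation.
const implementation = buildSchema(`
  type User {
    id: ID!
  }
  type Query {
    user(id: ID!): User
  }
`);

// Each reported change has a { type, description } shape.
const problems = findBreakingChanges(reference, implementation);
if (problems.length > 0) {
  throw new Error(
    'Schema mismatch:\n' + problems.map((p) => p.description).join('\n'),
  );
}
```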
If you have a reference or base schema available, though, rather than validating against it, you might also consider extending it when building the second schema. In doing so, you would effectively guarantee that the second schema matches the first except in whatever ways you intentionally deviate from it (by extending existing types, etc.). You could use extendSchema for relatively simple changes, or something like graphql-tools' mergeSchemas for more complicated changes.
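A minimal sketch of that extension approach (the hello and goodbye fields are placeholders): the second schema starts from the reference and can only add to it, so it is a superset by construction:

```typescript
import { buildSchema, extendSchema, parse } from 'graphql';

const base = buildSchema(`
  type Query {
    hello: String
  }
`);

// extendSchema takes the base schema plus a parsed extension document
// and returns a new schema containing both.
const extended = extendSchema(
  base,
  parse(`
    extend type Query {
      goodbye: String
    }
  `),
);
```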
We're providing a public API that we use internally but also provide to our SaaS users as a feature. I have a Model, a ModelSerializer and a ModelViewSet. Everything is functional but it's blurting out the Model help_text for the description in the API documentation.
While this works for some fields, we would like to be a lot more explicit for API users, providing examples, not just explanations or guidance.
I realise I can redefine each field in a serializer (with the same name) and just add a new help_text argument, but this is pretty tedious work.
Can I provide (e.g.) a dictionary of field names and their help text?
If not, how can I intercede in the documentation process to make something like that work?
Also, related: is there a way to provide a full example for each viewset endpoint? E.g. showing what is submitted and returned, like a lot of popular APIs do (Stripe, for example). Or am I asking too much of DRF's documentation generation? Should I handle this all externally?
To override help_text values coming from the models, you'll need to use your own schema generator subclass and override get_path_fields. There you'd be able to prioritize a mapping on the viewset (as you envision) over the model fields' help_text values.
On adjusting the example generation: you could define a JSON language which just deals with raw JSON and illustrate the request side of things pretty easily; however, illustrating responses is difficult without really getting deep into the plumbing, as the default generated schema does not contain response structure data.
Imagine, for example, that you have front-end and back-end applications. They are written in different technologies; let's say the back end is in Python using Django and the front end is in TypeScript using Angular.
Now there will be some data that needs to be shared between the two: enums, serialized dictionaries of class instances, or the names of some fields.
Very quickly, a problem of data structure duplication and possible desynchronisation arises (e.g. you have to have the exact same enums on both platforms).
I was wondering, are there any "best practices" out there?
Like standardizing the data with XML or something?
Could you point me to some books / articles?
Could you share your knowledge of how you do this?
Thank you.
I know two ways to overcome this obstacle:
Write your own clients for consumers.
Use model contracts like RAML and generate models from declarations.
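As a small illustration of the contract idea (plain TypeScript rather than RAML, but the principle is the same): keep each shared definition in exactly one place and derive or generate everything else from it. The ORDER_STATUSES name is made up for the example:

```typescript
// shared/contracts.ts -- single source of truth for values both sides need.
// A build step on the back end can read this file (or a JSON export of it)
// and generate the matching Python enum, so the two cannot drift apart.
export const ORDER_STATUSES = ['draft', 'submitted', 'archived'] as const;

// Derive the union type from the array instead of writing it twice.
export type OrderStatus = (typeof ORDER_STATUSES)[number];

// Runtime guard for data arriving over the wire.
export function isOrderStatus(value: string): value is OrderStatus {
  return (ORDER_STATUSES as readonly string[]).includes(value);
}
```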
And you don't have to have the exact same enum or any other class on both platforms. There should be a layer that consumes and returns data to the outside world (the external client). This layer has its own models; everything that lies below should have its own as well. You can have many small objects stored in the database but return huge aggregates to the client, and those would be different models. Read about data models in a standard N-tier application.