Apollo client and DGS server integration conflict between two schemas

To use the Apollo client, we need to download the "schema.graphqls" file of the server we are going to send queries to; it is stored under "main/graphql". To run the DGS server, we need our own "schema.graphqls" file, which is created under "resources/schema" as required by DGS. The two files define some of the same types, and on the second file I get an error like "'Student' type tried to redefine existing 'Student' type". I want these schemas to be independent of each other. How can I achieve that?
I have tried manually specifying the path for the DGS schema, but it didn't work.
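The manual path configuration I tried was along these lines (a sketch of application.yml, assuming the DGS dgs.graphql.schema-locations property; the path shown is illustrative):

# application.yml - restrict DGS to its own schema folder so it does not
# pick up the Apollo client schema.
dgs:
  graphql:
    schema-locations:
      - classpath*:schema/**/*.graphqls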

Related

How can I create a NestJS GraphQL server with a dynamic schema with respect to the user?

I am using NestJS for the server and I am new to GraphQL. I want to create a GraphQL server, but the schema is defined in the GraphQLModule in app.module. I am using the schema-first approach.
In app.module, when importing the GraphQLModule, the typePaths property tells Nest which schema files to generate typings from.
But I don't have any one fixed schema, because I want my users to enter anything they want and fetch data using GraphQL, where the typings should be specific to each user.
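For context, the static schema-first setup in app.module looks roughly like this (a sketch; paths are illustrative, and newer versions of @nestjs/graphql also require an explicit driver option):

import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { join } from 'path';

@Module({
  imports: [
    GraphQLModule.forRoot({
      // typePaths points at the .graphql schema files ...
      typePaths: [join(process.cwd(), 'src/**/*.graphql')],
      // ... and definitions controls where the generated TypeScript typings go.
      definitions: {
        path: join(process.cwd(), 'src/graphql.ts'),
      },
    }),
  ],
})
export class AppModule {}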
Things I tried:
I tried to rewrite the schema using filesystem methods from a service, but to update the typings from the schema the Nest server needs to restart and generate the typings again.
Please can anyone suggest a guide or an approach for achieving dynamic typings? I want a server that shows a GraphQL playground, but each user should only be allowed to query with respect to their own data.
For example, for user 1 the highlighted box could be the schema, but for a different user the schema should be specific to that user: user 1 should see this schema and should query only using this schema.
Related images are attached in the link.
Any guide would be appreciated, Thanks!

How to log calls to deprecated fields in Apollo Server

We marked some fields in our schema with the @deprecated directive. Now we want to log whether these fields are still being used by some of our clients. What would be the best way to do this, without using Apollo Studio?
If you have access to the client code, then you can utilize GraphQL Inspector to check for deprecated usage. Using the CLI, you just do:
graphql-inspector validate DOCUMENTS SCHEMA
where DOCUMENTS is a glob pattern used to match the files containing the queries and SCHEMA is a pointer to the schema used for validation. The files containing the queries can be .graphql files or .js/.ts files. The schema pointer can be a URL to your schema or one or more .graphql files with your schema's type definitions. See here and here for additional ways to provide the schema and documents.
If you don't have access to the client code, or specifically need to log deprecated usage on every request, then you can write your own Apollo Server plugin and utilize GraphQL Inspector's programmatic API instead to validate each request's parsed document as it comes in. The parsed document will be available beginning with the validationDidStart lifecycle hook. See the docs for a complete example of how to write your own plugin.
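A rough sketch of such a plugin is below. For brevity it walks the parsed document with graphql-js's TypeInfo rather than Inspector's programmatic API, but the overall plugin shape is the same (the plugin name is illustrative, and the import path for ApolloServerPlugin depends on your Apollo Server version):

// For Apollo Server 2/3; in Apollo Server 4 the type comes from '@apollo/server'.
import { ApolloServerPlugin } from 'apollo-server-plugin-base';
import { TypeInfo, visit, visitWithTypeInfo } from 'graphql';

const logDeprecatedFieldsPlugin: ApolloServerPlugin = {
  async requestDidStart() {
    return {
      // The parsed document is first available from this hook onwards.
      async validationDidStart({ schema, document, request }) {
        const typeInfo = new TypeInfo(schema);
        const hits: string[] = [];
        visit(
          document,
          visitWithTypeInfo(typeInfo, {
            Field() {
              const field = typeInfo.getFieldDef();
              const parent = typeInfo.getParentType();
              // deprecationReason is set for any field marked with @deprecated.
              if (field?.deprecationReason != null && parent) {
                hits.push(`${parent.name}.${field.name}`);
              }
            },
          })
        );
        if (hits.length > 0) {
          console.warn(
            `Operation ${request.operationName ?? '(anonymous)'} uses deprecated fields: ${hits.join(', ')}`
          );
        }
      },
    };
  },
};

export default logDeprecatedFieldsPlugin;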

Extract GraphQL queries sent by a browser application with Apollo Client

I am trying to simplify the process of exporting GraphQL queries sent by my application for documentation purposes. For the record, I want to be able to paste those queries into Postman collections.
Here are my different approaches:
Relying on .graphql files: first, it's still very difficult to set up with a full-fledged TypeScript + Webpack + Babel setup (using Next.js). Anyway, it does not provide variables, so you only have half the query.
Relying on the network tab. From there we can copy query content and import it into Postman. Combined with Cypress it could provide an awesome workflow. It works OK, but Apollo Client sends queries as JSON objects, which are difficult to interpret.
I tried to use the "application/graphql" content type. It's much more readable and available in Postman. BUT it is non-standard, and thus not available in Apollo Client.
So my question is rather open, but what are the possibilities for extracting GraphQL queries (and variables) sent by my browser and injecting them into Postman?
The most promising solutions are enabling "application/graphql" client-side or converting the JSON representation back to a string representation, but I could explore another possibility (e.g. using Apollo Engine as an intermediary).
A way to do this is to use the apollo CLI tool. It includes a client:extract command that extracts all of the GraphQL operations/documents in your application into a file. You run the tool on your JS(X)/TS(X) files and it extracts the GraphQL documents into a file that looks like this (this output is the result of pointing the tool at a single file containing a single query):
{
  "version": 2,
  "operations": [
    {
      "signature": "b4f318e6aebcc3631bc88eedef09c6001bb8c310917e97ee6df4a99e17c3c056",
      "document": "query BootstrapQuery{user:viewer{__typename delivery_time_1 delivery_time_2 devices{__typename fcm_token id notification{__typename enabled}}has_password id label location name next_delivery_string oauths{__typename email id name picture provider}plan plan_billing_service plan_expires plan_since plan_will_renew profile_picture recovery_email timezone username}}",
      "metadata": {
        "engineSignature": ""
      }
    }
  ]
}
You can then use that file however you want.
In my case, I use this tool to publish an allow-list of operations to Hasura. I'm not sure what you mean by injecting queries into Postman, but I think this tool may provide you with an automated start that would be an improvement over manual copy/pasting.
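If the manual copying is the pain point, a small script over that extracted file can pretty-print every operation ready to paste into a Postman request body (a sketch; the manifest file name is whatever you passed to client:extract):

import { readFileSync } from 'fs';
import { parse, print } from 'graphql';

// Shape of the file produced by `apollo client:extract`, as shown above.
const manifest = JSON.parse(readFileSync('extracted-operations.json', 'utf8')) as {
  operations: { document: string }[];
};

for (const op of manifest.operations) {
  // parse/print re-formats the single-line document into readable GraphQL text.
  console.log(print(parse(op.document)));
  console.log('---');
}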

MarkLogic Spring Boot - Installing a REST Endpoint

I am currently using the MarkLogic Spring Boot demo. So far I have been able to add indexes, facets, front-end logic, etc. just fine.
Right now, I am trying to layer some semantic logic into a REST endpoint.
I wrote a simple query in Query Console and attempted to add it to the src/main/ext folder so that it would get deployed by the ml-gradle bootRun.
Right now, the file gets deployed to the test-modules database and is visible once saved (I can see it in Explorer under the URI /ext/my-endpoint). I also tried adding a folder named rest-api, but that just adds it at /ext/rest-api/my-endpoint.
At the top of the endpoint I have it declared as
module namespace ext = "http://marklogic.com/rest-api/resource/my-endpoint";
However, when I navigate to the URL it should be living at, http://localhost:8090/LATEST/resources/my-endpoint, it tells me it does not exist.
So I can see it in the modules database, but I can't use it over HTTP. The query works in Query Console (and is rather trivial: an and-query of json-property-word-queries).
My question is:
How can I update the marklogic-spring-boot framework to properly deploy REST endpoints?
So it seems I figured it out.
Manually creating the file isn't registering it through the distribution workflow properly.
Instead I create the resource using ml-gradle
gradle mlCreateResource -PresourceName=my_endpoint
This will create a new folder called services and create the endpoint for me, whose code can then be overwritten.
I'm still not sure what Gradle is doing that's special, so I don't know what the proper way to do this manually would be, but at least it works.
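If I had to guess, mlCreateResource doesn't just copy the module into the modules database; it also registers it as a REST resource extension through the MarkLogic REST API. Done by hand, that would presumably look something like this (an assumption; credentials, port, and file path are illustrative):

curl --digest -u admin:admin -X PUT \
  -H "Content-Type: application/xquery" \
  --data-binary @src/main/ext/my-endpoint.xqy \
  "http://localhost:8090/LATEST/config/resources/my-endpoint"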

Combining metadata from multiple sources

In a SPA using Breeze, how would I go about combining metadata from multiple sources for related data so that I can use them in one EntityManager on the client? For example, I might have the following:
Entity Framework Metadata from WebAPI controller (e.g. Account)
Custom Metadata from DTOs (e.g. Invoices)
Data from a third party service with metadata provided from client side metadata (e.g. Invoice transmission result)
In each case the data has related properties, so I might want to be able to use Account.Transactions.TransmissionResults.
UPDATE
I have tried several ways of getting this to work, but to no avail. From Jay's answer, it is not possible at present to update the metadata from the server once it has been retrieved, so unless and until that changes (see the Breeze User Voice issue) I am left with one of the following approaches:
1. Retrieve metadata from the server via Entity Framework and add metadata on the client for the extra entities. This worked to a degree, but I could not add navigation properties from entity types added on the client to entity types retrieved from the server, because I cannot add the foreign-key association to the entity retrieved from the server; again, this comes back to the need to modify metadata after it has been retrieved.
2. Write the complete metadata by hand, which will work, but it makes maintainability that much harder, and it seems wrong to manually write mostly the same code that the designer would write.
3. Generate most of the code from Entity Framework as described in the docs and then update it afterwards to add in the custom entities. This has similar issues to option 2; it seems hacky.
Has anyone else tried something similar? Is there something I am missing? There could well be, since I've only just started with Breeze and JS.
Thanks
A Breeze EntityManager can have metadata from any number of DataService endpoints, and you can manually add metadata (new EntityTypes) on the client at any point. The only current restriction is that once you have metadata from a specific service, you can't change it. (We are considering revisiting this last restriction.)
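For illustration, manually adding an EntityType on the client looks roughly like this (a sketch; names such as TransmissionResult are illustrative, and the exact config shape may vary slightly between Breeze versions):

import * as breeze from 'breeze-client';

// Define an extra EntityType purely on the client and add it to the
// manager's MetadataStore alongside the server-supplied metadata.
const manager = new breeze.EntityManager('api/breeze');
const store = manager.metadataStore;

store.addEntityType({
  shortName: 'TransmissionResult',
  namespace: 'MyApp.Client',
  dataProperties: {
    id: { dataType: breeze.DataType.Int32, isPartOfKey: true },
    invoiceId: { dataType: breeze.DataType.Int32 },
    status: { dataType: breeze.DataType.String },
  },
});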
So the question is, what are you trying to do that you can't right now?
