Extract GraphQL queries sent by a browser application with Apollo Client

I am trying to simplify the process of exporting GraphQL queries sent by my application for documentation purposes. For the record, I want to be able to paste those queries into Postman collections.
Here are my different approaches:
Relying on .graphql files: first, it's still very difficult to set up with a full-fledged TypeScript + Webpack + Babel setup (using Next.js). More importantly, it does not provide the variables, so you only have half the query.
Relying on the network tab: from there, we can copy the query content and import it into Postman. Combined with Cypress it could provide an awesome workflow. It works OK, but Apollo Client sends queries as JSON objects, which are difficult to interpret.
I tried to use the "application/graphql" content type. It's way more readable and available in Postman. But it is non-standard, and thus not available in Apollo Client.
So my question is rather open: what are the possibilities for extracting the GraphQL queries (and variables) sent by my browser application and injecting them into Postman?
The most promising solutions seem to be enabling "application/graphql" client-side, or converting the JSON representation back to a string representation. But I could explore other possibilities (e.g. using Apollo Engine as an intermediary).
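For the "convert the JSON representation back to a string" route, a minimal sketch of what I have in mind (untested; assuming Apollo Client 3, with the endpoint URL made up): a custom link that prints each outgoing operation back to readable GraphQL, together with its variables, ready to paste into Postman.
import { ApolloClient, ApolloLink, HttpLink, InMemoryCache } from "@apollo/client";
import { print } from "graphql";

// Prints each outgoing operation as readable GraphQL text plus its
// variables before forwarding it to the terminating HTTP link.
const loggingLink = new ApolloLink((operation, forward) => {
  console.log(print(operation.query)); // the operation as a GraphQL string
  console.log(JSON.stringify(operation.variables, null, 2)); // its variables
  return forward(operation);
});

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: loggingLink.concat(new HttpLink({ uri: "https://example.com/graphql" })), // URL is a placeholder
});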

A way to do this is to use the apollo CLI tool. It includes a client:extract command that extracts all of the GraphQL operations/documents in your application into a file. You point the tool at your JS(X)/TS(X) files and it writes the GraphQL documents to a manifest that looks like this (this output is the result of pointing the tool at a single file containing a single query):
{
  "version": 2,
  "operations": [
    {
      "signature": "b4f318e6aebcc3631bc88eedef09c6001bb8c310917e97ee6df4a99e17c3c056",
      "document": "query BootstrapQuery{user:viewer{__typename delivery_time_1 delivery_time_2 devices{__typename fcm_token id notification{__typename enabled}}has_password id label location name next_delivery_string oauths{__typename email id name picture provider}plan plan_billing_service plan_expires plan_since plan_will_renew profile_picture recovery_email timezone username}}",
      "metadata": {
        "engineSignature": ""
      }
    }
  ]
}
You can then use that file however you want.
In my case, I use this tool to publish an allow-list of operations to Hasura. I'm not sure exactly what you mean by injecting queries into Postman, but this tool may give you an automated starting point that improves on manual copy/pasting.
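For reference, the extraction itself should be a single command along these lines (assuming an apollo.config.js that points at your client code; the output filename is arbitrary):
apollo client:extract manifest.json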

Related

Uploading PDF to AWS Lambda via API Gateway mangles the bits...why?

I have deployed an AWS Lambda function, written in Python, and an AWS API Gateway configuration that routes POST requests on an API endpoint to my function. I want to upload a PDF document to my function and have it store the document in an S3 bucket. The problem I have is that the payload of any POST request to my API is being UTF-8 encoded. I don't want that, but I can't figure out the magic mojo to disable encoding of the request payload.
I am testing using curl, with the following command line:
curl -XPOST https://xxxxxxxxxx.execute-api.us-west-1.amazonaws.com/test -H 'content-type: application/pdf' --data-binary @document.pdf
UPDATE: I just found the following article describing how API Gateway and Lambda support uploading binary data:
https://aws.amazon.com/blogs/compute/handling-binary-data-using-amazon-api-gateway-http-apis/
This article suggests that all of the complexities that I discussed in the initial formation of my question (still provided below) should not be necessary. All I should need to do to upload binary content to my Lambda function is insure that my request includes an appropriate Content-Type header. I was already doing that, but I massaged my Curl command a bit (modified above) to define my request in exactly the way that is done in this article. I still get UTF-8 encoded data and NOT base-64 encoded data. I tried uploading a jpeg file rather than a PDF so I was doing exactly what was done in the article. Still no love. I don't get it. This article demonstrates exactly what I'm doing. But I don't get the result it suggests I should. Ggggrrrr.
ORIGINAL POST:
I am using Terraform to define my deployment. I want to cause the PDF to not be encoded/mangled at all. This is my first time using API Gateway, and I'm obviously missing some bit of config. The one thing I'm doing specifically right now to say that I want incoming payloads to be treated as binary is via the binary_media_types argument to my API definition in Terraform:
resource "aws_api_gateway_rest_api" "proxy" {
  ...
  binary_media_types = [
    "application/pdf",
    "application/octet-stream",
    "*/*"
  ]
}
This sets the Binary Media Types configuration associated with the API I've defined. I've confirmed via the AWS Console that this setting is having the desired effect: I can see these types in the console. I should only need the first item in the list, but I've added the others while I try to figure out the problem. With the wildcard item, I believe it shouldn't matter what the incoming Content-Type is; all payloads should be treated as binary.
The other bit of config that I know about that might be important is the integration's contentHandling property. The key bit of the AWS docs that seems to explain all this is the table of contentHandling behaviors.
I think the case that applies to me is the "unspecified" value in that table for contentHandling, per what I say above, which says to me that I shouldn't need to do anything else. I've still tried setting the content_handling argument on the integration record of my Terraform config, like this:
resource "aws_api_gateway_integration" "proxy" {
  ...
  passthrough_behavior = "WHEN_NO_MATCH"
  content_handling     = "CONVERT_TO_BINARY"
}
I first tried specifying only the content_handling value. I've also tried setting it to "CONVERT_TO_TEXT", hoping to then get base64-encoded data. Neither has any effect. I've tried adding the passthrough_behavior value as shown, and also replacing "WHEN_NO_MATCH" with "WHEN_NO_TEMPLATES". Nothing I do changes the behavior. I haven't been able to figure out where these settings show up in the AWS console; if I knew they were necessary, I'd explore this further, but I don't think I need to set them.
What am I missing? How can I POST a PDF document to my AWS Lambda function through API Gateway and have the payload of the request not be converted in any way? TIA!
NOTE: I am aware of this Q/A: PDF Uploaded via AWS API Gateway getting corrupted. The answer there doesn't apply to me, as I need to avoid form-encoding the upload. The client code that will eventually do the upload is set in stone and sends a POST request whose payload is just the bytes of the PDF.
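For reference, here is what the function does with the payload. Mine is written in Python, but the equivalent logic sketched in TypeScript looks like this (bucket and key names are placeholders): when binary handling is working, API Gateway should deliver the body base64-encoded with isBase64Encoded set to true.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const s3 = new S3Client({});

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // With binary media types configured, API Gateway base64-encodes the
  // payload and sets isBase64Encoded; otherwise the body arrives as text
  // (which is where the UTF-8 mangling happens).
  const bytes = event.isBase64Encoded
    ? Buffer.from(event.body ?? "", "base64")
    : Buffer.from(event.body ?? "", "utf8");

  await s3.send(new PutObjectCommand({
    Bucket: "my-upload-bucket", // placeholder
    Key: "document.pdf",        // placeholder
    Body: bytes,
    ContentType: event.headers["content-type"] ?? "application/pdf",
  }));

  return { statusCode: 200, body: "stored" };
};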

How to log calls to deprecated fields in apollo server

We marked some fields in our schema with the @deprecated directive. Now we want to log whether these fields are still in use by some of our clients. What would be the best way to do this, without using Apollo Studio?
If you have access to the client code, then you can utilize GraphQL Inspector to check for deprecated usage. Using the CLI, you just do:
graphql-inspector validate DOCUMENTS SCHEMA
where DOCUMENTS is a glob pattern used to match the files containing the queries and SCHEMA is a pointer to the schema used for validation. The files containing the queries can be .graphql files or .js/.ts files. The schema pointer can be a URL to your schema or one or more .graphql files with your schema's type definitions. See the GraphQL Inspector documentation for additional ways to provide the schema and documents.
If you don't have access to the client code, or specifically need to log deprecated usage on every request, then you can write your own Apollo Server plugin and use GraphQL Inspector's programmatic API instead to validate each request's parsed document as it comes in. The parsed document is available beginning with the validationDidStart lifecycle hook. See the Apollo Server plugin docs for a complete example of how to write your own plugin.
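As a variation that skips the Inspector dependency, you can do the check directly with graphql-js inside the plugin. A minimal sketch (assuming Apollo Server 3; this is not the Inspector-based approach described above) that walks each validated document with a TypeInfo visitor and logs any field carrying a deprecationReason:
import { TypeInfo, visit, visitWithTypeInfo } from "graphql";
import type { ApolloServerPlugin } from "apollo-server-plugin-base";

export const logDeprecatedFieldsPlugin: ApolloServerPlugin = {
  async requestDidStart() {
    return {
      // The parsed document is first available in this hook.
      async validationDidStart({ schema, document, request }) {
        const typeInfo = new TypeInfo(schema);
        visit(
          document,
          visitWithTypeInfo(typeInfo, {
            Field() {
              const field = typeInfo.getFieldDef();
              if (field?.deprecationReason != null) {
                console.warn(
                  `Deprecated field used: ${typeInfo.getParentType()}.${field.name} ` +
                    `(operation: ${request.operationName ?? "anonymous"})`
                );
              }
            },
          })
        );
      },
    };
  },
};
You would register it via the plugins array when constructing the ApolloServer instance.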

Share fragments between client and server

I've got a bunch of GraphQL fragments set up in my React/Apollo app, but I really need to access them on my Node server.
For example, in my client I'm attempting to do this query, to get all relevant Person and Company entities:
query GET_REPORTING_CLIENTS {
  reportingClients {
    people {
      ...PersonFragment
    }
    companies {
      ...CompanyFragment
    }
  }
}
Now, I can't just pass info into the queries on my server, because context.db.query.person obviously won't have a key for 'people'.
Ideally, I'd be able to go:
context.db.query.person({
where: (query details)
}, PersonFragment)
...but that doesn't work because the server doesn't have the fragment. At the moment I'm getting around that by copy-pasting huge blocks of GraphQL from the client to the server, but it's a really poor solution.
Is there an answer, or does everything just need to double up?
I recommend using yarn workspaces, especially if you plan to make a mobile app. You can package chunks of code so they can be shared between the frontend and backend applications.
https://yarnpkg.com/en/docs/workspaces
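To make that concrete, here is a sketch of what the shared workspace package might export (the path and field selections are made up), using graphql-tag so both sides consume the same document:
// packages/shared/fragments.ts (hypothetical location in the workspace)
import gql from "graphql-tag";

export const PersonFragment = gql`
  fragment PersonFragment on Person {
    id
    name
  }
`;

export const CompanyFragment = gql`
  fragment CompanyFragment on Company {
    id
    name
  }
`;
The client imports these into its queries, and the server imports the same objects; if a server API expects a string rather than a parsed document, graphql's print() converts it back.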

Output all language strings in Revel?

I'm developing an API server in Go, and the server (at the moment) handles all translations for clients. When an API client fetches particular data, it also asks for the translations available for the given section.
Ideally I want to have the following folder structure:
/messages
/home.en
/home.fr
/home.sv
/news.en
/news.fr
/news.sv
Where news and home are distinct modules.
Now the question I have for Revel is: is it possible to fetch ALL language strings for a given module and a given locale? For example, pull all home strings for en-US.
EDIT:
I would like the output (something I can return to the client) to be a key:value map of translations.
Any guidance would be appreciated.
It seems to me that Revel uses message-based translation (just like gettext does), so you need the original string to get the translation. These strings are stored in Config objects, which are themselves stored in the messages map of i18n.go, keyed by language.
As you can see, this mapping is not exported, so you can't access it. The best way to fix this is to write a function for what you want (getting the config by supplying a language), or to export one of the existing functions and create a pull request for Revel.
You may work around this by copying the code of loadMessageFile, or by forking your version of Revel and exporting loadMessageFile or parseMessagesFile. This is also a great opportunity to create a pull request.
Note that the localizations are stored in an INI file format parsed by robfig/config, so parsing them manually is also an option (although not recommended).
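Not Go, but to illustrate the shape of the manual-parsing workaround: a sketch in TypeScript that turns a Revel-style messages file (INI format) into the flat key:value map you want to return to clients. A Go version would do the equivalent, e.g. on top of robfig/config.
import { readFileSync } from "fs";

// Parses a Revel-style messages file (INI format) into a key:value map.
// Comment lines (# or ;) and [section] headers are skipped; sections
// could instead be preserved by prefixing the keys.
function loadMessages(path: string): Record<string, string> {
  const messages: Record<string, string> = {};
  for (const rawLine of readFileSync(path, "utf8").split(/\r?\n/)) {
    const line = rawLine.trim();
    if (line === "" || line.startsWith("#") || line.startsWith(";")) continue;
    if (line.startsWith("[")) continue; // section header
    const eq = line.indexOf("=");
    if (eq === -1) continue;
    messages[line.slice(0, eq).trim()] = line.slice(eq + 1).trim();
  }
  return messages;
}

// e.g. loadMessages("messages/home.en") could yield { "home.title": "Welcome", ... }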

ProtocolViolationException Load testing web service (GET action with content-body)

I created an ASP.NET MVC4 Web API service (REST) with a single GET action. The action currently needs 11 input values, so rather than passing all of those values in the URL, I opted to encapsulate them in a single class and pass it as the content body. When I test in Fiddler, I specify the verb as GET and enter the JSON text in the "Request Body" input box. This works great!
The problem is when I attempt to perform load testing in Visual Studio 2010 Ultimate. I am able to specify the GET action and the JSON content body just fine. But when I run the load test, VS reports exceptions of type ProtocolViolationException ("Cannot send a content-body with this verb-type") in the test results. The test executes in 1 ms, so I suspect the exceptions are causing the test to abort immediately. What can I do to avoid those exceptions? I'd prefer not to change my API to use URL arguments just to work around the test tooling. If I should change the API for other reasons, let me know. Thanks!
I found it easier to post this as an answer rather than carry on the discussion in comments.
Sending content with GET is not defined in RFC 2616, yet it is not prohibited either. So as far as the spec is concerned, we are in territory where we have to make our own judgement.
GET is canonically used to get a resource, so you are retrieving that resource using this verb with the parameters you are sending. Since GET is both safe and idempotent, it is ideal for caching. Caching usually takes place based on the resource URI, and sometimes on various headers. The point is that cache implementations, AFAIK, would not use the GET content (and to be honest, I have not seen any GET with content in the real world). And it would not make sense to include the content in cache-key generation, since that would reduce the scalability of the caches.
If you have parameters to send, they should be in the URI, since they are part of what identifies the resource. As such, I strongly believe sending content with GET is wrong.
Even implementations such as OData put the query criteria in the URI. I cannot imagine your (or any) application's requirements being beyond OData's query requirements.
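To make that concrete, a sketch of moving the inputs into the query string (endpoint and parameter names are made up; the original API takes eleven values), which is the shape both caches and the Visual Studio test client expect from a GET:
// All inputs travel in the query string; the GET request has no body,
// so ProtocolViolationException cannot occur.
async function fetchReport(): Promise<unknown> {
  const params = new URLSearchParams({
    customerId: "42",        // hypothetical parameter
    region: "us-west",       // hypothetical parameter
    includeArchived: "false",
    // ...the remaining inputs
  });
  const response = await fetch(`https://api.example.com/reports?${params}`, {
    method: "GET",
  });
  return response.json();
}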
