Transcoding from HTTP to gRPC (protocol-buffers)

rpc CreateBook(CreateBookRequest) returns (Book) {
  option (google.api.http) = {
    post: "/v1/{parent=publishers/*}/books"
    body: "book"
  };
}
message CreateBookRequest {
  // The publisher who will publish this book.
  // When using HTTP/JSON, this field is automatically populated based
  // on the URI, because of the `{parent=publishers/*}` syntax.
  string parent = 1 [
    (google.api.field_behavior) = REQUIRED,
    (google.api.resource_reference) = {
      child_type: "library.googleapis.com/Book"
    }];
  Book book = 2 [(google.api.field_behavior) = REQUIRED];
  string book_id = 3;
}
I don't understand post: "/v1/{parent=publishers/*}/books".
I thought publishers was a field in CreateBookRequest whose value gets populated into the HTTP path, so the result would be something like:
post: "/v1/parent=publishers_field_value/books"
But publishers is not a field in CreateBookRequest.

No, publishers is part of the expected value of the parent field. So suppose you have a protobuf request like this:
{
  "parent": "publishers/pub1",
  "book_id": "xyz",
  "book": {
    "author": "Stacy"
  }
}
That can be transcoded by a client into an HTTP request with:
Method: POST
URI: /v1/publishers/pub1/books?bookId=xyz (with the appropriate host name)
Body:
{
  "author": "Stacy"
}
If you try to specify a request with a parent that doesn't match publishers/*, I'd expect transcoding to fail.
That's in terms of transcoding from protobuf to HTTP, in the request. (That's the direction I'm most familiar with, having been coding it in C# just this week...)
In the server, it should just be the opposite - so given the HTTP request above, the server should come up with the original protobuf request including parent="publishers/pub1".
For a lot more information on all of this, see the proto defining HttpRule.
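The path-template binding described above can be sketched in code. The regular expression below is an illustrative stand-in for the real HttpRule template matcher (not the actual grpc-gateway implementation), showing how a transcoding server recovers the parent field from the URI:

```go
package main

import (
	"fmt"
	"regexp"
)

// pathTemplate approximates the `{parent=publishers/*}` binding:
// the capture group matches the parent resource name embedded in the URI.
var pathTemplate = regexp.MustCompile(`^/v1/(publishers/[^/]+)/books$`)

// parentFromPath extracts the parent field value from an HTTP path,
// mirroring what a transcoding server does when it rebuilds the
// CreateBookRequest from the request URI.
func parentFromPath(path string) (string, bool) {
	m := pathTemplate.FindStringSubmatch(path)
	if m == nil {
		return "", false // path doesn't match the template: transcoding fails
	}
	return m[1], true
}

func main() {
	parent, ok := parentFromPath("/v1/publishers/pub1/books")
	fmt.Println(parent, ok) // publishers/pub1 true

	_, ok = parentFromPath("/v1/authors/a1/books")
	fmt.Println(ok) // false
}
```

A path like /v1/authors/a1/books fails the match, which is consistent with the expectation above that a parent not matching publishers/* makes transcoding fail.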

Related

What is the most elegant way in Golang to create JSON-RPC struct with params

I'm creating a Golang app to communicate with a proprietary third-party JSON-RPC API that is sensitive to the order of elements.
Requests should be sent using POST over HTTPS. Here is a request sample:
{
  "jsonrpc": "2.0",
  "method": "ab_123",
  "params": [
    "ee0fe150-6648-11ed-bd23-c1392b9c96ae",
    "test#example.com",
    "NONE"
  ]
}
The API docs state that the first param is id, the second param is email, and the third param is name. There may be dozens of params, some of them optional (they can have the value NONE).
I'm looking for the most elegant way to create the request struct so that it stays readable and easily maintainable.
Here is what I have so far:
type APIRequest struct {
    Jsonrpc string   `json:"jsonrpc"`
    Method  string   `json:"method"`
    Params  []string `json:"params"`
}

var request = APIRequest{
    Jsonrpc: "2.0",
    Method:  "ab_123",
    Params: []string{
        "ee0fe150-6648-11ed-bd23-c1392b9c96ae",
        "test#example.com",
        "NONE",
    },
}

grpc error: packets.SubmitCustomerRequest.details: object expected

I just created a proto file like this
service Customer {
  rpc SubmitCustomer(SubmitCustomerRequest) returns (SubmitCustomerResponse) {}
}
message SubmitCustomerRequest {
  string name = 1;
  map<string, google.protobuf.Any> details = 2;
}
message SubmitCustomerResponse {
  int64 id = 1;
}
The code itself works when I call it from the client, but I'm having trouble testing it directly from BloomRPC or Postman.
{
  "name": "great name",
  "details": {
    "some_details": "detail value",
    "some_int": 123
  }
}
It throws this error when I try to hit it:
.packets.SubmitCustomerRequest.details: object expected
I'm aware the problem is the format of details when I hit it from Postman, but I'm not sure what the correct format is supposed to be. I've tried every other format I could think of, and none of them work either.
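For context on the error: in the protobuf JSON mapping, a google.protobuf.Any value must be an object carrying an "@type" field with the fully qualified type URL, so a bare string or number cannot be unmarshaled into an Any. Using the standard well-known wrapper types, the payload would look something like the sketch below (whether BloomRPC or Postman can resolve these type URLs depends on the tool):

```json
{
  "name": "great name",
  "details": {
    "some_details": {
      "@type": "type.googleapis.com/google.protobuf.StringValue",
      "value": "detail value"
    },
    "some_int": {
      "@type": "type.googleapis.com/google.protobuf.Int32Value",
      "value": 123
    }
  }
}
```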

Resolve to the same object from two incoherent sources in graphql

I have a problem I don't know how to solve properly.
I'm working on a project where we use a GraphQL server to communicate with different APIs. These APIs are old and very difficult to update, so we decided to use GraphQL to simplify our communications.
For now, two APIs allow me to get user data. I know it's not coherent, but sadly I can't change that, and I need to use both of them for different actions. So for the sake of simplicity, I would like to abstract this from my front app, so it only asks for user data, always in the same format, no matter which API the data comes from.
With only one API, the resolver system of GraphQL helped a lot. But now that I access user data from a second API, I find it very difficult to always send back the same object to my front page. The two APIs, even though they hold mostly the same data, have different response formats. So in my resolvers, depending on where the data is coming from, I have to do one thing or another.
Example :
API A:
type User {
  id: string,
  communication: Communication
}
type Communication {
  mail: string,
}

API B:
type User {
  id: string,
  mail: string,
}
I've heard a bit about apollo-federation, but I can't put a GraphQL server in front of every API in our system, so I'm kind of lost on how to achieve transparency for my front app when data comes from two different sources.
If anyone has already encountered the same problem or has advice on something I can do, I'm all ears :)
You need to decide what "shape" of the User type makes sense for your client app, regardless of what's being returned by the REST APIs. For this example, let's say we go with:
type User {
  id: String
  mail: String
}
Additionally, for the sake of this example, let's assume we have a getUser field that returns a single user. Any arguments are irrelevant to the scenario, so I'm omitting them here.
type Query {
  getUser: User
}
Assuming I don't know which API to query for the user, our resolver for getUser might look something like this:
async () => {
  const [userFromA, userFromB] = await Promise.all([
    fetchUserFromA(),
    fetchUserFromB(),
  ])

  // transform response
  if (userFromA) {
    const { id, communication: { mail } } = userFromA
    return {
      id,
      mail,
    }
  }

  // response from B is already in the correct "shape", so just return it
  if (userFromB) {
    return userFromB
  }
}
Alternatively, we can utilize individual field resolvers to achieve the same effect. For example:
const resolvers = {
  Query: {
    getUser: async () => {
      const [userFromA, userFromB] = await Promise.all([
        fetchUserFromA(),
        fetchUserFromB(),
      ])
      return userFromA || userFromB
    },
  },
  User: {
    mail: (user) => {
      if (user.communication) {
        return user.communication.mail
      }
      return user.mail
    },
  },
}
Note that you don't have to match your schema to either response from your existing REST endpoints. For example, maybe you'd like to return a User like this:
type User {
  id: String
  details: UserDetails
}
type UserDetails {
  email: String
}
In this case, you'd just transform the response from either API to fit your schema.

How to test and automate APIs implemented in GraphQL

In our company, we are creating an application that implements GraphQL.
I want to test and automate these APIs for CI/CD.
I have tried REST Assured, but since GraphQL queries are different from plain JSON,
REST Assured doesn't have proper support for GraphQL queries, as discussed here:
How can we send a GraphQL query using REST-assured?
Please suggest the best approach to testing and automating GraphQL APIs,
and tools which can be used for testing and automation.
So I had the same issue, and I was able to make it work in a very simple way.
I'd been struggling for a while trying to make this GraphQL request with REST Assured in order to validate the response (amazing how scarce the info about this is). Since yesterday I've been able to make it work, so I thought sharing here might help someone else.
What was wrong? Purely copying and pasting my GraphQL request (which is not JSON format) into the request was not working. I kept getting the error "Unexpected token t in JSON at position". So I thought it was because GraphQL is not JSON, or some validation in REST Assured. I tried converting the request to JSON, importing libraries, and lots of other things, but none of them worked.
My GraphQL query request:
String reqString = "{ trade { orders { ticker } }}\n";
How did I fix it? By using Postman to format my request. Yes, I just pasted it into the QUERY window of Postman and then clicked the code button on the right side (fig. 1). That let me see my request in a different format, one that works in REST Assured (fig. 2). PS: Just remember to configure Postman, which I've pointed out with red arrows.
My GraphQL query request, FORMATTED:
String reqString = "{\"query\":\"{ trade { orders { ticker } }}\\r\\n\",\"variables\":{}}";
(Fig. 1 and Fig. 2: Postman screenshots, not reproduced here.)
Hope it helps you out. Take care!
You can test it with apitest
{
  vars: { #describe("share variables") #client("echo")
    req: {
      v1: 10,
    }
  },
  test1: { #describe("test graphql")
    req: {
      url: "https://api.spacex.land/graphql/",
      body: {
        query: `\`query {
          launchesPast(limit: ${vars.req.v1}) {
            mission_name
            launch_date_local
            launch_site {
              site_name_long
            }
          }
        }\`` #eval
      }
    },
    res: {
      body: {
        data: {
          launchesPast: [ #partial
            {
              "mission_name": "", #type
              "launch_date_local": "", #type
              "launch_site": {
                "site_name_long": "", #type
              }
            }
          ]
        }
      }
    }
  }
}
Apitest is a declarative API testing tool with a JSON-like DSL.
See https://github.com/sigoden/apitest

AWS AppSync resolver cache layer?

What is a common pattern to have AWS AppSync resolvers cache their output?
I'm writing an API that fronts data that will not change at all over time. The API returns the contents of books (title, author, chapters, etc.).
My initial idea was to have the resolver request some JSON payload from CloudFront. If the requested document is not in CloudFront, CloudFront would trigger a Lambda function, which would know how to fetch the JSON document (from a database), then put the payload in CloudFront. This seems weird conceptually, but it would solve the caching problem.
Example
const query = `{
  bookById(bookID: "468c95") {
    bookID
    title
    author
    chapters {
      title
      text
    }
  }
}`;

const book = query(query);
// book => {
//   bookId: "468c95",
//   title: "AppSync for Normal People",
//   author: null,
//   chapters: [
//     {
//       title: "Chapter 1: Dawn of Men",
//       text: [
//         "It was the best of times, it was the worst of times.",
//         "..."
//       ]
//     },
//     { ... }
//   ]
// }
In other words, calling the fictitious query method will trigger some resolver in AppSync, and that resolver will absolutely always return the same data. So why not have the data the resolver works with (you can view that as the input to the resolver) be cached in CloudFront, so it can be served from memory instead of hitting some backend storage (like a database) or triggering a Lambda?
