Accessing the request object with express-graphql - graphql

According to
http://graphql.org/graphql-js/authentication-and-express-middleware/
To use middleware with a GraphQL resolver, just use the middleware like you would with a normal Express app. The request object is then available as the second argument in any resolver.
However, when I run my resolver
module.exports = {
  Query: {
    books(root, args, context) {
      return books;
    }
  }
};
the second argument is my query arguments. The third argument, however, is indeed my request object, unless I override the context config property passed to expressGraphql.
My full config is
app.use(
  "/graphql",
  expressGraphql({
    schema,
    graphiql: true
  })
);
Are the docs wrong, or am I doing something incorrectly?

Neither :)
Even when building your schema with plain ole' GraphQL.js (as opposed to using graphql-tools or another library), there are a couple of different ways to pass in your resolvers.
You can generate it by creating a GraphQLSchema object, for example, in which case your resolvers are included within the GraphQLObjectTypes you add to it. Here is a repo that does just that.
Alternatively, you can declare a schema from a string with buildSchema. You can see that being done in the docs. If you go this route, the only way to pass in your resolvers is through the rootValue object passed into your endpoint configuration. Normally, the parameters passed to your resolvers are
root
args
context
info
However, when you go the above route, you lose the first parameter (root), so your resolvers receive (args, context, info)... since I imagine we can't pass root to itself.
It's a good illustration of why, if you're going to generate a schema declaratively, graphql-tools is the way to go.
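To make the difference concrete, here is a minimal sketch in plain JavaScript (not using the real graphql package; the book data and helper names are made up) of the two calling conventions:

```javascript
// Resolvers attached to GraphQLObjectType fields receive
// (root, args, context, info); functions on the rootValue object
// used with buildSchema receive only (args, context, info).
function callFieldResolver(resolver, root, args, context, info) {
  return resolver(root, args, context, info);
}

function callRootValueResolver(resolver, args, context, info) {
  // buildSchema + rootValue: no root parameter is passed in.
  return resolver(args, context, info);
}

// A resolver written in the rootValue style: args comes first,
// context (by default, the request object) second.
const books = [{ title: "A" }, { title: "B" }];
const rootValue = {
  books: (args, context) =>
    books.filter(b => !args.title || b.title === args.title)
};

console.log(callRootValueResolver(rootValue.books, { title: "A" }, { req: {} }));
// → [ { title: 'A' } ]
```

With graphql-tools (or a hand-built GraphQLSchema), the same resolver would instead be written as books(root, args, context), which matches the argument order the question expected.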

Related

Will graphql run resolver with additional arguments

If the GraphQL query schema looks like this:
user(user_id: Int): User
Will Apollo run the resolver if the query includes an additional argument (email) that is not defined in the schema?
I want to iterate over the arguments in the resolver, but I'm not sure whether extra arguments can get through.
P.S. If there is documentation on how Apollo parses arguments, I'd appreciate a link.
In GraphQL, a schema defines what fields are available to the client, including what arguments are available for that field and what the types of those arguments are. Any query submitted to a GraphQL service will first be validated before it's executed. If the query includes any extraneous arguments, it will fail validation and won't be executed. This is explained here in the spec:
Formal Specification
For each argument in the document
Let argumentName be the Name of argument.
Let argumentDefinition be the argument definition provided by the parent field or definition named argumentName.
argumentDefinition must exist.
Explanatory Text
Every argument provided to a field or directive must be defined in the set of possible arguments of that field or directive.
For a better idea of how GraphQL works, I would suggest taking at least a cursory look through the spec.
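As a toy illustration of the rule (this is not the real graphql-js validator; the schemaArgs map and function names are invented):

```javascript
// Hypothetical validation pass: reject any query argument that the
// schema does not define for the field, before any resolver runs.
const schemaArgs = { user: ["user_id"] }; // field -> allowed argument names

function validateArgs(field, queryArgs) {
  const allowed = schemaArgs[field] || [];
  const extraneous = Object.keys(queryArgs).filter(a => !allowed.includes(a));
  if (extraneous.length > 0) {
    // Mirrors graphql-js: validation fails and the resolver is never invoked.
    throw new Error(`Unknown argument "${extraneous[0]}" on field "${field}".`);
  }
  return true;
}

validateArgs("user", { user_id: 1 }); // passes
try {
  validateArgs("user", { user_id: 1, email: "a@b.com" });
} catch (e) {
  console.log(e.message); // the query never reaches the resolver
}
```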

Mocking specific invocation of Lambda Invoke especially when chaining invocations

I've been using the aws-sdk-mock library for Node / Jasmine.
This library lets you mock service method invocations. However, that becomes a problem when you need to mock a method that is called more than once with different parameters (thus invoking a different Lambda).
AWS.mock('Lambda', 'invoke', function (params, callback) {
  callback(null, {});
});
This will mock every call to invoke, which really isn't flexible; what would be useful is a way to check whether the params passed in contain a specific value.
I'm not tied to the AWS.mock framework, so if anyone has any pointers on how to handle this, that would be great. See the invocation flow below.
Custom function (called from test) -> custom function (calling the invoke)
I found the solution to be checking the parameters of the Lambda being mocked. For example, if you have one Lambda named lambdaOne and another named lambdaTwo, your mock would look like this:
AWS.mock('Lambda', 'invoke', function (params, callback) {
  if (params.FunctionName === 'lambdaOne') {
    callback(null, lambdaOneResponse);
  } else if (params.FunctionName === 'lambdaTwo') {
    callback(null, lambdaTwoResponse);
  }
});
I hope this helps!
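For completeness, the same dispatch-on-params idea works without aws-sdk-mock at all; here is a self-contained stand-in (the response payloads are invented):

```javascript
// Hand-rolled invoke stub that dispatches on params.FunctionName,
// mirroring the mock in the answer above.
const responses = {
  lambdaOne: { StatusCode: 200, Payload: '"one"' },
  lambdaTwo: { StatusCode: 200, Payload: '"two"' }
};

function fakeInvoke(params, callback) {
  const response = responses[params.FunctionName];
  if (response) {
    callback(null, response);
  } else {
    callback(new Error(`No mock for ${params.FunctionName}`), null);
  }
}

fakeInvoke({ FunctionName: "lambdaOne" }, (err, data) => {
  console.log(data.Payload); // the payload registered for lambdaOne
});
```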

Avoid call to a nested resolver if all the response fields have been already fulfilled

I have a schema that receives a timeInterval input on the root query and then passes it down to nested levels/resolvers. I'm trying to validate that input at the root level, so that if validation fails I return an error and null for the other fields. The issue is that if I do this at the root level, I don't know how to stop graphql-tools from calling the nested resolvers (which fail because they don't have the timeInterval variable defined in the obj of each resolver).
Let me know if you need a schema example or more details, thanks!
This is built in to how GraphQL.js, the reference implementation from Facebook, works. GraphQL-Tools is just a library on top that makes writing resolvers and schemas a bit nicer.
In GraphQL.js, the resolvers for child fields are skipped whenever the parent resolver returns null or undefined, or throws an error.
So it sounds like in your case you are returning some data with a single field missing, in which case GraphQL.js has no way of knowing it should avoid calling the nested resolvers. Having more detail about your schema and resolvers would definitely help me come up with a specific solution.
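To sketch the short-circuiting behavior (this is a toy model of execution, not real graphql-js; the resolver names are invented):

```javascript
// A child resolver only runs when the parent resolver returned
// something other than null/undefined and did not throw.
function execute(parentFn, childFn, args) {
  let parent;
  try {
    parent = parentFn(args);
  } catch (e) {
    return { data: null, errors: [e.message] };
  }
  if (parent == null) return { data: null, errors: [] };
  return { data: { child: childFn(parent) }, errors: [] };
}

// Validate timeInterval at the root; throwing skips the child entirely.
const parentFn = ({ timeInterval }) => {
  if (!timeInterval) throw new Error("timeInterval is required");
  return { timeInterval };
};
const childFn = p => p.timeInterval * 2;

console.log(execute(parentFn, childFn, { timeInterval: 5 })); // child runs
console.log(execute(parentFn, childFn, {})); // child never runs
```

So returning null (or throwing) from the root resolver is the lever for skipping nested resolvers; returning a partially filled object is not.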

Golang: struct inside an interface?

So I have an interface, called UserService inside a package service
I have two simple structs representing the body and response of an HTTP call, and another struct implementing the UserService interface.
I want to put these structs, call them UserRequest and UserResponse, inside the interface so other services can use them to make the HTTP call. Furthermore, the request and response types should be exported (struct UserResponse, not struct userResponse) so other parts of the code can use them.
I define a function in the interface called GetUser(request UserRequest) UserResponse
However, whenever I reference UserRequest I have to use service.UserRequest rather than service.UserService.UserRequest. This is bad because I don't want user-related types to go into the service namespace; I want each service's data organized under its own interface, file, etc. Unfortunately, I get an error if I put UserResponse inside the UserService interface, so I put it at the same level as UserService, which is why it shows up as service.UserResponse. How do I access UserResponse as service.UserService.UserResponse?
Here is a suggestion to organize your code in more "idiomatic" Go way:
package user

type Request struct {
    ...
}

type Response struct {
    ...
}

type Service interface {
    GetUser(r Request) Response
}
Outside of the user package, the code will look like:
s := user.NewService()
var req user.Request
var resp user.Response
resp = s.GetUser(req)
As you can see the code uses much shorter names and still remains very readable.
A package name like service suggests that you are organizing the code in your app by layers instead of by features, which I wouldn't recommend. Here is an interesting article about it: http://www.javapractices.com/topic/TopicAction.do?Id=205. It uses Java, but the principle applies to any programming language.
Go is not Java; Go is even not C++. You should think of Go as "C with interfaces".
In particular, you cannot do what you want: struct definitions cannot be nested inside interface definitions. Note that you can nest structs inside structs, but you cannot instantiate a nested struct directly.
The closest you can do is to move each logical group of objects to its own package.

Model Binder of Json.Net not being used when i post an object

To clarify...
I configure my WebApiConfig like so:
config.Formatters.JsonFormatter.SerializerSettings.Binder = new TypeNameSerializationBinder("namespace.{0}, assembly");
config.Formatters.JsonFormatter.SerializerSettings.TypeNameHandling = TypeNameHandling.Auto;
This should allow me to bind derived classes to base class.
The binder does work when Web API serializes objects to JSON and sends them to the client, but when I post them back to the server, the binder isn't used (BindToType never gets called) and my objects get bound to the base class.
When I serialize/deserialize objects manually with these settings, it all works fine.
Any ideas?
I had the same problem when trying to deserialize complex objects with a custom JsonConverter. I needed this because I'm using DbGeometry for storing users' locations.
I banged my head against this for a couple of days; I really thought I was doing something wrong, because every time I posted a geometry to the Web API, the complex-type parameter was set to null, even though the JsonConverter was perfectly able to convert the JSON to a populated object.
My workaround is written below. I don't like that I can't just use the parameter as I'm supposed to, but it works, at last.
[HttpPost]
public MyComplexType SaveMyComplexType()
{
    var json = Request.Content.ReadAsStringAsync().Result;
    var myComplexType = JsonConvert.DeserializeObject<MyComplexType>(json);
    // TODO: validation and save to db
    return myComplexType;
}
After some research, I found that this is a bug in ASP.NET Web API: when the URL-encoded parameters are parsed, it just creates a new JsonSerializer without passing the global settings.
I filed it here
http://aspnetwebstack.codeplex.com/workitem/609
