Authorizations on GraphQL ruby mutations - ruby

Still quite new to GraphQL.
The idea is to 'secure' mutations, meaning restricting them to the current user passed in the context. A basic one:
Create = GraphQL::Relay::Mutation.define do
  name "AddItem"
  input_field :title, !types.String

  return_field :item, Types::ItemType
  return_field :errors, types[types.String]

  resolve ->(object, inputs, ctx) {
    if ctx[:current_user]
      # ... do the stuff ...
    else
      # ... returns an error ...
    end
  }
end
Let's say one has multiple mutations… this very same condition would have to be repeated every time it's needed.
I'm obviously biased by before_action in Rails; is there something similar available in graphql-ruby? (Something like 'protected mutations'; in any case I'm looking to selectively protect specific parts of the available output, in a centralized setup.)
Or should the approach be completely different?

As of the time of this writing, the GraphQL spec does not define anything having to do with authz/authn. Generally speaking, people put their GraphQL layer behind a gateway of some sort and pass the auth token in with the query. How to do this will depend on your implementation. In the JavaScript GraphQL server, there is a "context" that is passed to all resolvers.
In other words, securing queries and mutations at the resolver level is currently the best practice in GraphQL.
Specific to Ruby, however, it does look like there is a paid version of the software that has some nice auth features built in.
http://graphql-ruby.org/pro/authorization

Related

Hot Chocolate GraphQL method versioning

How can I implement Hot Chocolate GraphQL method versioning similar to REST, i.e. /graphql/v1, or using a request header? I don't want to deprecate fields, because there is no change in the input or output parameters, but there is a change in the method implementation (business-wise).
GraphQL has no concept of versioning. What you would tend to do is still deprecate and introduce a new method under a new name.
Hot Chocolate can expose multiple schemas though, so you can host two separate schemas on the same server.
services.AddGraphQLServer("a")
services.AddGraphQLServer("b")
in the MapGraphQL method, you would need to map a specific schema to a specific route.
app.MapGraphQL("/graphql/a", schemaName: "a")
Having multiple versions of the schema will not scale very well, and the more of them you introduce, the harder it will get.

Is it possible to use OAuth2 scopes to scope model data?

Every place I read about OAuth2 scopes uses examples like read, write, delete, post:read, post:delete, etc., always representing "actions", as if the scope were a permission.
I'm in a situation where I must implement an API that authenticates the user but limits their access to data belonging only to the corporations they belong to; a user may belong to N corporations.
I came up with the idea of using OAuth2 scopes for that purpose and then using Laravel's Eloquent global scopes on the model to filter the data.
I'm stuck and don't know how to proceed. Could anyone give some advice?
There are 2 concepts in the requirements you mention:
Scopes are high level privileges that represent an area of data and operations allowed on that data - they are also static values defined as part of the system design. Avoid attempting to use them for dynamic logic.
Claims are where the real authorization happens, and what most domain specific authorization uses. Claims are just extra fields included in JWTs. In your case an array claim of Corporation IDs could be issued and included in JWTs received by APIs.
These two Curity articles explain this in more detail, along with some real world examples. When done well, the result should be simple code in your APIs:
Scope Best Practices
Claims Best Practices
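The question is Laravel-specific, but the claims idea is framework-agnostic. As a rough Python sketch only (the claim name, key handling, and the commented query are assumptions, not an established API):

import jwt  # PyJWT

def allowed_corporation_ids(access_token, public_key):
    # Validate the access token and read a hypothetical array claim
    # listing the corporations this user belongs to.
    claims = jwt.decode(access_token, public_key, algorithms=["RS256"])
    return claims.get("corporation_ids", [])

# In a request handler you would then filter every query by that claim,
# e.g. (pseudo-ORM): records = Record.where(corporation_id__in=allowed_ids)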

How to use a single AWS Lambda for both Alexa Skills Kit and API.AI?

In the past, I set up two separate AWS Lambdas written in Java, one for use with Alexa and one for use with API.ai. They simply return "Hello world" to each assistant API. So although they are simple, they work. As I started writing more and more code for each one, I started to see how similar my Java code was, and that I was just repeating myself by having two separate Lambdas.
Fast forward to today.
What I'm working on now is having a single AWS lambda that can handle input from both Alexa and Api.ai but I'm having some trouble. Currently, my thought is that when the lambda is run, there would be a simple if statement like so:
The following is not real code, just what I think I can do in my head
if (figureOutIfInputType.equals("alexa")) {
    runAlexaCode();
} else if (figureOutIfInputType.equals("api.ai")) {
    runApiAiCode();
}
The thing is, now I need to somehow tell whether the function is being called by Alexa or by API.ai.
This is my actual Java right now:
public class App implements RequestHandler<Object, String> {
    @Override
    public String handleRequest(Object input, Context context) {
        System.out.println("myLog: " + input.toString());
        return "Hello from AWS";
    }
}
I then ran the lambda from Alexa and Api.ai to see what Object input would get generated in java.
API.ai
{id=asdf-6801-4a9b-a7cd-asdffdsa, timestamp=2017-07-28T02:21:15.337Z, lang=en, result={source=agent, resolvedQuery=hi how are you, action=, actionIncomplete=false, parameters={}, contexts=[], metadata={intentId=asdf-3a2a-49b6-8a45-97e97243b1d7, webhookUsed=true, webhookForSlotFillingUsed=false, webhookResponseTime=182, intentName=myIntent}, fulfillment={messages=[{type=0, speech=I have failed}]}, score=1}, status={code=200, errorType=success}, sessionId=asdf-a7ac-43c8-8ae8-bc1bf5ecaad0}
Alexa
{version=1.0, session={new=true, sessionId=amzn1.echo-api.session.asdf-7e03-4c35-9d98-d416eefc5b23, application={applicationId=amzn1.ask.skill.asdf-a02e-4938-a747-109ea09539aa}, user={userId=amzn1.ask.account.asdf}}, context={AudioPlayer={playerActivity=IDLE}, System={application={applicationId=amzn1.ask.skill.07c854eb-a02e-4938-a747-109ea09539aa}, user={userId=amzn1.ask.account.asdf}, device={deviceId=amzn1.ask.device.asdf, supportedInterfaces={AudioPlayer={}}}, apiEndpoint=https://api.amazonalexa.com}}, request={type=IntentRequest, requestId=amzn1.echo-api.request.asdf-5de5-4930-8f04-9acf2130e6b8, timestamp=2017-07-28T05:07:30Z, locale=en-US, intent={name=HelloWorldIntent, confirmationStatus=NONE}}}
So now I have both my Alexa and API.ai output, and they're different. So that's good; I'll be able to tell which one is which. But I'm stuck. I'm not really sure if I should try to create an AlexaInput object and an ApiAIinput object.
Am I doing this all wrong? Am I wrong with trying to have one lambda fulfill my "assistant" requests from more than one service (Alexa and ApiAI)?
Any help would be appreciated. Surely, someone else must be writing their assistant functionality in AWS and wants to reuse their code for both "assistant" platforms.
I had the same question and same thought, but as I got further and further in implementing, I realized that it wasn't quite practical for one big reason:
While a lot of my logic needed to be the same - the format of the results was different. Sometimes, even the details or formatting of the results would be different.
What I did was go back to some concepts that were familiar in web programming by dividing it into two parts:
A back-end system that was responsible for taking parameters and applying the business logic to produce results. These results would be fairly low-level, not entire phrases, but more a set of keys/value pairs that indicated what kind of result to give and what values would be needed in that result.
A front-end system that was responsible for handling things that were Alexa/Assistant specific. So it would take the request, extract parameters and state, call the back-end system with this information, get a result back which included what kind of reply to send and the values needed, and then format the exact phrase (and any other supporting info, such as a card or whatever) and put it into a properly formatted response.
The front-end components would be a different lambda function for each agent type, mostly to make the logic a little cleaner. The back-end components can either be a library function or another lambda function, whatever makes the most sense for the task, but is independent of the front-end implementation.
I suppose one could also do this by having an abstract parent class that implements the back-end logic, and having the front-end logic be subclasses of this. I wouldn't do it this way because it doesn't provide as clear an interface boundary between the two, but it's not unreasonable.
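To make that division concrete, a small hypothetical sketch (the weather example and all function names are made up, and it is Python rather than Java):

# Back end: business logic only, returning platform-neutral key/value results.
def get_weather(city):
    return {"kind": "weather_report", "city": city, "temp_c": 21}

# Front ends (one per assistant): turn that result into a platform-specific response.
def format_for_alexa(result):
    speech = "It is {temp_c} degrees in {city}.".format(**result)
    return {
        "version": "1.0",
        "response": {"outputSpeech": {"type": "PlainText", "text": speech}},
    }

def format_for_api_ai(result):
    speech = "It is {temp_c} degrees in {city}.".format(**result)
    return {"speech": speech, "displayText": speech}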
You can achieve the result (code reuse) a different way.
Firstly, create a method for each type of event (Alexa, API Gateway, etc) using the aws-lambda-java-events library. Some information here:
http://docs.aws.amazon.com/lambda/latest/dg/java-programming-model-handler-types.html
Each entry point method should deal with the semantics of the event triggering it (API Gateway) and call into common code to give you code reuse.
Secondly, upload your JAR/ZIP to an S3 bucket.
Thirdly, for each event you want to handle - create a Lambda function, referencing the same ZIP/JAR in the S3 bucket and specifying the relevant entry point.
This way, you'll get code reuse without having to juggle multiple copies of the code on AWS, albeit at the cost of having multiple Lambdas defined.
There's a great tool that supports working this way called Serverless Framework which I'd highly recommend looking at:
https://serverless.com/framework/docs/providers/aws/
I've been using a single Lambda to handle Alexa ASK and Microsoft Luis.ai responses. I'm using Python instead of Java, but the idea is the same, and I believe that using an AlexaInput and an ApiAIinput object, both extending the same interface, should be the way to go.
I first use the context information to identify where the request is coming from and parse it into the appropriate object (I use a simple nested dictionary). Then I pass this to my main processing function and, finally, pass the output to a formatter, again based on the context. The formatter will be aware of what you need to return. The only caveat is handling session information, which in my case I serialize to my own DynamoDB table anyway.
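For illustration, a minimal Python sketch of that first dispatch step (the handler names are hypothetical; the key checks come from the two payloads printed in the question):

def handle_alexa(event):
    ...  # Alexa-specific parsing and response formatting would go here

def handle_api_ai(event):
    ...  # API.AI-specific parsing and response formatting would go here

def lambda_handler(event, context):
    # An Alexa request has top-level "session" and "request" keys,
    # while an API.AI (Dialogflow v1) request has a top-level "result" key.
    if "session" in event and "request" in event:
        return handle_alexa(event)
    if "result" in event:
        return handle_api_ai(event)
    raise ValueError("Unrecognized assistant request")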

Can GraphQL Queries be named, kind of like stored procedures, and reused?

I'm building a Graphene-Django based GraphQL API. One of my colleagues, who is building an Angular client that will use the API, has asked if there's a way to store frequently used queries somehow on the server-side so that he can just call them by name?
I have not yet encountered such functionality so am not sure if it's even possible.
FYI he is using the Apollo Client so maybe such "named" queries is strictly client-side? Here's a page he referred me to: http://dev.apollodata.com/angular2/cache-updates.html
Robert
Excellent question! I think the thing you are looking for is called "persisted queries." The GraphQL spec only outlines
A Type System for a schema
A formal language for queries
How to validate/execute a query against a schema
Beyond that, it is up to the implementation to make specific optimizations. There are a few ways to do persisted queries, and different ones may be more or less helpful for your project.
Storing Queries as a String
Queries can easily be stored as Strings, and the convention is to use *.gql files to do that. Many editors/IDEs will even have syntax highlighting for this. To consume them later, just URL Encode them, and you're all set! Since these strings are "known" you can whitelist the requests on the server if you choose.
const myQuery = `
  {
    user {
      firstName
      lastName
    }
  }
`

const query = `www.myserver.com?query=${encodeURIComponent(myQuery)}`
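Since the question mentions Graphene on the server, one low-tech way to get server-side "named" queries is to keep a registry of approved query strings and execute whichever one the client names. A minimal Python sketch (the registry, the query name, and the tiny schema are made up for illustration; you would expose the lookup through your Django view):

import graphene

# Hypothetical registry of server-side "named" queries.
STORED_QUERIES = {
    "sayHello": "{ hello }",
}

class Query(graphene.ObjectType):
    hello = graphene.String()

    def resolve_hello(root, info):
        return "world"

schema = graphene.Schema(query=Query)

def run_named_query(name):
    # Look up the stored query text by name and execute it against the schema.
    return schema.execute(STORED_QUERIES[name])

print(run_named_query("sayHello").data)  # {'hello': 'world'}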
Persisted Queries
For a more sophisticated approach, you can take queries that are extracted from your project (either from strings or using a build tool), register them ahead of time, and store them in a DB keyed by an identifier so that clients only send that identifier at request time. This is what Facebook does. There are plenty of tools out there to help you with this, and the Awesome-GraphQL repo is a good place to start looking.
Resources
Check out this blog for more info on Persisted Queries

GraphQL: Utilizing introspection functionality for data mutation [closed]

From my understanding, GraphQL is a great query language for fetching data. However, data mutation, even when using a GraphQL client framework such as Relay, does not seem to be client-side developer friendly. Reason being, they need to know the logic behind the mutation and use it inside the client code.
Would it be better if GraphQL could expose some information to Relay via the introspection functionality? No other frameworks seem to be doing this already. Also, what would be some of the technical challenges involved in building a GraphQL client this way?
GraphQL has chosen to implement mutations in a purely RPC-style model. That is, mutations don't include any metadata about what kinds of changes they are likely to make to the backend. As a contrast, we can look at something like REST, where verbs like POST and PATCH indicate the client's intention about what should happen on the backend.
There are pros and cons to this. On the one hand, it's more convenient to write client code if your framework can learn to incorporate changes automatically; however, I would claim this is not possible in all but the most principled of REST APIs. On the other hand, the RPC model has a huge advantage in that the server is not limited in the kinds of operations it can perform. Rather than needing to describe modifications in terms of updates to particular objects, you can simply define any semantic operation you like, as long as you can write the server code.
Is this consistent with the rest of GraphQL?
I believe that the current implementation of mutations is consistent with the data fetching part of GraphQL's design, which has a similar concept: Any field on any object could be computed from the others, meaning that there is no stable concept of an "object" in the output of a query. So in order to have mutations which automatically update the results from a query, you would need to take into account computed fields, arguments, aggregates, etc. GraphQL as currently specified seems to explicitly make the trade off that it's fine for the information transfer from the server to be lossy, in order to enable complete flexibility in the implementation of server-side fields.
Are there some mutations that can be incorporated automatically?
Yes. In particular, if your mutation return values incorporate the same object types as your queries, a smart GraphQL client such as Apollo Client will merge those results into the cache without any extra work. By using fragments and picking convenient return types for mutations, you can get by with this approach for most or all mutations:
fragment PostDetails on Post {
  id
  score
  title
}

query PostWithDetails {
  post(id: 5) {
    ...PostDetails
  }
}

mutation UpdatePostTitle {
  updatePostTitle(id: 5, newTitle: "Great new title") {
    ...PostDetails
  }
}
The place where things get tricky is mutations that insert and delete objects, since it's not immediately clear what the client should do with those mutation results.
Can this be improved on with introspection or otherwise?
I think it would be very advantageous to have a restricted model for mutations that works more automatically, if the ability to upgrade to a more flexible approach is preserved.
One particular example would be to have a semantic way to declare "delete" mutations:
type Mutation {
  deletePost(id: ID!): DeletePostResult @deletes
}
If a client can read the directives on these mutation fields via introspection, then it could identify the @deletes directive and guess that the id field represents an object that was deleted and should be purged from the cache.
I'm one of the core contributors to Apollo, and I think it would be quite easy to experiment with features like this in companion packages. We had some inklings of this in core as well, and intentionally designed the store format to make things like this possible.
TL;DR
The current approach makes GraphQL super flexible and is consistent with the rest of the design, but it would be interesting to add conventions to make some mutations automatic.
