I have a mutation in my Graph that uses two datasources. We are migrating one of the datasources out to a federated service. That datasource in the federated service uses a portion of the input from the mutation being called. The originating service also uses a portion of the input. For example:
mutation($verifyUserInput: VerifyUserInput!) {
  verifyUser(input: $verifyUserInput) {
    user {
      specialId
    }
    otherServiceField
  }
}
I need to pass part of the VerifyUserInput to the other service, and that part is not the PK for the entity. I can't find the input in the __resolveReference resolver's reference argument (which makes sense, since the reference only carries the __typename and the PK), nor in the context or info args. Is the original input to the mutation available in the federated service? If so, how can I retrieve it?
I ended up sniffing for the exact input I needed on every request in the gateway and adding that value to a header so it could propagate across the services. This definitely feels like a hack. I assume there is a more correct way to do this in Apollo Federation.
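For what it's worth, a minimal sketch of that gateway-level header propagation with @apollo/gateway might look like the following; the context key (verifyUserExtra), the header name, and the input field being sniffed are hypothetical, and the subgraph would read the header in its own context function before using it in __resolveReference:

import { ApolloServer } from 'apollo-server';
import { ApolloGateway, RemoteGraphQLDataSource } from '@apollo/gateway';

// Forwards a value captured in the gateway's context to every subgraph request.
class HeaderForwardingDataSource extends RemoteGraphQLDataSource {
  willSendRequest({ request, context }: any) {
    if (context.verifyUserExtra) {
      // Hypothetical header name; the downstream service reads it from its request headers.
      request.http?.headers.set('x-verify-user-extra', context.verifyUserExtra);
    }
  }
}

const gateway = new ApolloGateway({
  serviceList: [
    { name: 'users', url: 'http://localhost:4001/graphql' },        // placeholder URLs
    { name: 'verification', url: 'http://localhost:4002/graphql' },
  ],
  buildService({ url }) {
    return new HeaderForwardingDataSource({ url });
  },
});

const server = new ApolloServer({
  gateway,
  subscriptions: false, // required when running the gateway on apollo-server 2
  context: ({ req }) => ({
    // "Sniff" the piece of VerifyUserInput the downstream service needs
    // out of the incoming variables (the field name is hypothetical).
    verifyUserExtra: req.body?.variables?.verifyUserInput?.someField,
  }),
});

server.listen();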
I need to access the complete GraphQLSchema object outside of GraphQL request handling. When I used graphql-java directly I was in full control and could do this easily. Right now I need to accomplish the same with Netflix DGS and can't find a way to do so (and to keep up with runtime schema changes/reloading later).
For more context, I need to do a few things with this: one is to create a complete, downloadable SDL version of the schema (i.e. not the same as the federation _service { sdl } result), and another is to gather and expose some directive-driven metadata differently, since I can't introspect it from the client...
You can use DgsDataFetchingEnvironment:
@DgsQuery
public Student getStudent(DgsDataFetchingEnvironment dfe) {
    GraphQLSchema schema = dfe.getGraphQLSchema();
    // if you want to know which fields are selected in the query
    DataFetchingFieldSelectionSet fields = dfe.getSelectionSet();
    // ... load and return the Student here
}
I'm new to Spring Boot and I just started using graphql-spqr for Spring Boot since it allows for easy bootstrapping of Java projects.
However, as per my understanding, GraphQL basically allows fetching only selected fields from the database. From the examples I've seen, this kind of selection in the graphql-spqr library happens on the client side. Is there a way to do the selection both client-side and server-side so as to speed up the queries?
I've looked into EntityGraph examples for GraphQL but they are mostly implemented for complex queries that involve JOINs. However, nothing exists for simple queries like findAll(), findById() etc.
I would like to use findAll() with the server fetching only the fields as requested by the client. How can I do that?
What was said in the comments is correct: GraphQL (and hence SPQR, as it's merely a tool to hook the schema up) does not know anything about SQL, databases, JOINs or anything else. It's a communication protocol, the rest is up to you.
As for your situation, you'd have to inject the subselection into the resolver and pass it down to SQL. In the simplest case, it can look like this (in pseudo code):
public List<Book> books(@GraphQLEnvironment Set<String> fields) {
    // pass the requested field names further down to the data layer
    return database.query("SELECT " + String.join(", ", fields) + " FROM book");
}
You can inject ResolutionEnvironment using the same annotation, in case you need the full context.
After reading this documentation, I'm not sure whether to use a simple context, as I have done other times, or whether it is better to use dataSources to handle the database.
Is a DataSource the correct way to communicate with the database, or is it better to use it only to communicate with a REST API?
Basically, is there any advantage to using dataSources vs. context in this case?
I think it's better to go with a DataSource (as the name suggests), and it makes it easy to add a caching layer on top of it. You can create a DBDataSource class extending the DataSource class, since Apollo doesn't provide any DBDataSource class.
The DataSource class is the generic Apollo data source class, whereas the RESTDataSource class is responsible for fetching data from a REST API.
So to fetch data from REST APIs it's better to go with RESTDataSource.
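For example, here is a minimal REST-backed data source sketch, assuming the apollo-datasource-rest package (the UserAPI name and base URL are hypothetical):

import { RESTDataSource } from 'apollo-datasource-rest';

// RESTDataSource adds HTTP caching and request deduplication on top of plain fetches.
class UserAPI extends RESTDataSource {
  constructor() {
    super();
    this.baseURL = 'https://users.example.com/api/';
  }

  async getUser(id: string) {
    // Issues GET https://users.example.com/api/users/<id>
    return this.get(`users/${id}`);
  }
}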
Build a custom data source
Apollo doesn't have support for a SQL data source yet (although we'd love to help guide you if you're interested in contributing), so we will need to create a custom data source for our database by extending the generic Apollo data source class. You can create your own with the apollo-datasource package.
Here are some of the core concepts for creating your own data source:
The initialize method: You'll need to implement this method if you want to pass in any configuration options to your class. Here, we're using this method to access our graph API's context.
this.context: A graph API's context is an object that's shared among every resolver in a GraphQL request. We're going to explain this in more detail in the next section. Right now, all you need to know is that the context is useful for storing user information.
Caching: While the REST data source comes with its own built-in cache, the generic data source does not. You can use our cache primitives to build your own, however!
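Putting those pieces together, a custom database data source might look roughly like this; the store object and its findByPk call are assumptions standing in for whatever database client you actually use:

import { DataSource, DataSourceConfig } from 'apollo-datasource';
import { InMemoryLRUCache, KeyValueCache } from 'apollo-server-caching';

class UserDBDataSource extends DataSource<any> {
  private context: any;
  private cache: KeyValueCache = new InMemoryLRUCache();

  constructor(private store: any) {
    super();
  }

  // Apollo Server calls this per request with the request context (and cache, if configured).
  initialize(config: DataSourceConfig<any>) {
    this.context = config.context;
    if (config.cache) this.cache = config.cache;
  }

  async findUserById(id: string) {
    const cached = await this.cache.get(`user:${id}`);
    if (cached) return JSON.parse(cached);

    const user = await this.store.users.findByPk(id); // hypothetical database call
    await this.cache.set(`user:${id}`, JSON.stringify(user), { ttl: 60 });
    return user;
  }
}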
I generally keep resolvers very thin: I pass the incoming args to dataSources, and they make models and loaders communicate. Most of the logic and validation lives in models. You may of course make your own rules, like "don't call another dataSource inside a dataSource; pass their outputs around so they communicate through resolvers", etc. This is only an example, of course, not a strict rule I follow.
There may be better solutions, and for simple things using models directly is actually much more straightforward. But dataSources help you keep things organised, and if you want to add caching etc., you can create a BaseDataSource, put all the shared logic in that, and just extend it in your other dataSources.
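As an illustration of that last point (all names here are hypothetical), a shared base class can centralise the per-request context handling and any cross-cutting helpers, so the concrete dataSources stay small:

import { DataSource, DataSourceConfig } from 'apollo-datasource';

// Shared plumbing that every concrete dataSource inherits.
class BaseDataSource extends DataSource<any> {
  protected context: any;

  initialize(config: DataSourceConfig<any>) {
    this.context = config.context;
  }

  // Central place for cross-cutting concerns (logging, metrics, caching, ...).
  protected log(message: string) {
    console.log(`[${this.constructor.name}] ${message}`);
  }
}

class OrderDataSource extends BaseDataSource {
  async getOrdersForUser(userId: string) {
    this.log(`fetching orders for ${userId}`);
    // delegate to models/loaders here
    return [];
  }
}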
If you look at the documentation that you pointed out, you add dataSources when initializing Apollo Server, like this:
const server = new ApolloServer({
  typeDefs,
  dataSources: () => ({
    launchAPI: new LaunchAPI(),
    userAPI: new UserAPI({ store })
  })
});
and it is because of this that dataSources becomes part of the context. If you remember, you destructure the context to expose dataSources, as shown here:
module.exports = {
  Query: {
    launches: (_, __, { dataSources }) =>
      dataSources.launchAPI.getAllLaunches(),
    launch: (_, { id }, { dataSources }) =>
      dataSources.launchAPI.getLaunchById({ launchId: id }),
    me: (_, __, { dataSources }) => dataSources.userAPI.findOrCreateUser()
  }
};
If you want to access the instance of a dataSource such as UserAPI or LaunchAPI, you do that through dataSources.userAPI or dataSources.launchAPI.
Currently my app looks at a route parameter and the logged-in user (Principal.Identity) to authorize access to certain resources (e.g. add a student to your class [identity + class id]). However, if I'm not wrong, Breeze JS supports just one bulk save. It seems that I will have to open up each entity in the save bundle and run it through validation/authorization. That is fine,
but what I may lose is the nice separation of cross-cutting concerns outside my business logic (as a message handler, finding what roles the user has on the class) and the nice Authorize annotation feature (just say what roles are needed). So do I have to trade that off, or is there a better programming model that Breeze JS might suggest?
Update:
My question is more about how to separate the authorization logic (finding the assigned roles in a message handler + verifying that the required roles are present by adding an Authorize attribute to controller methods) from the business or data-access logic. Without Breeze, I would inspect the incoming message and its route parameters to fetch all the roles, and then annotate my PUT/POST/DELETE methods with the required roles. I cannot use this technique with Breeze (it's not Breeze's limitation; it's the trade-off when you go for bulk saves). So I wanted to know if there is a programming model or design pattern already used by the Breeze folks. There is something in Breeze's samples that overrides the context and uses the repository pattern; I will follow that for now.
Breeze can have as many 'save' endpoints as you want. For example, a hypothetical server implementation might be
[BreezeController]
public class MyController : ApiController {

    [HttpPost]
    [Authorize(...)]
    public SaveResult SaveCustomersAndOrders(JObject saveBundle) {
        // CheckCustomerAndOrders would be a custom method that validates your data
        ContextProvider.BeforeSaveEntitiesDelegate = CheckCustomerAndOrders;
        return ContextProvider.SaveChanges(saveBundle);
    }

    [HttpPost]
    [Authorize]
    public SaveResult SaveSuppliersAndProducts(JObject saveBundle) {
        ...
    }
}
You would call these endpoints like this
var so = new SaveOptions({ resourceName: "SaveWithFreight2", tag: "freight update" });

myEntityManager.saveChanges(customerAndOrderEntities, {
    resourceName: "SaveCustomersAndOrders"
}).then(...)
or
myEntityManager.saveChanges(supplierAndProductEntities, {
    resourceName: "SaveSuppliersAndProducts"
}).then(...)
Authorization is mediated via the [Authorize] attribute on each of the [HttpPost] methods. You can read more about the [Authorize] attribute here:
http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/
The proper way to do this, IMHO, is to separate the endpoint authorization from the database-action authorization.
First, create an entity that manages the grants per controller/method and role. For each method you have a value (allowed / not allowed) for the specific role. You create a special attribute (a subclass of Authorize) that you apply to your controllers (Breeze or plain Web API); it reads that data and decides whether the specific endpoint can be called for the user/role, otherwise it throws the Unauthorized exception.
On the Breeze (client) side, you extend the default ajax adapter settings with the authentication headers from the identity you received at login, something like this:
var origAjaxCtor = breeze.config.getAdapterInstance('ajax');
$.extend(true, origAjaxCtor.defaultSettings, Security.getAuthenticationHeaders());
On the server, add a second entity that manages the authorization for the CRUD operations. You need a table like (EntityName, AllowInsert, AllowUpdate, AllowDelete). Add a BeforeSave event on the context manager or on the ORM (EF or something else) that loops over all entities and applies the policy specified in the table above.
This way you have a clear separation of the endpoint logic from the backend CRUD logic.
In all cases the authorization logic should first be implemented server side and if needed should be pushed to the clients.
With the way Breeze is implemented, and with the above design, you should not need more than one save endpoint.
Hope it helps.
However, if I'm not wrong, Breeze JS supports just one bulk save.
That is entirely wrong. You have free rein to create your own save methods. Read the docs; it's all there.
What do you think about exposing domain entities through services? I tried it in an application, but I came to the conclusion that exposing the domain model to the client is not such a good idea.
Advantages:
Really easy to transport data to and from the client
(De)Serialization is really easy: just put Jackson on the classpath and it will handle it. No extra logic is needed.
No need to duplicate entity POJOs. At least in the early stages, the API resources will be pretty much the same as the domain model.
Disadvantages:
The API gets very tightly coupled to the model, and you can't change the model without affecting the API.
Partial responses. There are cases where you don't want to return all the fields of the entities, just some of them. How do you accomplish that?
So, let's take a REST example. The API declares that a GET on the user resource returns the following information:
GET /users/12

{
    "firstName": "John",
    "lastName": "Poe",
    "address": "my street"
}
Usually, I would create a User entity, a user service to return the user and a REST controller to serve the request like this:
#RequestMapping("/users/{id}")
public #ResponseBody User getUser(#PathVariable Long id) {
return userService.findById(id);
}
Should I avoid returning the User entity?
If yes, should I create another class and handle the mapping between this class and the entity myself?
Is there a pattern for this?
How do I accomplish partial expansion? (i.e. return only the firstName and lastName of the user)
P.S.: using @JsonFilter and ObjectMapper to accomplish partial responses seems too heavyweight to me, because you lose the beauty of Spring Data.