How to access the complete runtime GraphQLSchema object with Netflix DGS?

I need to access the complete GraphQLSchema object outside of GraphQL request handling. When I used graphql-java directly I had full control and could get at the schema easily. Now I need to accomplish the same with Netflix DGS and can't find a way to do so (and to keep up with runtime schema changes/reloading later).
For more context, I need this for a few things: one is to create a complete, downloadable SDL version of the schema (i.e. not the same as federation's _service { sdl }), and another is to gather and expose some directive-driven metadata differently, since I can't introspect directives :( from the client...

You can use DgsDataFetchingEnvironment:
@DgsQuery
public Student getStudent(DgsDataFetchingEnvironment dfe) {
    // The complete runtime schema is available from the environment
    GraphQLSchema schema = dfe.getGraphQLSchema();
    // If you want to know which fields were selected in the query
    DataFetchingFieldSelectionSet fields = dfe.getSelectionSet();
    // ... resolve and return the Student as usual
}
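For the downloadable-SDL part of the question, graphql-java's SchemaPrinter can serialize whatever GraphQLSchema you get hold of, directives included. Below is a minimal sketch; the controller and the assumption that the built GraphQLSchema is injectable in your setup are mine (in DGS you may instead need to capture the schema once, e.g. from a data fetcher as above, or obtain it from the framework's schema provider):
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.SchemaPrinter;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SdlDownloadController {

    // Assumption: the built schema is available for injection in your setup
    private final GraphQLSchema schema;

    public SdlDownloadController(GraphQLSchema schema) {
        this.schema = schema;
    }

    // Prints the complete SDL, directives included (unlike federation's _service { sdl })
    @GetMapping(value = "/schema.graphql", produces = "text/plain")
    public String sdl() {
        SchemaPrinter printer = new SchemaPrinter(
                SchemaPrinter.Options.defaultOptions().includeDirectives(true));
        return printer.print(schema);
    }
}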

Related

GraphQL SPQR fetches all fields on the server side

I'm new to Spring Boot and I just started using graphql-spqr for Spring Boot since it allows for easy bootstrapping of Java projects.
However, as per my understanding, GraphQL basically allows fetching only selected fields from the database. In the examples I've seen, this selection happens on the client side in the graphql-spqr library. Is there a way to do the selection both client-side and server-side so as to speed up the queries?
I've looked into EntityGraph examples for GraphQL but they are mostly implemented for complex queries that involve JOINs. However, nothing exists for simple queries like findAll(), findById() etc.
I would like to use findAll() with the server fetching only the fields as requested by the client. How can I do that?
What was said in the comments is correct: GraphQL (and hence SPQR, as it's merely a tool to hook the schema up) does not know anything about SQL, databases, JOINs or anything else. It's a communication protocol, the rest is up to you.
As for your situation, you'd have to inject the subselection into the resolver and pass it down to SQL. In the simplest case, it can look like this (in pseudo code):
public List<Book> books(@GraphQLEnvironment Set<String> fields) {
    // Pass the requested field names down to the query
    // (pseudocode: sanitize the names, a raw concatenation is SQL-injectable)
    return database.query("SELECT " + String.join(", ", fields) + " FROM book");
}
You can inject ResolutionEnvironment using the same annotation, in case you need the full context.
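For example, a sketch along the lines of the pseudocode above (ResolutionEnvironment and @GraphQLEnvironment are SPQR's types; verify the field access below against your SPQR version):
public List<Book> books(@GraphQLEnvironment ResolutionEnvironment env) {
    // The full resolution context, including the underlying graphql-java environment
    DataFetchingEnvironment dfe = env.dataFetchingEnvironment;
    // e.g. derive the requested column names from the selection set
    Set<String> fields = dfe.getSelectionSet().getFields().stream()
            .map(SelectedField::getName)
            .collect(Collectors.toSet());
    return database.query("SELECT " + String.join(", ", fields) + " FROM book");
}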

Mutation Input Split Across Two Apollo GraphQL Federated Services

I have a mutation in my Graph that uses two datasources. We are migrating one of the datasources out to a federated service. That datasource in the federated service uses a portion of the input from the mutation being called. The originating service also uses a portion of the input. For example:
mutation($verifyUserInput: VerifyUserInput!) {
  verifyUser(input: $verifyUserInput) {
    user {
      specialId
    }
    otherServiceField
  }
}
I need to pass the part of the VerifyUserInput that is not the PK for the entity to the other service. I can't find the input in the __resolveReference reference argument (obviously, since the reference only carries the __typename and the PK), nor in the context or info args. Is the original input to the mutation available in the federated service? If so, how can I retrieve it?
I ended up sniffing for the exact input I needed on every request in the gateway and adding that value to a header so it could propagate across the services. This definitely feels like a hack. I assume there is a more correct way to do this in Apollo Federation.

Apollo Server: dataSources vs context to use it with database

After seeing this documentation, I'm not sure whether to use a simple context, as I have done other times, or whether it is better to use dataSources to handle the database.
Is a DataSource the correct way to communicate with the database, or is it better to use it only to communicate with a REST API?
Basically, is there any advantage to using dataSources vs the context in this case?
I think it's better to go with a DataSource (as the name suggests), and it makes it easy to add a caching layer on top of it. You can create a DBDataSource class extending the DataSource class, since Apollo doesn't provide any DBDataSource class.
DataSource is the generic Apollo data source class, whereas RESTDataSource is the class responsible for fetching data from a REST API.
So, to fetch data from REST APIs, it's better to go with RESTDataSource.
Build a custom data source
Apollo doesn't have support for a SQL data source yet (although we'd love to help guide you if you're interested in contributing), so we will need to create a custom data source for our database by extending the generic Apollo data source class. You can create your own with the apollo-datasource package.
Here are some of the core concepts for creating your own data source:
The initialize method: You'll need to implement this method if you want to pass in any configuration options to your class. Here, we're using this method to access our graph API's context.
this.context: A graph API's context is an object that's shared among every resolver in a GraphQL request. We're going to explain this in more detail in the next section. Right now, all you need to know is that the context is useful for storing user information.
Caching: While the REST data source comes with its own built-in cache, the generic data source does not. You can use our cache primitives to build your own, however!
I generally keep resolvers very thin: pass the incoming args to dataSources and let them make models and loaders communicate. Most of the logic and validations live in models. You may of course make your own rules, like "don't call another dataSource inside a dataSource; pass their outputs to make them communicate through resolvers", etc. This is only an example, of course, not a strict rule I follow.
There may be better solutions, and actually for simple things using models directly is much more straightforward. But dataSources help you keep things organised, and if you want to add caching etc., you can create a BaseDataSource, put all the shared logic in that, and just extend it with other dataSources.
If you look at the documentation you pointed out, you add dataSources when initializing Apollo Server like this:
const server = new ApolloServer({
  typeDefs,
  dataSources: () => ({
    launchAPI: new LaunchAPI(),
    userAPI: new UserAPI({ store })
  })
});
It is because of this that dataSources becomes part of the context. If you remember, you destructure the context to expose dataSources, as shown here:
module.exports = {
  Query: {
    launches: (_, __, { dataSources }) =>
      dataSources.launchAPI.getAllLaunches(),
    launch: (_, { id }, { dataSources }) =>
      dataSources.launchAPI.getLaunchById({ launchId: id }),
    me: (_, __, { dataSources }) => dataSources.userAPI.findOrCreateUser()
  }
};
If you want to access the instance of a dataSource such as UserAPI or LaunchAPI, you will have to do that via dataSources.userAPI or dataSources.launchAPI.

Relax Security for a Spring Data REST Projection

I have a User class and I want to authorize access such that only a user gets to see what he is entitled to.
This was easily achievable using Spring Security in conjunction with Spring Data REST, where in the JPA repository I did the following:
public interface UserRepository extends JpaRepository<User, Integer> {
    @PreAuthorize("hasRole('LOGGED_IN') and principal.user.id == #id")
    User findOne(@Param("id") Integer id);
}
This way, when a user visits Spring Data REST scaffolded URLs like
/users/{id}
/users/{id}/userPosts
only the user logged in with that {id} gets to see them; everyone else gets a 401, just as I wanted.
My problem is that one of my projections is a public view of each user. I am creating it using Spring Data REST projections as below, and I want it to be accessible for every {id}:
@Projection(name = "details", types = User.class)
public interface UserDetailsProjection {
    ..
}
So /users/{id1}?projection=details as well as /users/{id2}?projection=details should give 200 OK and show data, even though the user is logged in as {id1}.
I began implementing this by marking the projection with @PreAuthorize("permitAll"), but that won't work since the repository has the stricter security check. Can we have this functionality, where security can be relaxed for a projection?
I am using the latest Spring Data REST and Spring Security distributions.
Seems reasonable to add a custom controller for this use-case.
Please also consider:
Evaluate access in projections using @Value annotations
Add another entity for the same database data but with a different field set for read-only operations, e.g. using inheritance (be careful with caching, etc.); this depends on your data storage type
Modify model to split User entity into two different entities (profile, account) since they seem to have different access and possibly even operations
You can also add a ResourceProcessor<UserSummaryProjection> to evaluate access programmatically and replace the resource content (projection) with a DTO (see the sketch after the example below)
Example of evaluating access in projections with @Value annotations:
@Projection(types = User.class, name = "summary")
public interface UserSummaryProjection {
    @Value("#{@userSecurity.canReadEmail(target) ? target.email : null}")
    String getEmail();
}
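And a sketch of the ResourceProcessor route mentioned in the list above (using the pre-1.0 Spring HATEOAS Resource/ResourceProcessor types; the no-op body is a placeholder for your access logic, and userSecurity would be the same illustrative bean as in the @Value example):
import org.springframework.hateoas.Resource;
import org.springframework.hateoas.ResourceProcessor;
import org.springframework.stereotype.Component;

@Component
public class UserSummaryProcessor implements ResourceProcessor<Resource<UserSummaryProjection>> {

    @Override
    public Resource<UserSummaryProjection> process(Resource<UserSummaryProjection> resource) {
        // Evaluate access programmatically here and, when not permitted,
        // return a Resource wrapping a reduced DTO instead of the projection.
        return resource;
    }
}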
Adding Spring Security code to the data access layer is not a good idea. I would suggest you add the @PreAuthorize annotation to the controller/service method instead. Since you have a query parameter, ?projection=details, you can have a separate controller/service method for the details projection.
Add the following to your details projection method:
@RequestMapping("/url", params = {"projection"})
@PreAuthorize("hasRole('LOGGED_IN') and principal.user.id == #id")

Is it sometimes okay to use service locator pattern in a domain class?

This question may be more appropriate for the Programmers stack. If so, I will move it. However I think I may get more answers here.
So far, all interface dependencies in my domain are resolved using DI from the executing assembly, which for now is a .NET MVC3 project (+ Unity IoC container). However, I've run across a scenario where I think a service locator may be a better choice.
There is an entity in the domain that stores (caches) content from a URL. Specifically, it stores SAML2 EntityDescriptor XML from a metadata URL. I have an interface IConsumeHttp with a single method:
public interface IConsumeHttp
{
    string Get(string url);
}
The current implementation uses the WebRequest class from System.Net (via its static Create factory):
public class WebRequestHttpConsumer : IConsumeHttp
{
    public string Get(string url)
    {
        string content = null;
        var request = WebRequest.Create(url);
        var response = request.GetResponse();
        var stream = response.GetResponseStream();
        if (stream != null)
        {
            var reader = new StreamReader(stream);
            content = reader.ReadToEnd();
            reader.Close();
            stream.Close();
        }
        response.Close();
        return content;
    }
}
The entity which caches the XML content exists as a non-root in a much larger entity aggregate. For the rest of the aggregate, I am implementing a somewhat large Facade pattern, which is the public endpoint for the MVC controllers. I could inject the IConsumeHttp dependency in the facade constructor like so:
public AnAggregateFacade(IDataContext dataContext, IConsumeHttp httpClient)
{
    ...
The issue I see with this is that only one method in the facade has a dependency on this interface, so it seems silly to inject it for the whole facade. Object creation of the WebRequestHttpConsumer class shouldn't add a lot of overhead, but the domain is unaware of this.
I am instead considering moving all of the caching logic for the entity out into a separate static factory class. Still, the code will depend on IConsumeHttp. So I'm thinking of using a static service locator within the static factory method to resolve IConsumeHttp, but only when the cached XML needs to be initialized or refreshed.
My question: Is this a bad idea? It does seem to me that it should be the domain's responsibility to make sure the XML metadata is appropriately cached. The domain does this periodically as part of other related operations (such as getting metadata for SAML Authn requests & responses, updating the SAML EntityID or Metadata URL, etc). Or am I just worrying about it too much?
It does seem to me that it should be the domain's responsibility to make sure the XML metadata is appropriately cached
I'm not sure about that, unless your domain is really about metadata manipulation, http requests and so on. For a "normal" application with a non-technical domain, I'd rather deal with caching concerns in the Infrastructure/Technical Services layer.
The issue I see with this is that only one method in the facade has a dependency on this interface, so it seems silly to inject it for the whole facade
Obviously, Facades usually don't lend themselves very well to constructor injection since they naturally tend to point to many dependencies. You could consider other types of injection or, as you pointed out, using a locator. But what I'd personally do is ask myself whether a Facade is really appropriate, and consider using finer-grained objects instead of the same large interface in all of my controllers. This would allow for more modularity and ad-hoc injection rather than inflating a massive object upfront.
But that may just be because I'm not a big Facade fan ;)
In your comment to @ian31, you mention "It seems like making the controller ensure the domain has the correct XML is too granular, giving the client too much responsibility". For this reason, I'd prefer the controller asks its service/repository (which can implement the caching layer) for the correct & current XML. To me, this responsibility is a lot to ask of the domain entity.
However, if you're OK with the responsibilities you've outlined, and you mention the object creation isn't much overhead, I think leaving the IConsumeHttp in the entity is fine.
Sticking with this responsibility, another approach could be to move this interface down into a child entity. If that's possible in your case, at least the dependency is confined to the scenario that requires it.
