Extracting all children belonging to a specific parent in GraphQL

I am using GraphQL and Java. I need to extract all the children belonging to a specific parent. I have used the approach below, but it only fetches the parent and does not fetch any children.
schema {
    query: Query
}

type LearningResource {
    id: ID
    name: String
    type: String
    children: [LearningResource]
}

type Query {
    fetchLearningResource: LearningResource
}
@Component
public class LearningResourceDataFetcher implements DataFetcher {

    @Override
    public LearningResource get(DataFetchingEnvironment dataFetchingEnvironment) {
        LearningResource lr3 = new LearningResource();
        lr3.setId("id-03");
        lr3.setName("Resource-3");
        lr3.setType("Book");

        LearningResource lr2 = new LearningResource();
        lr2.setId("id-02");
        lr2.setName("Resource-2");
        lr2.setType("Paper");

        LearningResource lr1 = new LearningResource();
        lr1.setId("id-01");
        lr1.setName("Resource-1");
        lr1.setType("Paper");

        List<LearningResource> learningResources = new ArrayList<>();
        learningResources.add(lr2);
        learningResources.add(lr3);
        lr1.setChildren(learningResources);

        return lr1;
    }
}
return RuntimeWiring.newRuntimeWiring().type("Query", typeWiring -> typeWiring.dataFetcher("fetchLearningResource", learningResourceDataFetcher)).build();
My Controller endpoint
@RequestMapping(value = "/queryType", method = RequestMethod.POST)
public ResponseEntity query(@RequestBody String query) {
    System.out.println(query);
    ExecutionResult result = graphQL.execute(query);
    System.out.println(result.getErrors());
    System.out.println(result.getData().toString());
    return ResponseEntity.ok(result.getData());
}
My request would be like below
{
    fetchLearningResource
    {
        name
    }
}
Can anybody please help me sort this out?

Because I get asked this question a lot in real life, I'll answer it in detail here so people have an easier time googling (and I have something to point at).
As noted in the comments, the selection for each level has to be explicit; there is no notion of an infinitely recursive query like "get everything under a node down to the bottom" (or "get all children of this parent, recursively").
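For example, with the schema from the question, fetching two levels of children means spelling out the selection at each level explicitly (and going deeper means repeating it again):

{
    fetchLearningResource
    {
        name
        children
        {
            name
            type
            children
            {
                name
                type
            }
        }
    }
}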
The reason is mostly that allowing such recursive queries could easily put you in a dangerous situation: a user would be able to request the entire object graph from the server in one easy go! For any non-trivial data size, this would kill the server and saturate the network in no time. Additionally, what would happen once a recursive relationship is encountered?
Still, there is a semi-controlled escape-hatch you could use here. If the scope in which you need everything is limited (and it really should be), you could map the output type of a specific query as a (complex) scalar.
In your case, this would mean mapping LearningResource as a scalar. Then, fetchLearningResource would effectively be returning a JSON blob, where the blob would happen to be all the children and their children recursively. Query resolution doesn't descend deeper once a scalar field is reached, as scalars are leaf nodes, so it can't keep resolving the children level-by-level. This means you'd have to recursively fetch everything in one go, by yourself, as the GraphQL engine can't help you here. It also means sub-selections become impossible (as scalars can't have sub-selections - again, they're leaf nodes), so the client would always get all the children and all the fields from each child back. If you still need the ability to limit the selection in certain cases, you can expose two different queries, e.g. fetchLearningResource and fetchAllLearningResources, where the former would be mapped as it is now, and the latter would return the scalar as explained.
An object scalar implementation is provided by the graphql-java ExtendedScalars project.
The schema could then look like:
schema {
    query: Query
}

scalar Object

type Query {
    fetchLearningResource: Object
}
And you'd register the scalar implementation when building the runtime wiring:
RuntimeWiring.newRuntimeWiring()
    .scalar(ExtendedScalars.Object) // register the scalar impl
    .type("Query", typeWiring -> typeWiring.dataFetcher("fetchLearningResource", learningResourceDataFetcher))
    .build();
Depending on how you process the results of this query, the DataFetcher for fetchLearningResource may need to turn the resulting object into a map-of-maps (JSON-like object) before returning to the client. If you simply JSON-serialize the result anyway, you can likely skip this. Note that you're side-stepping all safety mechanisms here and must take care not to produce enormous results. By extension, if you need this in many places, you're very likely using a completely wrong technology for your problem.
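If you do need that conversion, here's a minimal sketch (my own illustration, not part of the original answer) that lets Jackson do the recursive POJO-to-map conversion; loadResourceTree() is a hypothetical helper standing in for the construction logic from the question's DataFetcher:

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import java.util.Collections;
import java.util.Map;

public class LearningResourceScalarFetcher implements DataFetcher<Map<String, Object>> {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public Map<String, Object> get(DataFetchingEnvironment environment) {
        LearningResource root = loadResourceTree();
        // convertValue walks the object graph recursively, so nested children
        // end up as nested maps inside the returned map-of-maps.
        return mapper.convertValue(root, new TypeReference<Map<String, Object>>() {});
    }

    // Hypothetical helper: builds the same lr1 tree as the question's DataFetcher.
    private LearningResource loadResourceTree() {
        LearningResource child = new LearningResource();
        child.setId("id-02");
        child.setName("Resource-2");
        child.setType("Paper");

        LearningResource root = new LearningResource();
        root.setId("id-01");
        root.setName("Resource-1");
        root.setType("Paper");
        root.setChildren(Collections.singletonList(child));
        return root;
    }
}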
I have not tested this with your code myself, so I might have skipped something important, but this should be enough to get you (or anyone googling) onto the right track (if you're sure this is the right track).
UPDATE: I've seen someone implement a custom Instrumentation that rewrites the query immediately after it's parsed, and adds all fields to the selection set if no field had already been selected, recursively. This effectively allows them to select everything implicitly.
In graphql-java v11 and prior, you could mutate the parsed query (represented by the Document class), but as of v12 this is no longer possible; instrumentations instead gain the ability to replace the Document explicitly via the new instrumentDocument method.
Of course, this only makes sense if your schema is such that it can not be exploited or you fully control the client so there's no danger. You could also only do it selectively for some types, but it would be extremely confusing to use.

Spring Webflux: efficiently using Flux and/or Mono stream multiple times (possible?)

I have the method below, where I am calling several ReactiveMongoRepositories in order to receive and process certain documents. Since I am kind of new to Webflux, I am learning as I go.
The code below doesn't feel very efficient to me, as I am opening multiple streams at the same time. This non-blocking style of writing code somehow makes it complicated to get a value from a stream and reuse that value in the cascaded flatMaps further down the line.
In the example below I have to call the userRepository twice, since I need the user at the beginning and then again later. Is there a way to do this more efficiently with Webflux?
public Mono<Guideline> addGuideline(Guideline guideline, String keycloakUserId) {
    Mono<Guideline> guidelineMono = userRepository.findByKeycloakUserId(keycloakUserId)
        .flatMap(user -> {
            return teamRepository.findUserInTeams(user.get_id());
        })
        .zipWith(instructionRepository.findById(guideline.getInstructionId()))
        .zipWith(userRepository.findByKeycloakUserId(keycloakUserId))
        .flatMap(objects -> {
            User user = objects.getT2();
            Instruction instruction = objects.getT1().getT2();
            Team team = objects.getT1().getT1();
            if (instruction.getTeamId().equals(team.get_id())) {
                guideline.setAddedByUser(user.get_id());
                guideline.setTeamId(team.get_id());
                guideline.setDateAdded(new Date());
                guideline.setGuidelineStatus(GuidelineStatus.ACTIVE);
                guideline.setGuidelineSteps(Arrays.asList());
                return guidelineRepository.save(guideline);
            } else {
                return Mono.error(new InstructionDoesntBelongOrExistException("Unable to add, since this Instruction does not belong to you or doesn't exist anymore!"));
            }
        });
    return guidelineMono;
}
I'll post my earlier comment as an answer. If anyone feels like writing the correct code for it, go ahead.
I don't have access to an IDE at the moment, so I can't write an example, but you could start by fetching the Instruction from the database.
Keep that Mono<Instruction>. Then fetch your User, flatMap it, and fetch the Team from the database. Then flatMap the Team and build a Mono<Tuple2<User, Team>>.
After that, take your two Monos and use zipWith with a combinator function to build a Mono<Tuple3<User, Team, Instruction>> that you can flatMap over.
So basically: fetch one item, then fetch two items, then combine into three items. You can create tuples using the Tuples.of(...) function.
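A rough sketch of that approach (untested; it assumes the same repository and domain types as in the question, and Tuple2/Tuple3/Tuples come from reactor.util.function):

public Mono<Guideline> addGuideline(Guideline guideline, String keycloakUserId) {
    // Fetch the Instruction once, independently of the user.
    Mono<Instruction> instructionMono = instructionRepository.findById(guideline.getInstructionId());

    // Fetch the User once, then the Team that depends on it, carrying both forward as a tuple.
    Mono<Tuple2<User, Team>> userAndTeamMono = userRepository.findByKeycloakUserId(keycloakUserId)
        .flatMap(user -> teamRepository.findUserInTeams(user.get_id())
            .map(team -> Tuples.of(user, team)));

    // Combine everything into a Tuple3<User, Team, Instruction> and resolve the guideline.
    return userAndTeamMono
        .zipWith(instructionMono, (userAndTeam, instruction) ->
            Tuples.of(userAndTeam.getT1(), userAndTeam.getT2(), instruction))
        .flatMap(t -> {
            User user = t.getT1();
            Team team = t.getT2();
            Instruction instruction = t.getT3();
            if (!instruction.getTeamId().equals(team.get_id())) {
                return Mono.error(new InstructionDoesntBelongOrExistException(
                    "Unable to add, since this Instruction does not belong to you or doesn't exist anymore!"));
            }
            guideline.setAddedByUser(user.get_id());
            guideline.setTeamId(team.get_id());
            guideline.setDateAdded(new Date());
            guideline.setGuidelineStatus(GuidelineStatus.ACTIVE);
            guideline.setGuidelineSteps(Arrays.asList());
            return guidelineRepository.save(guideline);
        });
}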

Why does using Linq's .Select() return IEnumerable<dynamic> even though the type is clearly defined?

I'm using Dapper to return dynamic objects and sometimes mapping them manually. Everything's working fine, but I was wondering what the laws of casting were and why the following examples hold true.
(for these examples I used 'StringBuilder' as my known type, though it is usually something like 'Product')
Example1: Why does this return an IEnumerable<dynamic> even though 'makeStringBuilder' clearly returns a StringBuilder object?
Example2: Why does this build, but 'Example1' wouldn't if it was IEnumerable<StringBuilder>?
Example3: Same question as Example2?
private void test()
{
    List<dynamic> dynamicObjects = {Some list of dynamic objects};
    IEnumerable<dynamic> example1 = dynamicObjects.Select(s => makeStringBuilder(s));
    IEnumerable<StringBuilder> example2 = dynamicObjects.Select(s => (StringBuilder)makeStringBuilder(s));
    IEnumerable<StringBuilder> example3 = dynamicObjects.Select(s => makeStringBuilder(s)).Cast<StringBuilder>();
}

private StringBuilder makeStringBuilder(dynamic s)
{
    return new StringBuilder(s);
}
With the above examples, is there a recommended way of handling this? And does casting like this hurt performance? Thanks!
When you use dynamic, even as a parameter, the entire expression is handled via dynamic binding, so it ends up being "dynamic" at compile time (since binding is based on its run-time type). This is covered in section 7.2.2 of the C# spec:
However, if an expression is a dynamic expression (i.e. has the type dynamic) this indicates that any binding that it participates in should be based on its run-time type (i.e. the actual type of the object it denotes at run-time) rather than the type it has at compile-time. The binding of such an operation is therefore deferred until the time where the operation is to be executed during the running of the program. This is referred to as dynamic binding.
In your case, using the cast will safely convert this to an IEnumerable<StringBuilder>, and should have very little impact on performance. The example2 version is very slightly more efficient than the example3 version, but both have very little overhead when used in this way.
While I can't speak very well to the "why", I think you should be able to write example1 as:
IEnumerable<StringBuilder> example1 = dynamicObjects.Select<dynamic, StringBuilder>(s => makeStringBuilder(s));
You need to tell the compiler what type the projection should take, though I'm sure someone else can clarify why it can't infer the correct type. But I believe by specifying the projection type, you can avoid having to actually cast, which should yield some performance benefit.

How can I intercept the result of an IQueryProvider query (other than single result)

I'm using Entity Framework and I have a custom IQueryProvider. I use the Execute method so that I can modify the result (a POCO) of a query after it has been executed. I want to do the same for collections. The problem is that the Execute method is only called for single results.
As described on MSDN:
The Execute method executes queries that return a single value
(instead of an enumerable sequence of values). Expression trees that
represent queries that return enumerable results are executed when
their associated IQueryable object is enumerated.
Is there another way to accomplish what I want that I missed?
I know I could write a specific method inside a repository or whatever but I want to apply this to all possible queries.
It is true that the actual signatures are:
public object Execute(Expression expression)
public TResult Execute<TResult>(Expression expression)
However, that does not mean that the TResult will always be a single element! It is the type expected to be returned from the expression.
Also, note that there are no constraints over the TResult, not even 'class' or 'new()'.
TResult is MyObject when your expression yields a single result, like .FirstOrDefault(). However, TResult can also be a double when you .Average() over the query, and it can be IEnumerable<MyObject> when your query is a plain .Select(...).Where(...).
Proof(*) - I've just set a breakpoint inside my Execute() implementation, and I've inspected it with Watches:
typeof(TResult).FullName "System.Collections.Generic.IEnumerable`1[[xxxxxx,xxxxx]]"
expression.Type.FullName "System.Linq.IQueryable`1[[xxxxxx,xxxxx]]"
I admit that three overloads (one object, one TResult, and one IEnumerable<TResult>) would probably be more readable. I think they did not provide all three in order to keep the interface an extensibility point for the future. I can imagine that if, in the future, they came up with something more robust than IEnumerable, they would have needed to add yet another overload, and so on. Kept simple like this, the interface can process any type.
Oh, see, we now also have IQueryable in addition to IEnumerable, so it would need at least four overloads:)
The proof is marked with (*) because I had a small bug/feature in my IQueryProvider's code that was obscuring the real behavior of LINQ.
LINQ indeed calls the generic Execute only for singular cases. This is a shortcut, an optimization.
For all other cases, it... doesn't call Execute() at all.
For all those other cases, LINQ calls .GetEnumerator() on your custom IQueryable<> implementation, so what happens is dictated simply by what you wrote there. I mean, assuming that you actually provided a custom implementation of IQueryable. It would be strange if you did not - it's just about 15 lines in total, nothing compared to the length of a custom provider.
In the project I took the "proof" from, my implementation looks like this:
public System.Collections.IEnumerator GetEnumerator()
{
    return Provider.Execute<IEnumerable>( this.Expression ).GetEnumerator();
}

public IEnumerator<TOut> GetEnumerator()
{
    return Provider.Execute<IEnumerable<TOut>>( this.Expression ).GetEnumerator();
}
Of course, one of them would have to be implemented explicitly due to the name collision. Please note that to fetch the enumerator, I actually call Execute with an explicitly stated TResult. This is why those types showed up in my "proof".
I think that you see the "TResult = single element" case because you wrote something like this:
public IEnumerator<TOut> GetEnumerator()
{
    return Provider.Execute<TOut>( this.Expression ).GetEnumerator();
}
That really leaves your Execute implementation without a choice: it must return a single element. IMHO, this is just a bug in your code. You could have done it like in my example above, or you could simply use the untyped Execute:
public System.Collections.IEnumerator GetEnumerator()
{
    return ((IEnumerable)Provider.Execute( this.Expression )).GetEnumerator();
}

public IEnumerator<TOut> GetEnumerator()
{
    return ((IEnumerable<TOut>)Provider.Execute( this.Expression )).GetEnumerator();
}
Of course, your implementation of Execute must make sure to return proper IEnumerables for such queries!
Expression trees that represent queries that return enumerable results are executed when their associated IQueryable object is enumerated.
I recommend enumerating your query:
foreach (T t in query)
{
    CustomModification(t);
}
Your IQueryProvider must implement CreateQuery<T>. You get to choose the implementation of the resulting IQueryable. If you want that IQueryable to do something to each row when enumerated, you get to write that implementation.
The final answer is that it's not possible.

Doesn't IQueryable.Expression = Expression.Constant(this); cause an infinite loop?

Expression trees represent code in a tree-like data structure, where each node is an expression. In LINQ they are used by LINQ providers, which translate them into the native language of the target data source.
I know very little about expression trees, but I've been reading the following code, where the author uses Expression.Constant(this) to describe the initial query. Thus, according to the author, Expression.Constant(this) should enable the provider to retrieve the initial sequence of elements for someQuery.
But to my understanding this should instead cause an infinite loop, since the expression tree in someQuery.Expression describes the someQuery object and not the details of the query itself (or, to put it differently, if the target platform is an SQL database, Expression.Constant(this) doesn't describe in non-SQL terms which rows or tables the query should retrieve). Thus, when the provider looks into someQuery.Expression, it will only find a description D of the someQuery object. And if it further inspects the details of D's Expression property, it again finds a description of the someQuery object, and so on - hence an infinite loop:
public class Query<T> : IQueryable<T>...
{
    QueryProvider provider;
    Expression expression;

    public Query(QueryProvider provider) {
        this.provider = provider;
        this.expression = Expression.Constant(this);
    }
    ...
}
Query<string> someQuery = new Query<string>();
thank you
I'd expect the provider to have knowledge of the query type and know that when it hit a constant expression of type Query<T>, it had hit a leaf, effectively. Sooner or later, the provider has to get to something describing "the whole table" or an equivalent. Of course the Query<T> would need the information about which table etc, but in a full example I'd expect it to have that information.
(Out of interest, am I the author in question? I wrote something very similar in C# in Depth...)

Gson, How to write a JsonDeserializer for Generic Typed Classes?

Situation
I have a class that holds a generic type, and it also has a non-zero-arg constructor. I don't want to expose a zero-arg constructor because it can lead to erroneous data.
public class Geometries<T extends AbstractGeometry> {
    private final GeometryType geometryType;
    private Collection<T> geometries;

    public Geometries(Class<T> classOfT) {
        this.geometryType = lookup(classOfT); // strict typing.
    }
}
There are several (known and final) classes that may extend AbstractGeometry.
public final class Point extends AbstractGeometry { .... }
public final class Polygon extends AbstractGeometry { .... }
Example JSON:
{
    "geometryType" : "point",
    "geometries" : [
        { ...contents differ... hence AbstractGeometry },
        { ...contents differ... hence AbstractGeometry },
        { ...contents differ... hence AbstractGeometry }
    ]
}
Question
How can I write a JsonDeserializer that will deserialize a generically typed class (such as Geometries)?
CHEERS :)
p.s. I don't believe I need a JsonSerializer, this should work out of the box :)
Note: This answer was based on the first version of the question. The edits and subsequent question(s) change things.
p.s. I don't believe I need a JsonSerializer, this should work out of the box :)
That's not the case at all. The JSON example you posted does not match the Java class structure you apparently want to bind to and generate.
If you want JSON like that from Java like that, you'll definitely need custom serialization processing.
The JSON structure is
an object with two elements
element 1 is a string named "geometryType"
element 2 is an object named "geometries", with differing elements based on type
The Java structure is
an object with two fields
field 1, named "geometryType", is a complex type GeometryType
field 2, named "geometries" is a Collection of AbstractGeometry objects
Major Differences:
JSON string does not match Java type GeometryType
JSON object does not match Java type Collection
Given this Java structure, a matching JSON structure would be
an object with two elements
element 1, named "geometryType", is a complex object, with elements matching the fields in GeometryType
element 2, named "geometries", is a collection of objects, where the elements of the different objects in the collection differ based on specific AbstractGeometry types
Are you sure that what you posted is really what you intended? I'm guessing that either or both of the structures should be changed.
Regarding any question on polymorphic deserialization, please note that the issue was discussed a few times on StackOverflow.com already. I posted a link to four different such questions and answers (some with code examples) at Can I instantiate a superclass and have a particular subclass be instantiated based on the parameters supplied.
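For what it's worth, a minimal sketch of such a deserializer for the class in the question (my own illustration, not taken from the linked answers) could look like the following. It assumes Geometries exposes some way to accept the parsed collection - the setGeometries(...) call below is hypothetical, since no such method appears in the question:

import com.google.gson.*;
import com.google.gson.reflect.TypeToken;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.Collection;

public class GeometriesDeserializer<T extends AbstractGeometry> implements JsonDeserializer<Geometries<T>> {

    private final Class<T> classOfT;

    public GeometriesDeserializer(Class<T> classOfT) {
        this.classOfT = classOfT;
    }

    @Override
    public Geometries<T> deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) {
        JsonObject root = json.getAsJsonObject();

        // The container derives its GeometryType from classOfT (strict typing, as in the
        // question), so the "geometryType" element in the JSON is not needed here.
        Geometries<T> result = new Geometries<>(classOfT);

        Collection<T> geometries = new ArrayList<>();
        for (JsonElement element : root.getAsJsonArray("geometries")) {
            // Delegate each array element back to Gson, using the concrete class.
            geometries.add(context.deserialize(element, classOfT));
        }
        result.setGeometries(geometries); // hypothetical setter, see note above
        return result;
    }
}

// Registration, one adapter per concrete geometry type:
Gson gson = new GsonBuilder()
    .registerTypeAdapter(new TypeToken<Geometries<Point>>() {}.getType(),
                         new GeometriesDeserializer<>(Point.class))
    .create();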
