Both directives can hide a field. When include is false, the field is omitted exactly as when skip is true, so what makes them different?
According to the spec, there is no real difference -- both directives can be used to prevent a field from being resolved. Under the hood, the only difference is that if both skip and include exist on a field, the skip logic will be evaluated first (i.e. if skip is true, the field will always be omitted regardless of the value of include).
There is no preference between the two. Having both directives allows you to reuse the same variable for both cases where you want to include or exclude different fields. It also makes queries easier to read and reason about.
For example, if you had a schema like this:
type Query {
  pet: Pet
}

type Pet {
  # other fields
  numberLitterBoxes: Int
  numberDogHouses: Int
}
Having both directives allows you to reduce the number of variables that have to be included with your request. For example, you can query:
query ExampleQuery($isCat: Boolean!) {
  pet {
    numberLitterBoxes @include(if: $isCat)
    numberDogHouses @skip(if: $isCat)
  }
}
If you only had one directive or the other, the above query would require you to pass in two variables (isCat and isNotCat).
The only difference arises when you apply both directives to the same field: skip takes precedence. The implementations of the two directives look quite similar.
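As a sketch of that precedence rule (reusing the Pet schema from the example above), a field carrying both directives is only resolved when skip evaluates to false and include evaluates to true:

query ($skipIt: Boolean!, $includeIt: Boolean!) {
  pet {
    numberDogHouses @skip(if: $skipIt) @include(if: $includeIt)
  }
}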
I have a struct such as
type Info struct {
    Foo      string
    FooBar   string
    Services string
    Clown    string
}
and let's say I have already populated two of the fields:
input := &Info{
    Foo:      "true",
    Services: "Massage",
}
Is there a way to "reopen" the struct to add the missing fields? Something like this:

input = {
    input,
    FooBar: "Spaghetti",
    Clown:  "Carroussel",
}
Instead of:

input.FooBar = "Spaghetti"
input.Clown = "Carroussel"
I have many fields and just don't really like writing a long list of input.Field = ... lines. I didn't find anything like this, so I was wondering.
No, this is not supported by the language's syntax.
Btw, the solution you want to avoid consists of fewer lines than the theoretical alternative :) (2 lines vs. 4 lines in your example).
One could create a helper function which copies non-zero fields from one instance of a struct to another; you could then build a value holding just the additional fields with a composite literal and use it as the source. However, this requires reflection, which is slow, and the result wouldn't be more readable.
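For illustration only, a minimal sketch of such a helper (mergeNonZero is a made-up name here, not a library function) could look like this:

package main

import (
    "fmt"
    "reflect"
)

type Info struct {
    Foo      string
    FooBar   string
    Services string
    Clown    string
}

// mergeNonZero copies every non-zero exported field of src into dst.
// Both arguments must be non-nil pointers to the same struct type.
func mergeNonZero(dst, src interface{}) {
    d := reflect.ValueOf(dst).Elem()
    s := reflect.ValueOf(src).Elem()
    for i := 0; i < s.NumField(); i++ {
        if !s.Field(i).IsZero() {
            d.Field(i).Set(s.Field(i))
        }
    }
}

func main() {
    input := &Info{Foo: "true", Services: "Massage"}
    mergeNonZero(input, &Info{FooBar: "Spaghetti", Clown: "Carroussel"})
    fmt.Printf("%+v\n", *input) // {Foo:true FooBar:Spaghetti Services:Massage Clown:Carroussel}
}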
See related: "Merge" fields two structs of same type
I have to produce a proto class for an object which will have around 12 variations. All 12 variations share the same four fields and then have their own specific fields. In most cases there will be many more specific fields than common fields.
I was wondering what would be the most performant way to achieve this.
First option: defining the common fields in a common proto class and then declaring a field of this type in all the specific types:
message CommonFields {
    // common_field1
    // ... common_fieldN
}

message SpecificType1 {
    CommonFields common = 1;
    // specific fields...
}
Or would it be better to define one top level proto which contains the fields, and then having a oneof field, which can refer to another type containing the specific fields:
message BaseType {
    // common_field_1
    // ... common_field_N
    oneof specific_fields {
        SpecificTypeFields1 type1_fields = N;
        SpecificTypeFields2 type2_fields = N+1;
    }
}

message SpecificTypeFields1 {
    // specific fields...
}

message SpecificTypeFields2 {
    // specific fields...
}
I'm particularly interested in performance, and also in convention, or whether there are more typical approaches, such as simply repeating the common fields. Bear in mind, though, that my protos will only have 4 common fields and typically 3-8 specific ones.
Depending on the protobuf library, there is usually some performance penalty for encoding submessages. For most libraries, such as Google's own protobuf libraries, the difference is very small. With either of your options you end up encoding one submessage per message, further reducing the impact.
I have seen both formats commonly used. If the decoder side already knows the message type (e.g. from the RPC method name), the aggregation approach is usually easier to implement, as it doesn't require separately checking the oneof type.
However, if the message type is not known up front, the oneof approach is better, as it allows easy detection of the type.
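For instance, a rough Go sketch of that detection (this assumes code generated by protoc-gen-go from the BaseType message above; the wrapper type names follow the generator's usual conventions but are not taken from a real build):

// Hypothetical: pb is the package generated from the .proto above.
func dispatch(msg *pb.BaseType) {
    switch fields := msg.GetSpecificFields().(type) {
    case *pb.BaseType_Type1Fields:
        // the message carries SpecificTypeFields1
        handleType1(fields.Type1Fields)
    case *pb.BaseType_Type2Fields:
        // the message carries SpecificTypeFields2
        handleType2(fields.Type2Fields)
    }
}

Here handleType1 and handleType2 are placeholders for whatever per-type processing is needed.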
As the title says, I'd appreciate it if somebody could explain the usage of this class. There's a nested enum Type inside it; how is it used?
public static enum Type {

    BETWEEN(2, "IsBetween", "Between"),
    IS_NOT_NULL(0, "IsNotNull", "NotNull"), IS_NULL(0, "IsNull", "Null"),
    LESS_THAN("IsLessThan", "LessThan"), LESS_THAN_EQUAL("IsLessThanEqual", "LessThanEqual"),
    GREATER_THAN("IsGreaterThan", "GreaterThan"), GREATER_THAN_EQUAL("IsGreaterThanEqual", "GreaterThanEqual"),
    BEFORE("IsBefore", "Before"), AFTER("IsAfter", "After"),
    NOT_LIKE("IsNotLike", "NotLike"), LIKE("IsLike", "Like"),
    STARTING_WITH("IsStartingWith", "StartingWith", "StartsWith"),
    ENDING_WITH("IsEndingWith", "EndingWith", "EndsWith"),
    NOT_CONTAINING("IsNotContaining", "NotContaining", "NotContains"),
    CONTAINING("IsContaining", "Containing", "Contains"),
    NOT_IN("IsNotIn", "NotIn"), IN("IsIn", "In"),
    NEAR("IsNear", "Near"), WITHIN("IsWithin", "Within"),
    REGEX("MatchesRegex", "Matches", "Regex"),
    EXISTS(0, "Exists"), TRUE(0, "IsTrue", "True"), FALSE(0, "IsFalse", "False"),
    NEGATING_SIMPLE_PROPERTY("IsNot", "Not"), SIMPLE_PROPERTY("Is", "Equals");

    // Need to list them again explicitly as the order is important
    // (esp. for IS_NULL, IS_NOT_NULL)
    private static final List<Part.Type> ALL = Arrays.asList(IS_NOT_NULL, IS_NULL, BETWEEN, LESS_THAN,
            LESS_THAN_EQUAL, GREATER_THAN, GREATER_THAN_EQUAL, BEFORE, AFTER, NOT_LIKE, LIKE, STARTING_WITH,
            ENDING_WITH, NOT_CONTAINING, CONTAINING, NOT_IN, IN, NEAR, WITHIN, REGEX, EXISTS, TRUE, FALSE,
            NEGATING_SIMPLE_PROPERTY, SIMPLE_PROPERTY);

    ...
}
Part is internal to Spring Data. It is not intended to be used by client code, so unless you implement your own Spring Data module you shouldn't use it at all, nor anything inside it.
A Part is basically an element of an AST that will typically result in an element of a where clause, or its equivalent depending on the store in use.
E.g. if you have a method findByNameAndDobBetween(String, Date, Date), parsing the method name will result in two parts: one for the name condition and one for the DOB between condition.
The Type enum lists all the different types of conditions that are possible.
The constructor arguments of each constant are the number of method arguments required (where it differs from the default of one) and one or more Strings that identify that condition type inside a method name.
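To make that concrete, here is an illustrative sketch of how a derived query method maps onto Parts (PersonRepository and Person are made-up examples, not Spring Data API):

import java.util.Date;
import java.util.List;

import org.springframework.data.repository.Repository;

// Hypothetical repository interface; Person is an invented entity.
public interface PersonRepository extends Repository<Person, Long> {

    // Parsed into two Parts:
    //   "Name"       -> Type.SIMPLE_PROPERTY, 1 argument (name)
    //   "DobBetween" -> Type.BETWEEN,         2 arguments (from, to)
    List<Person> findByNameAndDobBetween(String name, Date from, Date to);
}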
I understand how to use multiple return values in Go. I further understand that in most cases one of the returns is an error, so ignoring returned values can be dangerous.
Is there a way to ignore a value in a struct initializer like this? The example below does not work, as Split returns two values but I am interested only in the first one. I can of course create a variable, but...
someFile := "test/filename.ext"
contrivedStruct := []struct {
    parentDir string
}{
    {parentDir: filepath.Split(someFile)},
}
It's not possible to use only one of the return values when initializing members in Go.
Using variables clearly expresses your intent.
Go sometimes feels like it could be more succinct, but the Go authors favoured readability over brevity.
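For example, assigning the results to variables and discarding the file name with the blank identifier:

parentDir, _ := filepath.Split(someFile)
contrivedStruct := []struct {
    parentDir string
}{
    {parentDir: parentDir},
}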
Alternatively, use a wrapper function. There are several 'Must' wrapper functions in the standard library, like: template.Must.
// first returns the first of any number of string arguments.
func first(args ...string) string {
    return args[0]
}
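It can consume both return values of Split directly, because Go allows a multi-valued call to supply all the arguments of a (variadic) function:

contrivedStruct := []struct {
    parentDir string
}{
    {parentDir: first(filepath.Split(someFile))},
}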
For your particular example, splitting paths, see filepath.Base or filepath.Dir.
No, there is no way to skip one of the returned values in a struct initializer.
I am using GraphQL and Java. I need to extract all the children belonging to a specific parent. I have used the approach below, but it fetches only the parent and does not fetch any children.
schema {
  query: Query
}

type LearningResource {
  id: ID
  name: String
  type: String
  children: [LearningResource]
}

type Query {
  fetchLearningResource: LearningResource
}
@Component
public class LearningResourceDataFetcher implements DataFetcher {

    @Override
    public LearningResource get(DataFetchingEnvironment dataFetchingEnvironment) {
        LearningResource lr3 = new LearningResource();
        lr3.setId("id-03");
        lr3.setName("Resource-3");
        lr3.setType("Book");

        LearningResource lr2 = new LearningResource();
        lr2.setId("id-02");
        lr2.setName("Resource-2");
        lr2.setType("Paper");

        LearningResource lr1 = new LearningResource();
        lr1.setId("id-01");
        lr1.setName("Resource-1");
        lr1.setType("Paper");

        List<LearningResource> learningResources = new ArrayList<>();
        learningResources.add(lr2);
        learningResources.add(lr3);
        lr1.setChildren(learningResources);
        return lr1;
    }
}
return RuntimeWiring.newRuntimeWiring()
    .type("Query", typeWiring -> typeWiring.dataFetcher("fetchLearningResource", learningResourceDataFetcher))
    .build();
My controller endpoint:

@RequestMapping(value = "/queryType", method = RequestMethod.POST)
public ResponseEntity query(@RequestBody String query) {
    System.out.println(query);
    ExecutionResult result = graphQL.execute(query);
    System.out.println(result.getErrors());
    System.out.println(result.getData().toString());
    return ResponseEntity.ok(result.getData());
}
My request would be like below:

{
  fetchLearningResource {
    name
  }
}
Can anybody please help me sort this out?
Because I get asked this question a lot in real life, I'll answer it in detail here so people have an easier time googling (and I have something to point at).
As noted in the comments, the selection for each level has to be explicit and there is no notion of an infinitely recursive query like get everything under a node to the bottom (or get all children of this parent recursively to the bottom).
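Concretely, with the schema from the question, each level of children has to be selected explicitly, e.g. two levels deep:

{
  fetchLearningResource {
    name
    children {
      name
      children {
        name
      }
    }
  }
}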
The reason is mostly that allowing such queries could easily put you in a dangerous situation: a user would be able to request the entire object graph from the server in one easy go! For any non-trivial data size, this would kill the server and saturate the network in no time. Additionally, what would happen once a recursive relationship is encountered?
Still, there is a semi-controlled escape-hatch you could use here. If the scope in which you need everything is limited (and it really should be), you could map the output type of a specific query as a (complex) scalar.
In your case, this would mean mapping LearningResource as a scalar. Then, fetchLearningResource would effectively be returning a JSON blob, where the blob would happen to be all the children and their children recursively. Query resolution doesn't descend deeper once a scalar field is reached, as scalars are leaf nodes, so it can't keep resolving the children level-by-level. This means you'd have to recursively fetch everything in one go, by yourself, as the GraphQL engine can't help you here. It also means sub-selections become impossible (as scalars can't have sub-selections - again, they're leaf nodes), so the client would always get all the children and all the fields from each child back. If you still need the ability to limit the selection in certain cases, you can expose 2 different queries, e.g. fetchLearningResource and fetchAllLearningResources, where the former would be mapped as it is now, and the latter would return the scalar as explained.
An object scalar implementation is provided by the graphql-java ExtendedScalars project.
The schema could then look like:
schema {
  query: Query
}

scalar Object

type Query {
  fetchLearningResource: Object
}
And you'd use the method above to produce the scalar implementation:
RuntimeWiring.newRuntimeWiring()
    .scalar(ExtendedScalars.Object) // register the scalar implementation
    .type("Query", typeWiring -> typeWiring.dataFetcher("fetchLearningResource", learningResourceDataFetcher))
    .build();
Depending on how you process the results of this query, the DataFetcher for fetchLearningResource may need to turn the resulting object into a map-of-maps (JSON-like object) before returning to the client. If you simply JSON-serialize the result anyway, you can likely skip this. Note that you're side-stepping all safety mechanisms here and must take care not to produce enormous results. By extension, if you need this in many places, you're very likely using a completely wrong technology for your problem.
I have not tested this with your code myself, so I might have skipped something important, but this should be enough to get you (or anyone googling) onto the right track (if you're sure this is the right track).
UPDATE: I've seen someone implement a custom Instrumentation that rewrites the query immediately after it's parsed, and adds all fields to the selection set if no field had already been selected, recursively. This effectively allows them to select everything implicitly.
In graphql-java v11 and prior, you could mutate the parsed query (represented by the Document class). As of v12 this is no longer possible, but instrumentations in turn gain the ability to replace the Document explicitly via the new instrumentDocument method.
Of course, this only makes sense if your schema is such that it cannot be exploited, or if you fully control the client so there's no danger. You could also do it selectively for some types only, but that would be extremely confusing to use.