graphql-java: DataFetchingEnvironment coming in as null on query resolver

I'm using GraphQL with Java on the server side and GraphQL/Apollo with JavaScript on the client side. Generally I am very happy with the setup, but I am currently stuck on an issue with the DataFetchingEnvironment.
I have two GraphQL queries which are being called identically on the client side. On the server side, the resolver methods take the DataFetchingEnvironment as a parameter in order to get the context and retrieve a value from a cookie being passed in.
One query executes flawlessly and the cookie value is read. On the other query, the DataFetchingEnvironment is coming in as null.
I'm puzzled as to why this is happening. Are there any reasons or conditions under which a graphql-java resolver method is not supposed to receive a DataFetchingEnvironment? Any ideas as to why this may be?
Here are my pom dependencies, to show the version numbers:
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>3.10.0</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>4.3.0</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-servlet</artifactId>
    <version>4.6.0</version>
</dependency>
<dependency>
    <groupId>com.zhokhov.graphql</groupId>
    <artifactId>graphql-datetime-spring-boot-starter</artifactId>
    <version>1.1.0</version>
</dependency>
Here is a skeletal version of the offending resolver method:
public List<Something> getSomething( String somethingId, DataFetchingEnvironment dataFetchingEnvironment ) {
    log( dataFetchingEnvironment ); // result is null
}
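For reference, here is a sketch of the resolver shape that graphql-java-tools expects, as I understand it: the DataFetchingEnvironment is injected by the framework when it is declared as the last method parameter. The Something, somethingId and log names are taken from the question; the guard and placeholder return are illustrative only.
import graphql.schema.DataFetchingEnvironment;
import java.util.Collections;
import java.util.List;

public List<Something> getSomething(String somethingId, DataFetchingEnvironment dataFetchingEnvironment) {
    // graphql-java-tools is expected to inject the environment when it is declared
    // as the final parameter of the resolver method (worth verifying for your versions).
    if (dataFetchingEnvironment == null) {
        throw new IllegalStateException("DataFetchingEnvironment was not injected");
    }
    // The context object configured on the servlet / execution input is available here.
    Object context = dataFetchingEnvironment.getContext();
    log(dataFetchingEnvironment);
    return Collections.emptyList(); // placeholder; build the real result here
}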

Related

Can't serialize `PaginatedScanList` - potentially after adding awssdk libraries

We have two versions of a Spring Boot server, both with a generic CrudController, simply relaying GET, PUT, POST and DELETE requests to the relevant DynamoDB table in the data layer.
The get handler is super-simple:
@GetMapping
public ResponseEntity<Iterable<U>> get() {
    return ResponseEntity.ok(service.get());
}
with the service/CrudRepository implementation being, e.g.:
public Iterable<U> get() {
    return repository.findAll();
}
In the older version of the server, which doesn't have the additional awssdk libraries (namely s3, sts, cognitoidentity and cognitoidentityprovider), the response gets serialized perfectly fine, as an array of the response objects.
In the new version, however, it gets serialized as an empty object - {}
I'm guessing this is down to losing the ability to serialize PaginatedScanList - as the return value is exactly the same in both server versions.
It's entirely possible that the libraries are a red herring but comparing the two versions, there aren't any other relevant changes on these code paths.
Any idea what could be causing this and how to fix it?
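One way to test whether the lazy PaginatedScanList is the culprit (a sketch under that assumption, not a confirmed fix): copy the result into a plain ArrayList before handing it to the ResponseEntity, so Jackson serializes a concrete list instead of the lazy wrapper. U, service and the mapping mirror the handler above.
import java.util.ArrayList;
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;

@GetMapping
public ResponseEntity<List<U>> get() {
    // Copying forces the PaginatedScanList to load its pages eagerly,
    // so the serializer only ever sees a plain ArrayList.
    List<U> items = new ArrayList<>();
    service.get().forEach(items::add);
    return ResponseEntity.ok(items);
}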

Methods in OPC-UA Apache-Camel Milo Client

A few months ago I started working on a project that requires integrating OPC UA to communicate with an automatic machine. Working with Spring Boot, I looked for a library that was well integrated with this framework, and in several posts and theses I found Eclipse Milo, in the version integrated with Apache Camel. Not knowing either Camel or Milo, I had to study both.
Camel has extensive documentation, while the Milo integration is documented only as far as the parameterization and configuration of the nodes used for reading and writing. That seems like it should be enough, but in practice, since there are no specific examples, I repeatedly had to search through posts to understand where I was wrong, and it took a lot of time.
Now, for example, I have reads and writes running correctly, but method calls behave strangely: every time I call the test function, the value returned to me is the parameter I passed in. Even with TRACE enabled on Camel and Milo, I can see that the function is called correctly and that the OutputArguments contain the result I expect, yet Camel keeps returning the InputArguments. It's certainly my mistake, but I can't find anything to help me understand where I'm going wrong. Is the choice I made the right one? I don't know what else to try.
Here is the simplified test code I'm using:
Variant[] params = new Variant[1];
params[0] = new Variant(13);
String endpointUri = "milo-client:opc.tcp://milo.digitalpetri.com:62541/milo?node=RAW(ns=2;s=Methods)&method=RAW(ns=2;s=Methods/sqrt(x))";
return producerTemplate.requestBodyAndHeader(endpointUri, params, "await", true, Variant.class);
The returned object is the same one I passed in, even though the log shows that the method call is executed correctly:
2021-mag-20 11:14:07.613 TRACE [milo-netty-event-loop-1] o.e.m.o.s.c.t.t.OpcTcpTransport - Write succeeded for request=PublishRequest, requestHandle=16
2021-mag-20 11:14:07.598 DEBUG [milo-shared-thread-pool-1] o.a.c.c.m.c.i.SubscriptionManager - Call to node=ExpandedNodeId{ns=2, id=Methods, serverIndex=0}, method=ExpandedNodeId{ns=2, id=Methods/sqrt(x), serverIndex=0} = [Variant{value=13.0}]-> CallMethodResult{StatusCode=StatusCode{name=Good, value=0x00000000, quality=good}, InputArgumentResults=[StatusCode{name=Good, value=0x00000000, quality=good}], InputArgumentDiagnosticInfos=[], OutputArguments=[Variant{value=3.605551275463989}]}
These are my dependencies:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>3.9.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-milo-starter</artifactId>
    <version>3.9.0</version>
</dependency>
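Not an answer, but a debugging sketch that may narrow things down, assuming the same producerTemplate, endpointUri and params as above: send the request via request() and dump the raw exchange, instead of asking Camel to convert the reply straight to Variant, to see what body and headers the Milo producer actually returns.
import org.apache.camel.Exchange;

// Debugging sketch: inspect the raw reply from the milo-client endpoint.
Exchange reply = producerTemplate.request(endpointUri, exchange -> {
    exchange.getMessage().setBody(params);
    exchange.getMessage().setHeader("await", true);
});
Object body = reply.getMessage().getBody();
System.out.println("Reply body type = " + (body == null ? "null" : body.getClass()) + ", value = " + body);
reply.getMessage().getHeaders().forEach((k, v) -> System.out.println("Header " + k + " = " + v));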

How to create APIs in Spring that use up-to-date environment values

I want to create an API that takes part of its input from the environment (URLs etc.). Basically a jar file.
I also want the values to be auto-updated when application.properties changes. When there is a change, this is called:
org.springframework.cloud.endpoint.RefreshEndpoint#refresh
However, I consider it bad practice to have magic environment variable keys like 'server.x.url' in the API contract between the client application and the jar. (Problem A)
That's why I'd like to use an API like this. But then there's the problem of stale values.
public class MyC {
    private final TheApi theApi;

    MyC(Environment env) {
        theApi = new TheApi();
        theApi.setUrl(env.getProperty("server.x.url"));
    }

    void doStuff() {
        theApi.doStuff(); // fails, as theApi holds an obsolete value of server.x.url (Problem B)
    }
}
So either I have an ugly API contract or I get obsolete values in the API calls.
I'm sure there must be a Spring way of solving this, but I can't get my head around it just now.
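A sketch of one common Spring Cloud approach, under the assumption that spring-cloud-context (the module providing the RefreshEndpoint above) is on the classpath: keep the property lookup inside a @RefreshScope bean, so the wrapper is rebuilt with the fresh server.x.url after a refresh, and the client code depends only on the wrapper, never on the property key. TheApi, setUrl and doStuff are taken from the question; TheApiFacade is a hypothetical name.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@Component
@RefreshScope
public class TheApiFacade {

    private final TheApi theApi;

    // The bean is re-created on refresh, so the injected value is always current.
    public TheApiFacade(@Value("${server.x.url}") String url) {
        this.theApi = new TheApi();
        this.theApi.setUrl(url);
    }

    public void doStuff() {
        theApi.doStuff();
    }
}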

Aws integration spring: Extend Visibility Timeout

Is it possible to extend the visibility timeout of a message that is in flight?
See:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/AboutVT.html.
Section: Changing a Message's Visibility Timeout.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/AmazonSQSClient.html#changeMessageVisibility-com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest-
In summary, I want to be able to extend the initially set visibility timeout for a given message that is in flight.
For example, if 15 seconds have passed, I then want to extend the timeout by another 20 seconds. There is a better example in the Java docs linked above.
From my understanding of the links above, you can do this on the Amazon side.
Below are my current settings:
SqsMessageDrivenChannelAdapter adapter = new SqsMessageDrivenChannelAdapter(queue);
adapter.setMessageDeletionPolicy(SqsMessageDeletionPolicy.ON_SUCCESS);
adapter.setMaxNumberOfMessages(1);
adapter.setSendTimeout(2000);
adapter.setVisibilityTimeout(200);
adapter.setWaitTimeOut(20);
Is it possible to extend this timeout?
Spring Cloud AWS supports this starting with version 2.0. Injecting a Visibility parameter into your SQS listener method does the trick:
@SqsListener(value = "my-sqs-queue")
void onMessageReceived(@Payload String payload, Visibility visibility) {
    ...
    var extension = visibility.extend(20);
    ...
}
Note that extend works asynchronously and returns a Future. So if, further down the processing, you want to be sure that the visibility of the message has really been extended on the AWS side, either block on the Future using extension.get() or query it with extension.isDone().
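A minimal sketch of the blocking variant described above (queue name and exception handling are illustrative):
import java.util.concurrent.Future;

@SqsListener(value = "my-sqs-queue")
void onMessageReceived(@Payload String payload, Visibility visibility) throws Exception {
    // Request 20 more seconds and wait until AWS has confirmed the change
    // before continuing with the long-running work.
    Future<?> extension = visibility.extend(20);
    extension.get(); // blocks; throws if the visibility change failed
    // ... long-running processing ...
}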
OK. Looks like I see your point.
We can change the visibility for a particular message using the API:
AmazonSQS.changeMessageVisibility(String queueUrl, String receiptHandle, Integer visibilityTimeout)
For this purpose, in the downstream flow you have to get access to (inject) the AmazonSQS bean and extract the special headers from the Message:
@Autowired
AmazonSQS amazonSqs;

@Autowired
ResourceIdResolver resourceIdResolver;

...

MessageHeaders headers = message.getHeaders();
DestinationResolver<String> destinationResolver = new DynamicQueueUrlDestinationResolver(this.amazonSqs, this.resourceIdResolver);
String queueUrl = destinationResolver.resolveDestination(headers.get(AwsHeaders.QUEUE, String.class));
String receiptHandle = headers.get(AwsHeaders.RECEIPT_HANDLE, String.class);
amazonSqs.changeMessageVisibility(queueUrl, receiptHandle, YOUR_DESIRED_VISIBILITY_TIMEOUT);
But eh, I agree that we should provide something on the matter as an out-of-the-box feature. That may even be something similar to QueueMessageAcknowledgment as a new header, or just one more changeMessageVisibility method alongside this one.
Please raise a GH issue for the Spring Cloud AWS project on the matter, with a link to this SO topic.

which jar contains org.apache.spark.sql.api.java.JavaSQLContext

The following dependency is in the pom:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.3.0</version>
</dependency>
I expect the jar to contain the following class:
org.apache.spark.sql.api.java.JavaSQLContext
but while it contains the package org.apache.spark.sql.api.java, all that package appears to contain are interfaces named UDF1 through UDF22.
Which is the correct dependency to get JavaSQLContext?
Thanks.
The JavaSQLContext class has been removed from version 1.3.0 onwards. You should use the org.apache.spark.sql.SQLContext class instead. The documentation states the following:
Prior to Spark 1.3 there were separate Java compatible classes (JavaSQLContext and JavaSchemaRDD) that mirrored the Scala API. In Spark 1.3 the Java API and Scala API have been unified. Users of either language should use SQLContext and DataFrame. In general these classes try to use types that are usable from both languages (i.e. Array instead of language specific collections). In some cases where no common type exists (e.g., for passing in closures or Maps) function overloading is used instead.
Additionally the Java specific types API has been removed. Users of both Scala and Java should use the classes present in org.apache.spark.sql.types to describe schema programmatically.
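For reference, a minimal sketch of the replacement, assuming a local Java application (the app name and master are placeholders):
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SQLContext;

SparkConf conf = new SparkConf().setAppName("example").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
// In Spark 1.3+ the unified SQLContext replaces the removed JavaSQLContext.
SQLContext sqlContext = new SQLContext(sc);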
As an aside, if you want to find out which jars contain a specific class, you can use the Advanced Search on Maven Central and search "By Classname". So here is the search for JavaSQLContext:
http://search.maven.org/#search|ga|1|fc%3A%22org.apache.spark.sql.api.java.JavaSQLContext%22
From a cursory search, it appears that the class org.apache.spark.sql.api.java.JavaSQLContext only appears in the 1.2 versions and earlier of the spark-sql JAR file. It is likely that the code with which you are working is also using this older dependency. You have two choices at this point: you can either upgrade your code usage, or you can downgrade the spark-sql JAR. You probably want to go with the former option.
If you insist on keeping your code the same, then including the following dependency in your POM should fix the problem:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.2.2</version>
</dependency>
If you want to upgrade your code, see the answer given by @DB5.
I had the same problem, and it was because I was looking at the wrong version of the documentation.
My understanding from the latest documentation - https://spark.apache.org/docs/latest/sql-programming-guide.html#loading-data-programmatically - is to use something like this (copied from the doc):
SQLContext sqlContext = null; // Determine;
DataFrame schemaPeople = null; // The DataFrame from the previous example.
// DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write().parquet("people.parquet");
// Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a parquet file is also a DataFrame.
DataFrame parquetFile = sqlContext.read().parquet("people.parquet");
// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile");
DataFrame teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
    public String call(Row row) {
        return "Name: " + row.getString(0);
    }
}).collect();
