Methods in OPC-UA Apache Camel Milo client - Spring Boot

A few months ago I started working on a project that requires OPC UA integration to communicate with an automatic machine. Since I work with Spring Boot, I looked for a library that integrates well with that framework, and in several posts and theses I found Eclipse Milo, in the version integrated with Apache Camel. Not knowing either Camel or Milo, I had to study at least the basics of both.
Camel has extensive documentation, but the Milo integration docs are limited to the parameterization and configuration of the nodes used for reading and writing. That sounds like more than enough, but in practice, since there are no specific examples, I repeatedly had to dig through posts to understand where I was going wrong, which clearly took a lot of time.
For example, I can now run reads and writes correctly, but method calls behave strangely: every time I call the test method, the value returned to me is the parameter I passed as input. Even with TRACE enabled on Camel and Milo I can see that the method is called correctly and that OutputArguments contains the result I expect, yet Camel keeps returning the InputArguments. It's certainly my mistake, but I can't find anything that helps me understand where I'm going wrong. Is the approach I chose the right one? I don't know what else to try.
Here is the simplified test code I'm running:
Variant[] params = new Variant[1];
params[0] = new Variant(13);
String endpointUri = "milo-client:opc.tcp://milo.digitalpetri.com:62541/milo?node=RAW(ns=2;s=Methods)&method=RAW(ns=2;s=Methods/sqrt(x))";
return producerTemplate.requestBodyAndHeader(endpointUri, params, "await", true, Variant.class);
The object returned is the same one I passed in, even though the log shows the method call being executed correctly:
2021-mag-20 11:14:07.613 TRACE [milo-netty-event-loop-1] o.e.m.o.s.c.t.t.OpcTcpTransport - Write succeeded for request=PublishRequest, requestHandle=16
2021-mag-20 11:14:07.598 DEBUG [milo-shared-thread-pool-1] o.a.c.c.m.c.i.SubscriptionManager - Call to node=ExpandedNodeId{ns=2, id=Methods, serverIndex=0}, method=ExpandedNodeId{ns=2, id=Methods/sqrt(x), serverIndex=0} = [Variant{value=13.0}]-> CallMethodResult{StatusCode=StatusCode{name=Good, value=0x00000000, quality=good}, InputArgumentResults=[StatusCode{name=Good, value=0x00000000, quality=good}], InputArgumentDiagnosticInfos=[], OutputArguments=[Variant{value=3.605551275463989}]}
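One way to rule out the Camel layer would be to call the same method with the Milo client SDK directly. The sketch below is only an outline (the exact create/connect API depends on the Milo version that camel-milo 3.9.0 pulls in), but the node ids and input value match the Camel endpoint above:
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.NodeId;
import org.eclipse.milo.opcua.stack.core.types.builtin.Variant;
import org.eclipse.milo.opcua.stack.core.types.structured.CallMethodRequest;
import org.eclipse.milo.opcua.stack.core.types.structured.CallMethodResult;

// Connect to the public demo server without going through Camel
OpcUaClient client = OpcUaClient.create("opc.tcp://milo.digitalpetri.com:62541/milo");
client.connect().get();

// Same object and method node ids as in the Camel endpoint URI
NodeId objectId = NodeId.parse("ns=2;s=Methods");
NodeId methodId = NodeId.parse("ns=2;s=Methods/sqrt(x)");

CallMethodRequest request =
        new CallMethodRequest(objectId, methodId, new Variant[] { new Variant(13.0) });
CallMethodResult result = client.call(request).get();

// The square root should show up here, not the echoed input
Variant[] outputs = result.getOutputArguments();
If the output arguments come back correctly there too, the problem would be in how the Camel exchange body is populated rather than in the server or in Milo itself.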
These are my dependencies:
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-spring-boot-starter</artifactId>
    <version>3.9.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-milo-starter</artifactId>
    <version>3.9.0</version>
</dependency>

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I met is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. Once that was fixed I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, and the required class is actually part of the package under which the test is executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the clientRegion. Is this a correct approach?
Region<?, ?> cache = instance.getRegion(name);
if (cache == null) {
    // Region is unknown to the client: ask the servers to create it
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
// In both cases create the client-side caching proxy region
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I met is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not a server" error. Once that was fixed I received the serialization filter error.
Once the Function is registered on server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), in which case you'll also avoid the Serialization filter error. As an example, FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
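A rough sketch of that call-by-ID variant (assuming the function is already registered on the servers and that CreateRegionFunction exposes its registration ID as the ID constant, as in the Geode docs example):
import java.util.ArrayList;
import org.apache.geode.cache.execute.Execution;
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.execute.ResultCollector;

// No CreateRegionFunction instance is created on the client side,
// so nothing has to pass through the serialization filter.
Execution execution = FunctionService.onServers(instance);
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
ResultCollector<?, ?> collector = execution.setArguments(argList).execute(CreateRegionFunction.ID);
collector.getResult(); // wait for the servers to finish creating the region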
The problem is that the code is executed on the dunit MemberVM, and the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can that be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
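As an illustration only (the rule and method names below come from Geode's test framework and may differ between Geode versions), starting the server VM with that property could look roughly like this:
import java.util.Properties;
import org.junit.Rule;
import org.apache.geode.test.dunit.rules.ClusterStartupRule;
import org.apache.geode.test.dunit.rules.MemberVM;

@Rule
public ClusterStartupRule cluster = new ClusterStartupRule();

// inside the test setup
MemberVM locator = cluster.startLocatorVM(0);

Properties props = new Properties();
// let the test's own classes through the serialization filter
props.setProperty("serializable-object-filter", "org.restcomm.cache.geode.**");
MemberVM server = cluster.startServerVM(1, props, locator.getPort());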
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the clientRegion. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it, otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a Region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
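For example, creating the region up front is a one-liner in gfsh (the region name and type here are just placeholders):
gfsh> create region --name=exampleRegion --type=PARTITION
The client then only needs to create its local proxy region for it.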

Spring Boot logging setup of FileAppender - where does it use the max-size property?

Overflowers
Please pardon my question if it has already been answered or if the answer is obvious.
I have a very basic Spring Boot (1.5.4) logging setup in application.properties:
logging.level.org=WARN
logging.level.com=WARN
logging.level.springfox=OFF
logging.level.org.hibernate.hql.internal.ast=ERROR
logging.level.com.MyCompany.kph=DEBUG
logging.file=/var/MyProduct/logs/MyProduct.log
logging.file.max-size=2GB
logging.file.max-history=100
The 2GB is not being honoured. No value I put in there is being honoured. Even xxxxx as a value does not cause a blow-up.
logging.file does - and I can see that being used inside DefaultLogbackConfiguration.
From following the source I can see the method DefaultLogbackConfiguration#setMaxFileSize(a, b) being called, but that method is fixed at 10MB, which aligns with the behaviour I'm seeing.
Am I doing something wrong and triggering the default behaviour? Or does the default behaviour get loaded first and then the specific settings go on top? (If so, I can't find it and it's not working for me.)
Can someone point to me where max-size gets consumed and used?
Thanks
Rich
Christ, just by writing this post and reading the docs for my Spring Boot version, I see that max-size is not used at all there. That is why it's not working.
https://docs.spring.io/spring-boot/docs/1.5.19.BUILD-SNAPSHOT/reference/htmlsingle/#boot-features-logging
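Since the property simply isn't consumed by this version, one workaround (a sketch only, assuming the default Logback setup and not verified against 1.5.4 specifically) is to supply a logback-spring.xml with an explicit rolling policy instead of relying on the logging.file.* properties:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- pulls in Spring Boot's default patterns such as FILE_LOG_PATTERN -->
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/MyProduct/logs/MyProduct.log</file>
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>/var/MyProduct/logs/MyProduct.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>2GB</maxFileSize>
            <maxHistory>100</maxHistory>
        </rollingPolicy>
    </appender>
    <root level="WARN">
        <appender-ref ref="FILE"/>
    </root>
</configuration>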

graphql-java: DataFetchingEnvironment coming in as null on query resolver

I'm using graphql with java on the server side and graphql/apollo with javascript on the client side. Generally I am very happy with the setup, but most recently I am stuck on an issue with the DataFetchingEnvironment.
I have two graphql queries which are being called identically on the client side. On the server side, the resolver methods take in the DataFetchingEnvironment as a parameter in order to get the context and retrieve a value from a cookie being passed in.
The one query executes flawlessly and the cookie value is read. On the other query, the DataFetchingEnvironment is coming in as null.
I'm puzzled as to why this is happening. Are there any reasons or conditions under which a graphql-java resolver method is not supposed to receive a DataFetchingEnvironment? Any ideas as to why this may be?
Here are my pom dependencies to show version numbers.
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>3.10.0</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>4.3.0</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-servlet</artifactId>
    <version>4.6.0</version>
</dependency>
<dependency>
    <groupId>com.zhokhov.graphql</groupId>
    <artifactId>graphql-datetime-spring-boot-starter</artifactId>
    <version>1.1.0</version>
</dependency>
Here is a skeletal version of the offending resolver method:
public List<Something> getSomething( String somethingId, DataFetchingEnvironment dataFetchingEnvironment ) {
log( dataFetchingEnvironment ); // result is null
}
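For now, a small guard at least confirms whether the environment was injected at all. This is just a diagnostic sketch; it assumes graphql-java-tools injects the environment when it is declared as the last method parameter, which is worth double-checking for these versions:
public List<Something> getSomething(String somethingId, DataFetchingEnvironment dataFetchingEnvironment) {
    if (dataFetchingEnvironment == null) {
        // injection never happened for this resolver method
        throw new IllegalStateException("DataFetchingEnvironment was not injected for getSomething");
    }
    // context object set up by graphql-java-servlet; the cookie is read from here
    Object context = dataFetchingEnvironment.getContext();
    log(context);
    return Collections.emptyList(); // placeholder
}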

which jar contains org.apache.spark.sql.api.java.JavaSQLContext

The following dependency is in the pom:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.3.0</version>
</dependency>
I expect the jar to contain the following class:
org.apache.spark.sql.api.java.JavaSQLContext
but while it contains the package org.apache.spark.sql.api.java, all that package appears to contain are interfaces named UDF1 through UDF22.
Which is the correct dependency to get JavaSQLContext?
Thanks.
The JavaSQLContext class has been removed from version 1.3.0 onwards. You should use org.apache.spark.sql.SQLContext class instead. The documentation states the following:
Prior to Spark 1.3 there were separate Java compatible classes (JavaSQLContext and JavaSchemaRDD) that mirrored the Scala API. In Spark 1.3 the Java API and Scala API have been unified. Users of either language should use SQLContext and DataFrame. In general these classes try to use types that are usable from both languages (i.e. Array instead of language specific collections). In some cases where no common type exists (e.g., for passing in closures or Maps) function overloading is used instead.
Additionally the Java specific types API has been removed. Users of both Scala and Java should use the classes present in org.apache.spark.sql.types to describe schema programmatically.
As an aside, if you want to find out which jars contain a specific class, you can use the Advanced Search of Maven Central and search "By Classname". Here is the search for JavaSQLContext:
http://search.maven.org/#search|ga|1|fc%3A%22org.apache.spark.sql.api.java.JavaSQLContext%22
From a cursory search, it appears that the class org.apache.spark.sql.api.java.JavaSQLContext only appears in the 1.2 versions and earlier of the spark-sql JAR file. It is likely that the code with which you are working is also using this older dependency. You have two choices at this point: you can either upgrade your code usage, or you can downgrade the spark-sql JAR. You probably want to go with the former option.
If you insist on keeping your code the same, then including the following dependency in your POM should fix the problem:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.2.2</version>
</dependency>
If you want to upgrade your code, see the answer given by DB5.
I had the same problem, and it was because I was looking at the wrong version of the documentation.
My understanding from the latest documentation - https://spark.apache.org/docs/latest/sql-programming-guide.html#loading-data-programmatically - is to use something like this (copied from the doc):
SQLContext sqlContext = null; // Determine;
DataFrame schemaPeople = null; // The DataFrame from the previous example.
// DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write().parquet("people.parquet");
// Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a parquet file is also a DataFrame.
DataFrame parquetFile = sqlContext.read().parquet("people.parquet");
// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile");
DataFrame teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
    public String call(Row row) {
        return "Name: " + row.getString(0);
    }
}).collect();

SOAP faultcode list

I'm developing a Magento script to import products from an XML file using the API and a SOAP WSDL connection.
I would like to know the faultcode list. I've been searching for it for several days without luck; does anyone know whether such a list exists and where I can find it?
I need to handle the error codes so that the script doesn't stop, but instead skips the errors and continues importing what is correct.
At the moment I have only discovered that faultcode 101 is "Product not exists.".
Here's how to grab the list for your version of Magento. (I can't imagine this would be radically different between versions, but one never knows what's been done to a system)
Find all your api.xml files.
$ find app/code/core -name 'api.xml'
app/code/core/Mage/Api/etc/api.xml
app/code/core/Mage/Catalog/etc/api.xml
app/code/core/Mage/CatalogInventory/etc/api.xml
app/code/core/Mage/Checkout/etc/api.xml
app/code/core/Mage/Core/etc/api.xml
app/code/core/Mage/Customer/etc/api.xml
app/code/core/Mage/Directory/etc/api.xml
app/code/core/Mage/Downloadable/etc/api.xml
app/code/core/Mage/GiftMessage/etc/api.xml
app/code/core/Mage/Sales/etc/api.xml
app/code/core/Mage/Tag/etc/api.xml
Each file will have one or many <faults/> nodes which will contain the code and message.
<!-- File: app/code/core/Mage/CatalogInventory/etc/api.xml -->
<faults module="cataloginventory">
    <not_exists>
        <code>101</code>
        <message>Product not exists.</message>
    </not_exists>
    <not_updated>
        <code>102</code>
        <message>Product inventory not updated. Details in error message.</message>
    </not_updated>
</faults>
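If you just want a flat dump of every code/message pair instead of opening each file, something like this works, since the <message> element directly follows the <code> element in these files:
$ grep -r -A 1 --include='api.xml' '<code>' app/code/core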
It's probably worth mentioning that the numeric codes aren't unique. Each "soap object" (unsure what to call these) defines its own.
<!-- File: app/code/core/Mage/Sales/etc/api.xml -->
<faults module="sales">
    <not_exists>
        <code>100</code>
        <message>Requested order not exists.</message>
    </not_exists>
    <filters_invalid>
        <code>101</code>
        <message>Invalid filters given. Details in error message.</message>
    </filters_invalid>
</faults>
Good luck!
