protobuf version compatibility

I am using a cloud service which is built on protobuf 2.0. This cloud service cannot be changed.
Now we have a client to connect to this cloud service, which is built on .NET Core 2.0.
From my testing, .NET Core only works with the protobuf 3.0 syntax.
The 3.0 syntax is a little different from 2.0. If I deploy the client with protobuf 3.0 on C# / .NET Core 2.0, can it consume the service which is built on protobuf 2.0?

The actual binary serialization format hasn't changed at all in this time, so there are no fundamental blockers.
The biggest feature difference between proto2 and proto3 is the treatment of default / optional values. Proto3 has no concept of "the default value is 4" (defaults are always zero/nil), and has no concept of explicitly specifying a value that happens to also be the default value (non-zero values are always sent, zeros are never sent). So if your proto2 schema makes use of non-zero defaults, it can be awkward to transition.
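To make that concrete, here is the same hypothetical field in both syntaxes (shown as two separate .proto files):

```proto
// proto2: custom defaults are part of the schema
syntax = "proto2";
message Settings {
  optional int32 retry_count = 1 [default = 4];  // absent on the wire decodes as 4
}

// proto3 (separate file): no custom defaults; an absent field always
// decodes as zero/empty, and zero values are never written to the wire
syntax = "proto3";
message Settings {
  int32 retry_count = 1;  // absent on the wire decodes as 0
}
```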
"From my testing, .NET Core only works with the protobuf 3.0 syntax."
That statement makes me think you're not using protobuf-net, but are in fact using Google's C# implementation - Jon's original port was proto2 only, and the version migrated to the Google codebase is proto3 only. However, protobuf-net (a separate implementation) has no such limitation, and supports both proto2 and proto3 on all platforms including .NET Core. It does have a different API, however. protobuf-net can be found on NuGet, with a separate .proto processing tool (it also provides access to all the "protoc" outputs if you want to compare to the Google version).
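For reference, protobuf-net's API is attribute-driven rather than generated from .proto files. A minimal sketch (the SearchRequest type is hypothetical):

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class SearchRequest
{
    [ProtoMember(1)]
    public string Query { get; set; }

    // proto2-style non-zero default; protobuf-net honours [DefaultValue]
    // when deciding whether a field needs to be written
    [ProtoMember(2), System.ComponentModel.DefaultValue(10)]
    public int PageSize { get; set; } = 10;
}

public static class Demo
{
    public static byte[] Serialize(SearchRequest request)
    {
        using (var ms = new MemoryStream())
        {
            // Same wire format as the Google implementations
            Serializer.Serialize(ms, request);
            return ms.ToArray();
        }
    }
}
```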

I am looking for FHIR Resource validation against FHIR StructureDefinition

I am looking for FHIR resource validation against a FHIR StructureDefinition using .NET Core.
I found there is a library, org.hl7.fhir.validator.jar, but I couldn't find a good way to do the validation from C# code. My requirements are simple:
Cardinality validation
Values
Bindings
Profiles
The idea I have in mind is to pass the FHIR resource as a parameter, load the structure definition file, check the properties, and return the error messages as an OperationOutcome. Can someone advise me on the best way to do this in C#, especially on .NET Core?
You can use the validation functionality of the .NET FHIR API (https://www.nuget.org/packages/Hl7.Fhir.STU3/); see https://github.com/FirelyTeam/Furore.Fhir.ValidationDemo for a demo application that uses this library.
Although the demo is a WinForms project, the .NET FHIR API is fully .NET Core compatible.
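A rough sketch of the validation call with that library (based on the STU3-era API; exact type names may differ between SDK versions, and the "profiles" directory is a placeholder):

```csharp
using Hl7.Fhir.Model;
using Hl7.Fhir.Specification.Source;
using Hl7.Fhir.Validation;

public static class FhirValidationDemo
{
    public static OperationOutcome ValidateResource(Resource resource)
    {
        // Resolve StructureDefinitions from the core spec plus your own profiles
        var resolver = new CachedResolver(new MultiResolver(
            ZipSource.CreateValidationSource(),   // core definitions (spec.zip)
            new DirectorySource(@"profiles")));   // your custom profiles

        var validator = new Validator(new ValidationSettings
        {
            ResourceResolver = resolver,
            GenerateSnapshot = true
        });

        // Cardinalities, value domains, bindings and profiles are checked;
        // the result comes back as an OperationOutcome, as the question asks
        return validator.Validate(resource);
    }
}
```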

VDM OData version compatibility

A conceptual question about VDM usage. Assume my OData service evolves in an S/4HANA cloud system and I am consuming it in a microservice. Since the VDM needs the EDMX file to generate entity classes, assume my OData service gets a new field, or eliminates a field that I do not use. If I do not change my EDMX and do not generate new classes, will my call still work? And the second question: if one of the fields I use changes and I need to ensure zero downtime, how do I handle two versions of the generated classes at the same time?
The generated OData VDM ultimately performs an OData call based on the fields that are used. So if you do not use the fields that are removed, this should not be a problem. Note, however, that such removals would have to be done in a new version of the SAP S/4HANA service.
Since breaking changes affect all consumers independent of whether the Java or JavaScript VDM of the SAP S/4HANA Cloud SDK is used, developers of services in SAP S/4HANA have to follow a certain API guideline that includes specific deprecation rules.
So, if a breaking change is really required, according to the S/4HANA API guideline, a new version of the service has to be published and this will be also available with a different URL. This then gives you the possibility to migrate from an old to a new version without interruptions.
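As an illustration of that migration path, you can keep the classes generated from both EDMX versions side by side and route calls with a toggle until all traffic has moved over. The names below are hypothetical stand-ins, not actual SDK identifiers:

```java
public class PartnerLookup {

    /** Common shape of the call; the generated VDM classes differ per version. */
    interface PartnerService {
        String partnerName(String id);
    }

    // Stand-ins for classes generated from the v1 and v2 EDMX files.
    // In a real project these would live in separate packages, e.g. ...vdm.v1 and ...vdm.v2.
    static final PartnerService V1 = id -> "v1 lookup of " + id;
    static final PartnerService V2 = id -> "v2 lookup of " + id;

    // Feature toggle: flip to V2 once the new service URL is live, keep V1
    // in place until the migration is complete, then remove it.
    static final boolean USE_V2 = Boolean.getBoolean("partners.useV2");

    public static void main(String[] args) {
        PartnerService service = USE_V2 ? V2 : V1;
        System.out.println(service.partnerName("4711"));
    }
}
```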

Most pain-free way to add XPath 3.0 to a codebase using dom4j

From what I understand, Saxon is the only library which supports XPath 3.0 in Java.
Its JAXP implementation only supports XPath 2.0.
Its XPath 3.0 implementation has to be invoked through Saxon's own API, and requires me to build the document in the first place with that API rather than with a JAXP-compliant API like DOM4J.
This is a pain because I'd been careful to abstract away everything that uses XPath behind a proxy interface taking a JAXP node and an XPath string as parameters, but this seems pointless if I have to refactor everything to use Saxon nodes from the top down.
Am I misunderstanding something? Is there a less painful way?
I'm increasingly trying to encourage users to use the s9api API in preference to JAXP for XPath processing. There are a variety of reasons: the JAXP interface only gives very half-hearted support to tree models other than DOM; it really struggles with the extended type system of XPath 2.0 and now 3.0, and in the case of Saxon, it doesn't interoperate at all well with other XML technologies and APIs.
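For comparison, a basic XPath 3.x evaluation through s9api looks roughly like this (the file name and expression are placeholders):

```java
import java.io.File;
import javax.xml.transform.stream.StreamSource;
import net.sf.saxon.s9api.*;

public class S9apiXPathDemo {
    public static void main(String[] args) throws SaxonApiException {
        Processor proc = new Processor(false); // false = open-source HE edition

        // Build the document with Saxon's own tree model
        DocumentBuilder builder = proc.newDocumentBuilder();
        XdmNode doc = builder.build(new StreamSource(new File("input.xml")));

        // Compile and run an XPath 3.x expression
        XPathCompiler xpath = proc.newXPathCompiler();
        xpath.setLanguageVersion("3.1");
        XdmValue result = xpath.evaluate("//item[@id = 'a1']/string()", doc);

        for (XdmItem item : result) {
            System.out.println(item.getStringValue());
        }
    }
}
```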
However, Saxon continues to support the JAXP XPath API, with all its limitations, both against its own tree model and against third-party tree models such as DOM4J.
One thing that we have dropped is support for the XPath services interface, whereby an application using the XPathFactory.newInstance() method will pick up Saxon if it's on the classpath. The reason for that is that you really need to know when you're writing an application whether you want an XPath 1.0 or 2.0 processor, and the JAXP mechanism gives you no way of saying which you want, or discovering which you have been given. The result was that a lot of applications were experiencing hard-to-diagnose failures when deployed with an incorrect classpath. If you want Saxon as your JAXP XPath provider you now have to ask for it explicitly.
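Asking for Saxon explicitly is a one-liner, for example:

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

public class ExplicitSaxonFactory {
    public static void main(String[] args) {
        // Name the Saxon factory directly instead of relying on
        // XPathFactory.newInstance(), which no longer discovers Saxon
        // through the services mechanism.
        XPathFactory factory = new net.sf.saxon.xpath.XPathFactoryImpl();
        XPath xpath = factory.newXPath();
        System.out.println(xpath.getClass().getName());
    }
}
```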
It would be useful if you could be more specific about what you are trying to do, and how it is failing.

Using Nashorn with Java 8 Compact Profiles

Is it possible to use Nashorn (the new JavaScript engine for Java8) together with each of the three Java 8 compact profiles?
Yes, you can use Nashorn in all compact profiles. This is explicitly stated in the Compact Profiles documentation for Java SE Embedded:
Compact1 Profile APIs
Similar to the legacy Connected Device Configuration (CDC) with the Foundation Profile, secure sockets layer (SSL), logging, and scripting language support, including Javascript. When configured with the minimal JVM, the compact1 profile APIs have a static footprint of about 12MB.
Each compact profile is a superset of the previous one, so by virtue of being usable in compact1, Nashorn is also usable in compact2 and compact3.
As further evidence, bug JDK-8027532 was filed and resolved to ensure Nashorn doesn't use any classes outside of compact1.
Note that there is no requirement for JVMs to provide any particular script engine. Thus while Nashorn is compatible with all compact profiles, a particular JVM may not make it available.
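Given that caveat, a defensive lookup is worth the few extra lines (a minimal sketch):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornCheck {
    public static void main(String[] args) throws ScriptException {
        // Look the engine up by name; getEngineByName returns null if absent
        ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");
        if (nashorn == null) {
            // This runtime ships without a JavaScript engine
            System.err.println("Nashorn is not available on this JVM");
            return;
        }
        Object result = nashorn.eval("21 + 21");
        System.out.println(result); // 42
    }
}
```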

Integrating Java and non-Java systems over JMS

We have recently been thinking about integrating our J2EE system with other applications written in Python/Perl. Our application integrates very nicely with other Java systems over JMS. Is it possible for non-Java systems to receive Serializable messages and make some modifications to them (at some level, every class property is a Java primitive type)? We would also like to do it in the other direction, e.g. a Python application constructs an object which is then sent over JMS and modified (or at least understood) by our Java app. Do you have any experience with this topic / hints for us?
Thanks in advance,
Piotr
You don't want to use Serializable objects for this. You'll need a more portable format, such as a text-based format like XML, JSON or CSV. It's simply not worth the effort to try to read serialized Java objects on other platforms.
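As a sketch of the text-based route, the Java side can publish JSON in a JMS TextMessage. This example assumes Jackson for the encoding and a hypothetical Order type; any JSON library will do:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonOverJms {
    // Hypothetical payload type; only simple, portable field types
    public static class Order {
        public long id;
        public String customer;
    }

    public static void send(ConnectionFactory factory, Queue queue, Order order)
            throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String json = mapper.writeValueAsString(order); // {"id":...,"customer":"..."}

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // Plain text on the wire: trivially consumable from Python or Perl
            TextMessage message = session.createTextMessage(json);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```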
You could also use another binary format, such as Google's Protocol Buffers. Alternatively, you can change your Java classes, specifically the ones that you plan to exchange, to implement the Externalizable interface. This lets you take full control over the reading and writing of your Java classes. That way you can still use the Java serialization protocol and workflow, but write and read a more portable format.
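Here is a rough sketch of the Externalizable approach, writing a simple delimited text payload instead of the default field encoding (the Order type is again hypothetical):

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.nio.charset.StandardCharsets;

public class Order implements Externalizable {
    private long id;
    private String customer;

    public Order() {} // Externalizable requires a public no-arg constructor

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        // Write a length-prefixed, pipe-delimited UTF-8 string instead of
        // Java's default field encoding
        byte[] payload = (id + "|" + customer).getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        String[] parts = new String(payload, StandardCharsets.UTF_8).split("\\|", 2);
        id = Long.parseLong(parts[0]);
        customer = parts[1];
    }
}
```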
This lets you incrementally add support for the Python system without really disturbing the rest of the system, especially for messaging, as long as there are no legacy messages left to be processed in your queues when you switch over.
